Basics
Contents
Import Statement
Data Types
Intermediate
Contents
Closures
Recursion
Advanced
Contents
Goroutines
Channels Introduction
More About Concurrency
Contents
- Concurrency vs Parallelism
- Race Conditions
- Deadlocks
- RWMutex
- sync.NewCond
- sync.Once
- sync.Pool
- for select statement
- Quiz-11: Advanced Concurrency
Concurrency vs Parallelism
Introduction
- Concurrency: the ability of a system to handle multiple tasks simultaneously. It involves managing multiple tasks that are in progress at the same time but not necessarily executed at the same instant.
- Parallelism: the simultaneous execution of multiple tasks, typically using multiple processors or cores, to improve performance by running operations at the same time.
- Parallelism is all about executing multiple tasks simultaneously, typically on multiple cores or processors; it is a subset of concurrency.
Code:
package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func main() {
    concurrencyVsParallelism1()
    concurrencyVsParallelism2()
}

// heavyTask simulates a CPU-bound task by spinning through a large loop.
func heavyTask(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Task %d is starting..\n", id)
    for range 100_000_000 {
    }
    fmt.Printf("Task %d is finished at time %v\n", id, time.Now())
}

func concurrencyVsParallelism2() {
    numThreads := 4
    runtime.GOMAXPROCS(numThreads)
    var wg sync.WaitGroup
    for i := range numThreads {
        wg.Add(1)
        go heavyTask(i, &wg)
    }
    wg.Wait()
}

func printNumbers() {
    for i := range 5 {
        fmt.Println(i, ":", time.Now())
        time.Sleep(500 * time.Millisecond)
    }
}

func printLetters() {
    for _, letter := range "ABCDE" {
        fmt.Println(string(letter), ":", time.Now())
        time.Sleep(500 * time.Millisecond)
    }
}

func concurrencyVsParallelism1() {
    go printNumbers()
    go printLetters()
    time.Sleep(3 * time.Second)
}
How is parallelism implemented in Go?
- It's the Go runtime: Go's runtime scheduler can execute goroutines in parallel, taking advantage of multi-core processors.
- We can have processes that execute concurrently without being parallel. That happens on a single-core CPU with time slicing: the single core divides its time among multiple tasks, working on them simultaneously by giving each one a share of time in turn. For example, it might give 200 milliseconds to one task, the next 200 ms to another task, then the next 50 ms back to the first task it left earlier, and so on.
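To see concurrency without parallelism in practice, we can pin the scheduler to a single OS thread. This is a minimal sketch along the lines of the code above (illustrative, not part of the original lesson): with runtime.GOMAXPROCS(1), the goroutines still interleave, but they never run at the same instant.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // Restrict Go to a single OS thread: goroutines are still
    // concurrent (interleaved by the scheduler) but never parallel.
    runtime.GOMAXPROCS(1)

    var wg sync.WaitGroup
    for i := range 4 {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range 50_000_000 {
            }
            fmt.Println("task", i, "done")
        }()
    }
    wg.Wait()
}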
Practical Applications:
- Concurrency Use Cases:
  - I/O-bound tasks
  - Server applications
- Parallelism Use Cases:
  - CPU-bound tasks
  - Scientific computing
Challenges and Considerations:
- Concurrency Challenges:
  - Synchronization: managing shared resources to prevent race conditions.
  - Deadlocks: avoiding situations where tasks are stuck waiting for each other.
- Parallelism Challenges:
  - Data sharing
  - Overhead
  - Performance tuning
Race Conditions
Introduction
A race condition occurs when the outcome of a program depends on the relative timing of uncontrollable events such as thread or goroutine scheduling. It usually happens when multiple threads or goroutines access shared resources concurrently without proper synchronization, leading to unpredictable and incorrect behavior.
Why does it matter?
- Race conditions can cause bugs that are difficult to reproduce and debug, leading to unreliable and inconsistent program behavior.
Code:
package main

import (
    "fmt"
    "sync"
)

func main() {
    mutexStructMain()
}

type counter struct {
    mu    sync.Mutex
    count int
}

func (c *counter) increment() {
    // c.mu.Lock() // --> possible solution: protect the write with a mutex
    // defer c.mu.Unlock()
    c.count++ // unsynchronized write: this is the data race
}

func (c *counter) getValue() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func mutexStructMain() {
    var wg sync.WaitGroup
    counter := &counter{}
    numGoroutines := 100
    for range numGoroutines {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range 1_000_000 {
                counter.increment()
            }
        }()
    }
    wg.Wait()
    fmt.Println("Final Value of counter:", counter.count)
}
Notes
- To check whether the program has a race condition, add the -race flag when running it: go run -race race_conditions.go
- When multiple goroutines access the same value, modify the same value, or otherwise operate on the same type/variable/object at the same time, use the -race flag to find out whether your program has a data race.
- Go provides a built-in race detector tool that helps identify race conditions in your programs. The race detector monitors accesses to shared variables and reports data races during execution. In the output, it shows where the data races occur, including the read and write operations involved.
- We use mutexes, stateful goroutines, or atomic operations to avoid race conditions (see the atomic sketch after this list).
- Best Practices to Avoid Race Conditions:
  - Proper synchronization: use synchronization primitives like mutexes or atomic operations to ensure exclusive access to shared resources.
  - Minimize shared state: reduce the amount of shared state between concurrent operations to lower the risk of race conditions.
  - Encapsulate state: use encapsulation to manage state within structs or functions, limiting exposure to shared data.
  - Code reviews and testing: regularly review code for potential race conditions and use tools like the race detector to identify issues during development.
- Practical Considerations:
  - Complexity of synchronization
  - Avoiding deadlocks
  - Performance impact
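As a sketch of the atomic-operations alternative mentioned in the notes above (this block is illustrative, not the lesson's code), the same counter becomes race-free with sync/atomic and no mutex:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter atomic.Int64 // atomic counter: no mutex needed
    var wg sync.WaitGroup
    for range 100 {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range 1_000_000 {
                counter.Add(1) // atomic increment, safe under concurrency
            }
        }()
    }
    wg.Wait()
    fmt.Println("Final Value of counter:", counter.Load()) // always 100000000
}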
Deadlocks
Introduction
A deadlock is a situation in concurrent computing when two or more processes or goroutines are unable to proceed because each is waiting for the other to release resources. This results in a state where none of the processes or goroutines can make progress.
Deadlocks can cause programs to hang or freeze, leading to unresponsive systems and poor user experience. Understanding and preventing deadlocks is crucial for reliable and efficient concurrent systems.
Code:
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var mu1, mu2 sync.Mutex

    go func() {
        mu1.Lock()
        fmt.Println("Goroutine 1 locked mu1")
        time.Sleep(time.Second)
        mu2.Lock()
        fmt.Println("Goroutine 1 locked mu2")
        mu1.Unlock()
        mu2.Unlock()
    }()

    go func() {
        mu2.Lock()
        fmt.Println("Goroutine 2 locked mu2")
        time.Sleep(time.Second)
        mu1.Lock()
        fmt.Println("Goroutine 2 locked mu1")
        mu2.Unlock()
        mu1.Unlock()
    }()

    // time.Sleep(3 * time.Second)
    // fmt.Println("Main function Completed")
    select {}

    /* CORRECT CODE AVOIDING DEADLOCKS
    One of the solutions: follow the same lock order in every goroutine.

    go func() {
        mu1.Lock()
        fmt.Println("Goroutine 1 locked mu1")
        time.Sleep(time.Second)
        mu2.Lock()
        fmt.Println("Goroutine 1 locked mu2")
        mu1.Unlock()
        mu2.Unlock()
    }()

    go func() {
        mu1.Lock()
        fmt.Println("Goroutine 2 locked mu1")
        time.Sleep(time.Second)
        mu2.Lock()
        fmt.Println("Goroutine 2 locked mu2")
        mu1.Unlock()
        mu2.Unlock()
    }()

    time.Sleep(3 * time.Second)
    fmt.Println("Main function Completed")
    // select {}
    */
}
Causes of Deadlocks: Four Conditions for Deadlocks:
- Mutual Exclusion: at least one resource is held in a non-shareable mode. Only one process or goroutine can use the resource at a time.
- Hold and Wait: a process or goroutine holding at least one resource is waiting to acquire additional resources held by other processes or goroutines.
- No Preemption: resources cannot be forcibly taken away from processes or goroutines. They must be released voluntarily.
- Circular Wait: a set of processes or goroutines are waiting for each other in a circular chain, with each holding a resource that the next one in the chain is waiting for.

Detecting Deadlocks:
- Deadlock Detection Strategies
  - Static Analysis
  - Dynamic Analysis
- Deadlock Detection Tools
- select {}: a blank select statement blocks forever; here it keeps main alive so the deadlock can occur and the Go runtime can report it.
- mutex.Lock() is blocking in nature.
- Deadlock happens when two goroutines each hold one mutex and try to acquire the mutex held by the other.
- Consistent lock order helps us avoid deadlocks. If we do not follow a consistent lock order, we might get a deadlock. By acquiring locks in a consistent order across all goroutines, we avoid the deadlock scenario and ensure that the program runs smoothly.
- Best Practices for Avoiding Deadlocks:
  - Lock ordering
  - Timeouts and deadlock detection (see the TryLock sketch after this list)
  - Resource allocation strategies
- Best Practices and Patterns:
  - Avoid nested locks
  - Use lock-free data structures
  - Keep critical sections short
- Practical Considerations:
  - Complex systems
  - Testing for deadlocks
  - Code reviews
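As a hedged sketch of the timeout idea listed above (illustrative, not from the lesson): since Go 1.18, sync.Mutex has a TryLock method, so a goroutine that cannot get the second lock can release what it holds and retry instead of blocking forever, breaking the hold-and-wait condition.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var mu1, mu2 sync.Mutex
    var wg sync.WaitGroup
    wg.Add(1)

    go func() {
        defer wg.Done()
        for {
            mu1.Lock()
            // TryLock never blocks: it reports whether the lock was acquired.
            if mu2.TryLock() {
                fmt.Println("acquired both locks")
                mu2.Unlock()
                mu1.Unlock()
                return
            }
            // Could not get mu2: release mu1, back off, and retry.
            mu1.Unlock()
            time.Sleep(10 * time.Millisecond)
        }
    }()

    wg.Wait()
}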
RWMutex
Introduction
RWMutex, short for read-write mutex, is a synchronization primitive in Go that allows multiple readers to hold the lock simultaneously while ensuring exclusive access for a single writer. It provides an efficient way to handle concurrent read and write operations, particularly when read operations are frequent and writes are infrequent.
RWMutex is designed to optimize scenarios where multiple goroutines need to read shared data concurrently but write operations are less frequent.
So RWMutex helps improve performance by reducing contention during read operations while still maintaining exclusive access for write operations.
Key Concepts of sync.RWMutex
- Read Lock (RLock): allows multiple goroutines to acquire the read lock simultaneously. It is used when a goroutine needs to read shared data without modifying it.
- Write Lock (Lock): ensures exclusive access to the shared resource; only one goroutine can hold the write lock at a time. All readers and writers are blocked until the write lock is released.
- Unlock (Unlock and RUnlock)
When to use RWMutex
- Read Heavy Workloads
- Shared Data Structures
Code
package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    rwmu    sync.RWMutex
    counter int
)

func readCounter(wg *sync.WaitGroup) {
    defer wg.Done()
    rwmu.RLock() // shared: many readers can hold this at once
    fmt.Println("Read Counter:", counter)
    rwmu.RUnlock()
}

func writeCounter(wg *sync.WaitGroup, value int) {
    defer wg.Done()
    rwmu.Lock() // exclusive: blocks all readers and writers
    counter = value
    fmt.Println("Writing value to counter: Done")
    rwmu.Unlock()
}

func main() {
    var wg sync.WaitGroup
    for range 5 {
        wg.Add(1)
        go readCounter(&wg)
    }
    wg.Add(1)
    time.Sleep(3 * time.Second)
    go writeCounter(&wg, 18)
    wg.Wait()
}
How RWMutex Works
- Read Lock Behavior
- Write Lock Behavior
- Lock Contention and Starvation
- When a write lock is requested, new readers may be blocked while the write lock is pending. Conversely, long-held read locks can delay the acquisition of a write lock. Only one goroutine can acquire the write lock at a time.
- While a goroutine holds the write lock, no other goroutine can acquire either a read or a write lock. For the read lock, however, multiple goroutines can acquire it simultaneously, provided no goroutine holds the write lock.
- Read locks are shared and do not block other readers.
- Starvation means that your write operation (or any other operation) needs to acquire the lock but is left waiting indefinitely for the lock to be released.
Best Practices for Using RWMutex
- Minimize Lock Duration: to avoid blocking other goroutines unnecessarily.
- Avoid Lock Starvation: be mindful of long-held read locks potentially causing write-lock starvation. If write operations are critical, ensure that read operations are not indefinitely blocking writes, because then your write operation will starve.
- Avoid Deadlocks
- Balance Read and Write Operations
Advanced Use Cases:
- Caching with RWMutex
- Concurrent Data Structures
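A minimal sketch of the caching use case above, assuming illustrative names (cache, get, and set are not from the lesson): the RWMutex lets many goroutines read entries concurrently while writes remain exclusive.

package main

import (
    "fmt"
    "sync"
)

// cache is an illustrative read-heavy map guarded by an RWMutex.
type cache struct {
    mu   sync.RWMutex
    data map[string]string
}

func newCache() *cache {
    return &cache{data: make(map[string]string)}
}

func (c *cache) get(key string) (string, bool) {
    c.mu.RLock() // shared: many readers at once
    defer c.mu.RUnlock()
    v, ok := c.data[key]
    return v, ok
}

func (c *cache) set(key, value string) {
    c.mu.Lock() // exclusive: blocks readers and writers
    defer c.mu.Unlock()
    c.data[key] = value
}

func main() {
    c := newCache()
    c.set("lang", "Go")
    if v, ok := c.get("lang"); ok {
        fmt.Println("cached:", v)
    }
}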
sync.NewCond
Introduction
NewCond is a function in Go's sync package that creates a new condition variable. A condition variable is a synchronization primitive that allows goroutines to wait for certain conditions to be met while holding a lock. It is used to signal one or more goroutines that some condition has changed.
Condition variables are essential for more complex synchronization scenarios beyond simple locking mechanisms. They are useful in situations where goroutines need to wait for specific conditions or events before proceeding.
- Key Concepts of sync.NewCond:
  - Condition Variables
  - Mutex and Condition Variables
- Methods of sync.Cond:
  - Wait()
  - Signal()
  - Broadcast()
Code
package main

import (
    "fmt"
    "sync"
    "time"
)

const bufferSize = 5

type buffer struct {
    items []int
    mu    sync.Mutex
    cond  *sync.Cond
}

func newBuffer(size int) *buffer {
    b := &buffer{
        items: make([]int, 0, size),
    }
    b.cond = sync.NewCond(&b.mu)
    return b
}

func (b *buffer) produce(item int) {
    b.mu.Lock()
    defer b.mu.Unlock()
    // Wait in a loop while the buffer is full.
    for len(b.items) == bufferSize {
        b.cond.Wait()
    }
    b.items = append(b.items, item)
    fmt.Println("Produced:", item)
    b.cond.Signal() // signal the consumer that the producer has done its job and produced an item
}

func (b *buffer) consume() int {
    b.mu.Lock()
    defer b.mu.Unlock()
    for len(b.items) == 0 {
        b.cond.Wait()
        // This goroutine stops doing anything and waits for
        // another goroutine to append to the slice.
    }
    item := b.items[0]
    b.items = b.items[1:]
    fmt.Println("Consumed:", item)
    b.cond.Signal()
    return item
}

func producer(b *buffer, wg *sync.WaitGroup) {
    defer wg.Done()
    for i := range 10 {
        b.produce(i + 1000)
        time.Sleep(200 * time.Millisecond)
    }
}

func consumer(b *buffer, wg *sync.WaitGroup) {
    defer wg.Done()
    for range 10 {
        b.consume()
        time.Sleep(1500 * time.Millisecond)
    }
}

func main() {
    buffer := newBuffer(bufferSize)
    var wg sync.WaitGroup
    wg.Add(2)
    go producer(buffer, &wg)
    go consumer(buffer, &wg)
    wg.Wait()
}
Notes:
Key Points:
- Signal is for waking up another goroutine; Wait is for making our goroutine fall asleep.
- sync.NewCond: allows goroutines to wait for or signal changes in program state. It creates a new condition variable associated with the buffer's mutex, which it takes as an argument.
- b.cond.Wait(): makes the goroutine wait until a signal is received. It puts the goroutine to sleep, and Signal wakes up that sleeping goroutine.
- b.cond.Signal(): sends a notification to notify a consumer.
Best Practices for using sync.NewCond
- Ensure Mutex is held
- Avoid spurious wakeups
- Use condition variables judiciously
- Balance signal and broadcast
Advanced Use Cases
- Task Scheduling
- Resource Pools
- Event Notification Systems
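The lesson's buffer code uses only Signal; as an illustrative sketch (not from the original), Broadcast wakes all waiting goroutines at once, which fits one-time events such as a readiness flag:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    ready := false

    var wg sync.WaitGroup
    for i := range 3 {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            for !ready { // guard against spurious wakeups
                cond.Wait()
            }
            mu.Unlock()
            fmt.Println("worker", i, "started")
        }()
    }

    mu.Lock()
    ready = true
    mu.Unlock()
    cond.Broadcast() // wake all waiters, not just one

    wg.Wait()
}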
sync.Once
Intro
A sync.Once ensures that a piece of code is executed only once, regardless of how many goroutines attempt to execute it. It is useful for initializing shared resources or performing one-time setup tasks.
Code
package main

import (
    "fmt"
    "sync"
)

var once sync.Once

func initialize() {
    fmt.Println("This function is executed only once, no matter how many times you call it")
}

func main() {
    var wg sync.WaitGroup
    for i := range 10 {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("Goroutine: #", i)
            once.Do(initialize)
        }()
    }
    wg.Wait()
}
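A common application, sketched here with assumed names (config, getConfig, and the dbURL value are illustrative, not from the lesson), is lazy one-time initialization of a shared resource:

package main

import (
    "fmt"
    "sync"
)

type config struct {
    dbURL string
}

var (
    cfg     *config
    cfgOnce sync.Once
)

// getConfig lazily initializes the shared config exactly once,
// no matter how many goroutines call it.
func getConfig() *config {
    cfgOnce.Do(func() {
        fmt.Println("loading config...")
        cfg = &config{dbURL: "localhost:5432"}
    })
    return cfg
}

func main() {
    var wg sync.WaitGroup
    for range 5 {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println(getConfig().dbURL) // "loading config..." prints only once
        }()
    }
    wg.Wait()
}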
sync.Pool
sync.Pool is a type provided by the Go standard library in the sync package. It implements a pool of reusable objects. The primary purpose of sync.Pool is to reduce the overhead of allocating and deallocating objects frequently by providing a pool where objects can be reused.
Why does it matter?
Object allocation and garbage collection can be expensive, especially in high-performance applications or scenarios with frequent allocations. sync.Pool helps mitigate this by maintaining a pool of objects that can be reused, reducing the need for frequent allocations and garbage collection.
- Key Concepts of sync.Pool:
  - Object Pooling
  - Object Retrieval and Return
- Methods of sync.Pool:
  - Get()
  - Put(interface{})
  - New (optional)
- It works on the LIFO principle.
- The New field will create a new instance if the object pool is empty.
Code
package main

import (
    "fmt"
    "sync"
)

type person struct {
    name string
    age  int
}

func main() {
    poolWithNew()
    poolWithoutNew()
}

func poolWithoutNew() {
    var pool = sync.Pool{}
    pool.Put(&person{name: "John", age: 26})
    person1 := pool.Get().(*person)
    fmt.Println("Person 1:", person1)
    fmt.Printf("Person1: Name: %s | Age: %d\n", person1.name, person1.age)
    pool.Put(person1)
    fmt.Println("Returned Person to Pool")
    person2 := pool.Get().(*person)
    fmt.Println("Got Person 2:", person2)
    person3 := pool.Get() // no New field: Get may return nil when the pool is empty
    if person3 != nil {
        fmt.Println("Got Person 3:", person3)
        person3.(*person).name = "James"
    } else {
        fmt.Println("Sync Pool is empty. So person3 is not assigned anything")
    }
    // Returning objects to the pool again (Put ignores a nil value)
    pool.Put(person2)
    pool.Put(person3)
    person4 := pool.Get().(*person)
    fmt.Println("Got Person 4:", person4)
    person5 := pool.Get()
    if person5 != nil {
        fmt.Println("Got Person 5:", person5)
        person5.(*person).name = "James"
    } else {
        fmt.Println("Sync Pool is empty. So person5 is not assigned anything")
    }
}

func poolWithNew() {
    var pool = sync.Pool{
        New: func() interface{} {
            fmt.Println("Creating a new Person")
            return &person{}
        },
    }
    // Get an object from the pool (New is called because the pool is empty)
    person1 := pool.Get().(*person)
    person1.name = "John"
    person1.age = 18
    fmt.Println("Person 1:", person1)
    fmt.Printf("Person1: Name: %s | Age: %d\n", person1.name, person1.age)
    pool.Put(person1)
    fmt.Println("Returned Person to Pool")
    person2 := pool.Get().(*person)
    fmt.Println("Got Person 2:", person2)
    person3 := pool.Get().(*person)
    fmt.Println("Got Person 3:", person3)
    person3.name = "James"
    // Returning objects to the pool again
    pool.Put(person2)
    pool.Put(person3)
    person4 := pool.Get().(*person)
    fmt.Println("Got Person 4:", person4)
    person5 := pool.Get().(*person)
    fmt.Println("Got Person 5:", person5)
}
Key Notes:
- Best Practices for using sync.Pool:
  - Use for expensive object allocations
  - Keep objects in the pool clean
  - Avoid complex objects
- Advanced Use Cases:
  - Reusing buffers (see the sketch after this list)
  - Managing database connections
  - High-performance applications
- Considerations and Limitations:
  - Garbage collection
  - Not for long-lived objects
  - Thread safety
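A minimal sketch of the buffer-reuse case flagged above (illustrative, not the lesson's code): pooling bytes.Buffer values avoids reallocating a buffer per call, and calling Reset before reuse keeps pooled objects clean.

package main

import (
    "bytes"
    "fmt"
    "sync"
)

var bufPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func render(name string) string {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset()            // clean any leftover data from the previous user
    defer bufPool.Put(buf) // return the buffer for reuse
    fmt.Fprintf(buf, "Hello, %s!", name)
    return buf.String()
}

func main() {
    fmt.Println(render("Go"))
    fmt.Println(render("sync.Pool"))
}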
for select statement
Code
package main

import (
    "fmt"
    "time"
)

func main() {
    ticker := time.NewTicker(1 * time.Second)
    defer ticker.Stop() // release the ticker's resources when main returns
    quit := make(chan string)

    go func() {
        time.Sleep(5 * time.Second)
        close(quit)
    }()

    // The for-select loop keeps handling whichever channel is ready.
    for {
        select {
        case <-ticker.C:
            fmt.Println("Tick")
        case <-quit:
            fmt.Println("Quitting..")
            return
        }
    }
}
Quiz - 11: Advanced Concurrency
REST API Project
Contents
API Planning
In this project we are going to assume that we have been contracted to create a backend server/API for a school. The school is our client, and we are going to plan the API as per our client's requirements.
So the first stage is understanding the project requirements.
Project Goal:
Create an API for a school management system that administrative staff can use to manage students, teachers, and other staff members.
Key Requirements:
- Addition of student/teacher/staff/exec entry
- Modification of student/teacher/staff/exec entry
- Delete student/teacher/staff/exec entry
- Get list of all students/teachers/staff/execs
- Authentication: login, logout
- Bulk Modifications: students/teachers/staff/execs
- Class Management:
- Total count of a class with class teacher
- List of all students in a class with class teacher
Security and Rate Limiting:
- Rate Limit the application
- Password reset mechanisms (forgot password, update password)
- Deactivate user
Fields:
| Student | Teacher | Executives |
| --- | --- | --- |
| First Name | First Name | First Name |
| Last Name | Last Name | Last Name |
| Class | Subject | Role |
| | Class | |
| | | Username |
| | | Password |
Endpoints
Executives
- GET /execs: Get list of executives
- POST /execs: Add a new executive
- PATCH /execs: Modify multiple executives
- GET /execs/{id}: Get a specific executive
- PATCH /execs/{id}: Modify a specific executive
- DELETE /execs/{id}: Delete a specific executive
- POST /execs/login: Login
- POST /execs/logout: Logout
- POST /execs/forgotpassword: Forgot Password
- POST /execs/resetpassword/reset/{resetcode}: Reset Password
Students
- GET /students: Get list of students
- POST /students: Add a new student
- PATCH /students: Modify multiple students
- DELETE /students: Delete multiple students
- GET /students/{id}: Get a specific student
- PATCH /students/{id}: Modify a specific student
- PUT /students/{id}: Update a specific student
- DELETE /students/{id}: Delete a specific student
Teachers
- GET /teachers: Get list of teachers
- POST /teachers: Add a new teacher
- PATCH /teachers: Modify multiple teachers
- DELETE /teachers: Delete multiple teachers
- GET /teachers/{id}: Get a specific teacher
- PATCH /teachers/{id}: Modify a specific teacher
- PUT /teachers/{id}: Update a specific teacher
- DELETE /teachers/{id}: Delete a specific teacher
- GET /teachers/{id}/students: Get students of a specific teacher
- GET /teachers/{id}/studentcount: Get student count for a specific teacher
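To make the plan concrete, here is a sketch of how a few of these endpoints could be registered, under the assumption that the server uses Go 1.22's method-and-wildcard routing patterns in net/http (the handlers are illustrative placeholders, not the project's actual code):

package main

import (
    "fmt"
    "net/http"
)

func main() {
    mux := http.NewServeMux()

    // Go 1.22+ patterns support an HTTP method and {wildcards}.
    mux.HandleFunc("GET /teachers", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "list of teachers")
    })
    mux.HandleFunc("POST /teachers", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusCreated)
        fmt.Fprintln(w, "teacher created")
    })
    mux.HandleFunc("GET /teachers/{id}/students", func(w http.ResponseWriter, r *http.Request) {
        id := r.PathValue("id") // extract the {id} wildcard
        fmt.Fprintf(w, "students of teacher %s\n", id)
    })

    http.ListenAndServe(":8080", mux)
}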
Best Practices and Common Pitfalls
- Best Practices:
  - Modularity
  - Documentation
  - Error Handling
  - Security
  - Testing
- Common Pitfalls:
  - Overcomplicating the API
  - Ignoring Security
  - Poor Documentation
  - Inadequate Testing
By breaking down project requirements into tasks and subsequently into endpoints, you create a clear roadmap for development. Following best practices and avoiding common pitfalls will ensure your API is robust, secure and easy to use.
How Internet Works
Contents
- URI/URL
- Request Response Cycle
- What is Frontend Dev/ Client Side
- What is Backend Dev/ API/ Server Side
- HTTP-1/2/3, HTTPS
- Quiz-12: Internet Quiz
URI / URL
How Internet Works
The Internet is a global network of interconnected computers that communicate using standardized protocols.
Key Components
- Clients and Servers
- Protocols
- IP Addresses
- Domain Name System (DNS)
A Web Request's Journey
- Step-1: Entering a URL
- Step-2: DNS Lookup, DNS Server Interaction
- Step-3: Establishing a TCP connection
  - browser sends a TCP SYN (synchronize) packet to the server.
  - server responds with a SYN-ACK (synchronize-acknowledge) packet.
  - browser sends an ACK (acknowledgement) packet, completing the three-way handshake.
- Step-4: Sending an HTTP Request
- Step-5: Server Processing and Response
- Step-6: Rendering the Webpage
URI & URL
URI (Uniform Resource Identifier)
Types:
- URL (Uniform Resource Locator)
- URN (Uniform Resource Name)
Components of a URL
- Scheme
- Host
- Port
- Path
- Query
- Fragment
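As an illustration of these components (the example URL itself is made up), Go's net/url package splits a URL into exactly these parts:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    u, err := url.Parse("https://example.com:8080/students?class=10#top")
    if err != nil {
        panic(err)
    }
    fmt.Println("Scheme:  ", u.Scheme)     // https
    fmt.Println("Host:    ", u.Hostname()) // example.com
    fmt.Println("Port:    ", u.Port())     // 8080
    fmt.Println("Path:    ", u.Path)       // /students
    fmt.Println("Query:   ", u.RawQuery)   // class=10
    fmt.Println("Fragment:", u.Fragment)   // top
}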
Request Response Cycle
Introduction
The request-response cycle is the fundamental process through which a client, typically a web browser, communicates with a server to request and receive resources. The key components of the request-response cycle include the client, the server, and the protocol.
Key Components
- Client
- Server
- Protocol
Steps in the Request-Response Cycle:
- Client Sends a Request
- DNS Resolution
- Establishing a Connection
- Server Receives the Request
- Server Sends a Response
- Client Receives the response
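A minimal client-side sketch of one full cycle (using example.com as a stand-in server, not from the original notes): the net/http client sends the request, and the response carries the status code and headers discussed below.

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Client sends a GET request; DNS lookup and the TCP/TLS
    // connection are handled by the http package internally.
    resp, err := http.Get("https://example.com/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("Status:", resp.Status)                           // e.g. 200 OK
    fmt.Println("Content-Type:", resp.Header.Get("Content-Type")) // a response header
    body, _ := io.ReadAll(resp.Body)
    fmt.Println("Body length:", len(body))
}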
Note Points:
- HTTP Request Components
- HTTP Response Components
- HTTP Methods: GET, POST, PUT, PATCH, DELETE
- Status Codes (reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status)
- Headers:
  - Request Headers
  - Response Headers
- Practical Use Cases and Examples:
  - Accessing a Webpage
  - Submitting a Form
  - API Calls
- Best Practices:
  - Optimize Requests
  - Handle Errors Gracefully
  - Secure Communications
Frontend / Client-Side
The frontend, also known as the client side, refers to the part of a web application that users interact with directly. It includes everything users experience in their web browsers or on mobile devices. The frontend is responsible for the presentation and behavior of a website or web application. It involves designing and implementing user interfaces, handling user interactions, and presenting data retrieved from the backend.
Frontend (Client-Side)
- User Interface (UI)
- User Experience (UX)
- Technologies Used: HTML, CSS, and JavaScript
- Frameworks and Libraries: React, Vue.js, Angular
How Frontend interacts with backend
- Client-Server Communication
- HTTP Request and Responses
- APIs
- Asynchronous Operations
- AJAX (Asynchronous JavaScript and XML)
- Fetch API
Practical examples of frontend applications
- Static Websites
- Dynamic Web Applications
- Single-Page Applications (SPAs)
Frontend Development Best Practices
- Responsive Design
- Definition
- Techniques
- Performance Optimization
- Definition
- Techniques
- Accessibility
- Definition
- Techniques
Backend / Server Side
The backend, also known as the server side, refers to the part of a web application that runs on the server and is responsible for processing requests, managing data, and performing application logic; the complete application logic resides on the server, in your server-side application. The backend handles the server-side operations that support the functionality of a web application: it processes requests from the frontend, interacts with databases, performs computations, and sends responses back to the client.
Key Components of Backend Development
- Server
- Application Logic
- Database
- APIs
How Backend interacts with Frontend
- Client-Server Communication
- HTTP Requests and Responses
- APIs
- Data Handling
- Request Processing
- Response Generation
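A minimal sketch of request processing and response generation on the server side (the /hello route and JSON body are illustrative assumptions, not from the notes): a Go handler reads the parsed request and writes the response back to the client.

package main

import (
    "encoding/json"
    "net/http"
)

func main() {
    // Request processing: the handler inspects the parsed request...
    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        name := r.URL.Query().Get("name") // read a query parameter
        if name == "" {
            name = "world"
        }
        // ...and response generation: set headers, status, and body.
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(map[string]string{"message": "hello " + name})
    })
    http.ListenAndServe(":8080", nil)
}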
HTTP/1, HTTP/2, HTTP/3 | HTTPS
HTTP/1.0
- 1996
- Features
- Request-Response Model
- Stateless
- Connection
HTTP/1.1
-
1999
-
Features
- Persistent Connections
- Pipelining
- Additional Headers
-
Limitations
- Head Of Line Blocking
- Limited Multiplexing
HTTP/2
-
2015
-
Features
- Binary Protocol
- Multiplexing
- Header Compression
- Stream Prioritization
- Server Push
-
Advantages
- Reduced Latency
- Efficient Use of Connections
HTTP/3
- 2020
- Features
- Based on QUIC
- UDP Based
- Built-In Encryption
- Stream Multiplexing
- Advantages
- Faster Connection Establishment
- Improved Resilience