You type https://google.com into your browser and press Enter.
In about 200 milliseconds, a fully rendered page appears.
But what actually happened in those 200 milliseconds?
Most developers — even experienced ones — have a vague idea. ‘DNS happens, then TCP, then… HTTP stuff.’ But if you’re building backends, designing APIs, debugging slow responses, or preparing for system design interviews, ‘vague’ is not good enough.
In this post, we’re going to trace every single thing that happens — from the moment you press Enter to the moment the response arrives — explained in plain language with real examples and actual HTTP messages.
Let’s go.
The 7 Stages at a Glance
Before we dive deep into each stage, here’s the full picture:
| Stage | What Happens | Who Does It |
| 1. URL Parsing | Browser breaks down the URL into parts | Browser |
| 2. DNS Lookup | Domain name is resolved to an IP address | OS + DNS Servers |
| 3. TCP Handshake | A reliable connection is established | Client + Server |
| 4. TLS Handshake | Connection is encrypted (HTTPS only) | Client + Server |
| 5. HTTP Request | Client sends the actual request message | Browser / HTTP Client |
| 6. Server Processing | Server receives, routes, processes, and responds | Your Backend Code |
| 7. Connection End | Connection is closed or kept alive for future requests | Client + Server |
| 💡 As a backend developer, you directly control Stage 6 (Server Processing). But understanding Stages 1-5 is what lets you diagnose slow APIs, set correct headers, design better systems, and answer senior-level interview questions. |
Stage 1 URL Parsing
Before anything goes over the network, the browser breaks down what you typed into structured parts.
Take this URL: https://api.example.com:443/users/42?format=json#profile
| Part | Value | What it means |
| Scheme | https | Use HTTPS protocol (encrypted HTTP) |
| Host | api.example.com | The domain name to connect to |
| Port | 443 | Port number (443 is default for HTTPS, 80 for HTTP) |
| Path | /users/42 | The specific resource being requested |
| Query | ?format=json | Additional parameters passed to the server |
| Fragment | #profile | Client-side anchor — never sent to the server |
| 🔑 The fragment (#profile) is NEVER sent to the server. It is processed entirely by the browser. Your backend will never see it. This is a classic interview gotcha. |
The browser also checks a few things at this stage:
- Is this a valid URL format?
- Is the scheme supported (http, https, ftp, etc.)?
- If no port is specified — assume 80 for HTTP, 443 for HTTPS
- Encode any special characters in the path or query using percent-encoding (spaces become %20, etc.)
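You can watch this breakdown happen with the WHATWG `URL` class built into Node.js and every modern browser. Note two details it handles for you: default ports are normalized away, and percent-encoding is applied automatically.

```javascript
// Parse the example URL from above into its structured parts.
const url = new URL('https://api.example.com:443/users/42?format=json#profile');

console.log(url.protocol);                   // 'https:'  (the scheme, with trailing colon)
console.log(url.hostname);                   // 'api.example.com'  (what DNS will resolve)
console.log(url.port);                       // ''  (443 is the HTTPS default, so it is normalized away)
console.log(url.pathname);                   // '/users/42'
console.log(url.searchParams.get('format')); // 'json'
console.log(url.hash);                       // '#profile'  (stays in the browser, never sent)

// Percent-encoding happens automatically:
console.log(new URL('https://api.example.com/a b').pathname); // '/a%20b'
```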
Stage 2 DNS Lookup — Translating Domain to IP Address
Your computer does not know where ‘api.example.com’ is. It only understands IP addresses — numerical addresses like 142.250.80.46. DNS (Domain Name System) is the phonebook that translates one to the other.
The DNS Resolution Chain
DNS lookup does not happen in one step. It’s a chain of queries, each one more specific than the last:
| Step | Where it checks | What it does |
| 1 | Browser DNS cache | Did I recently look up this domain? Use cached IP. |
| 2 | OS DNS cache | Has this machine looked it up? Check /etc/hosts and OS cache. |
| 3 | Router / ISP DNS | Ask the local DNS resolver (usually your router or ISP). |
| 4 | Root DNS servers | 13 root server addresses (each backed by hundreds of anycast instances worldwide). Returns the address of the TLD nameserver. |
| 5 | TLD nameserver (.com, .in) | Returns address of the domain’s authoritative nameserver. |
| 6 | Authoritative nameserver | Returns the actual IP address for the domain. |
# You can trace DNS resolution yourself using dig:
$ dig api.example.com
# Output shows:
;; QUESTION SECTION:
;api.example.com. IN A
;; ANSWER SECTION:
api.example.com. 300 IN A 93.184.216.34
# 300 = TTL (Time To Live) in seconds
# A record = IPv4 address
# 93.184.216.34 = the IP the browser will connect to
DNS Caching and TTL
Every DNS record has a TTL (Time To Live) — how many seconds the result can be cached before a fresh lookup is needed. This is set by the domain owner.
| TTL Value | What it means | Use case |
| 60 seconds | Very short — re-check DNS every minute | Rapid failover, frequently changing IPs |
| 300 seconds | 5 minutes — common for APIs | Balance between freshness and speed |
| 3600 seconds | 1 hour — standard for most websites | Stable servers, good caching |
| 86400 seconds | 24 hours — long-lived | CDN origins, very stable infrastructure |
| 🌍 As a backend developer: if you ever change your server’s IP address (migrate hosting, add a load balancer), DNS propagation delay means some users will still hit the old IP for up to TTL seconds. Always lower your TTL to 60 seconds BEFORE migrating, then raise it back after. |
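The TTL logic above is simple enough to sketch in a few lines. This is an illustrative toy cache (not a real resolver), with an injectable clock so the expiry behaviour is easy to verify:

```javascript
// Minimal sketch of TTL-based DNS caching. Real resolvers also handle
// negative caching, record types, and concurrent lookups.
class DnsCache {
  constructor(now = () => Date.now()) {
    this.now = now;           // injectable clock, so expiry is testable
    this.entries = new Map(); // domain -> { ip, expiresAt }
  }
  set(domain, ip, ttlSeconds) {
    this.entries.set(domain, { ip, expiresAt: this.now() + ttlSeconds * 1000 });
  }
  get(domain) {
    const entry = this.entries.get(domain);
    if (!entry) return null;             // never looked up: go ask a resolver
    if (this.now() >= entry.expiresAt) { // TTL elapsed: a fresh lookup is needed
      this.entries.delete(domain);
      return null;
    }
    return entry.ip;
  }
}
```

With a 300-second TTL, the cached IP is served for 5 minutes and then the entry silently expires, which is exactly the "propagation delay" behaviour described in the callout above.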
What the Browser Gets After DNS
After DNS resolution, the browser now has:
- The IP address to connect to (e.g. 93.184.216.34)
- The port to connect on (443 for HTTPS)
- Everything it needs to open a network connection
Stage 3 TCP Handshake — Establishing a Reliable Connection
HTTP is built on top of TCP (Transmission Control Protocol). Before any HTTP data is sent, TCP must first establish a reliable connection between client and server.
This happens through what’s called the 3-way handshake.
Why TCP? Why Not Just Send the Request Directly?
| 🤔 The internet is not reliable by default. Packets can get lost, arrive out of order, or get duplicated. TCP adds a layer on top that guarantees: packets arrive, they arrive in order, and lost packets are retransmitted. |
HTTP needs this guarantee because a web page arriving in scrambled order would be useless. TCP ensures the data arrives intact.
The 3-Way Handshake — Step by Step
Client Server
| |
| -------- SYN --------> | Step 1: Client sends SYN
| (seq=100, SYN=1) | "I want to connect, my sequence starts at 100"
| |
| <------- SYN-ACK -------- | Step 2: Server responds SYN-ACK
| (seq=300, ack=101, SYN=1) | "OK, I acknowledge. My sequence starts at 300"
| |
| -------- ACK --------> | Step 3: Client confirms
| (ack=301) | "Got it. Connection established."
| |
| === CONNECTION ESTABLISHED ===
| |
| Step | Packet | Meaning |
| 1 | SYN | Client says: ‘I want to start a conversation. My sequence number is X.’ |
| 2 | SYN-ACK | Server says: ‘Heard you. I acknowledge X+1. My sequence number is Y.’ |
| 3 | ACK | Client says: ‘Heard you. I acknowledge Y+1. Let’s talk.’ |
| 💡 The TCP handshake costs one full round trip (RTT) of latency before any HTTP data is sent. On a connection with 50ms RTT, this alone adds 50ms to every new connection. This is why HTTP keep-alive and connection pooling matter so much. |
What is RTT (Round Trip Time)?
RTT is the time it takes for a packet to travel from client to server and back. If you’re in Mumbai connecting to a server in the US, RTT might be 150-200ms. If you’re connecting to a server in the same city, RTT might be 1-5ms.
Every network handshake step costs 1 RTT. That’s why reducing round trips is one of the core goals of HTTP/2 and HTTP/3.
Stage 4 TLS Handshake — Encrypting the Connection (HTTPS Only)
If the URL uses HTTPS (which it should — always), after the TCP handshake there is an additional TLS (Transport Layer Security) handshake.
TLS is what puts the ‘S’ in HTTPS. It ensures that:
- All data sent between client and server is encrypted
- The server is who it claims to be (certificate verification)
- Data cannot be tampered with in transit
TLS 1.3 Handshake (Modern — 1 Round Trip)
TLS 1.3 (current standard) is much faster than older versions — it completes in just 1 additional round trip after TCP:
TLS Handshake
Client Server
| |
| ------- ClientHello -----------> | Client sends:
| (TLS version, cipher suites, | - Supported TLS versions
| client random, key_share) | - Supported encryption algorithms
| | - Public key share
| |
| <------ ServerHello + Certificate -- | Server sends:
| (chosen cipher, server random, | - Chosen encryption algorithm
| certificate, key_share, Finished) | - SSL certificate (with public key)
| | - Its own key share
| | - "Finished" (already encrypted!)
| |
| ------- Finished + Request ------> | Client:
| (Finished, then HTTP Request) | - Verifies server certificate
| | - Sends Finished
| | - Immediately sends HTTP request
| |
| === ENCRYPTED CONNECTION + DATA ===
What is an SSL Certificate?
An SSL certificate is a digital document that proves the server’s identity. It contains:
- The domain name it’s issued for (e.g. api.example.com)
- The certificate authority (CA) that issued it (e.g. Let’s Encrypt, DigiCert)
- The server’s public key
- An expiry date
The browser verifies the certificate by checking it against trusted Certificate Authorities (CAs) built into the OS and browser. If the cert is invalid, expired, or for the wrong domain — you see the ‘Your connection is not private’ warning.
| TLS Version | Round Trips | Status | Notes |
| TLS 1.0 | 2 RTT | Deprecated | Insecure — disabled on modern browsers |
| TLS 1.1 | 2 RTT | Deprecated | Insecure — disabled on modern browsers |
| TLS 1.2 | 2 RTT | Still used | Secure but slower — 2 extra round trips |
| TLS 1.3 | 1 RTT | Recommended | Faster and more secure — current standard |
| ⚠️ If your server still supports TLS 1.0 or 1.1, disable them immediately. They have known vulnerabilities (BEAST against TLS 1.0, and POODLE-style attacks via the SSL 3.0 fallback). Modern production servers should support TLS 1.2 minimum and TLS 1.3 preferred. |
Stage 5 The HTTP Request — What the Client Actually Sends
Now — after DNS, TCP, and TLS — the actual HTTP request is finally sent. This is the message your browser or API client sends to the server.
An HTTP request has three parts: the request line, headers, and optionally a body.
Anatomy of a Real HTTP Request
HTTP REQUEST
POST /api/users HTTP/1.1
Host: api.example.com
Content-Type: application/json
Content-Length: 50
Authorization: Bearer eyJhbGciOiJIUzI1NiJ9...
Accept: application/json
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
{"name": "Rahul Sharma", "email": "rahul@dev.com"}
Breaking Down Every Line
| Line / Header | What it means |
| POST /api/users HTTP/1.1 | Method=POST, Path=/api/users, using HTTP version 1.1 |
| Host | Which domain this request is for — required in HTTP/1.1 |
| Content-Type | Format of the request body — tells server how to parse it |
| Content-Length | Size of the body in bytes — server knows when body ends |
| Authorization | Credentials to authenticate the request (Bearer token here) |
| Accept | What response formats the client can understand |
| Accept-Encoding | Client supports gzip/brotli compressed responses — reduces size |
| Connection: keep-alive | Don’t close the TCP connection after this request — reuse it |
| User-Agent | What client is making the request (browser, curl, Postman, etc.) |
| (blank line) | Separates headers from body — required by HTTP spec |
| {…} (body) | The actual data being sent — only for POST, PUT, PATCH |
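It helps to remember that this whole request is just text on a TCP socket: CRLF-terminated lines, a blank line, then the body. Here is an illustrative serializer (not a production HTTP client) that builds the on-the-wire form, computing Content-Length from the body's byte length:

```javascript
// Sketch: serialize an HTTP/1.1 request into its on-the-wire text form.
function serializeRequest({ method, path, headers, body = '' }) {
  const all = { ...headers };
  if (body) all['Content-Length'] = Buffer.byteLength(body); // bytes, not characters
  const lines = [`${method} ${path} HTTP/1.1`];
  for (const [name, value] of Object.entries(all)) {
    lines.push(`${name}: ${value}`);
  }
  lines.push('', body); // the required blank line between headers and body
  return lines.join('\r\n');
}

const raw = serializeRequest({
  method: 'POST',
  path: '/api/users',
  headers: { Host: 'api.example.com', 'Content-Type': 'application/json' },
  body: '{"name": "Rahul Sharma", "email": "rahul@dev.com"}',
});
```

Printing `raw` reproduces the shape of the example above, line for line.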
HTTP Methods — A Quick Reminder
| Method | Has Body? | Purpose |
| GET | No | Retrieve a resource — never modify data |
| POST | Yes | Create a new resource or trigger an action |
| PUT | Yes | Replace a resource completely |
| PATCH | Yes | Partially update a resource |
| DELETE | No | Delete a resource |
| HEAD | No | Like GET but returns only headers — no body |
| OPTIONS | No | Ask server what methods are allowed (used in CORS preflight) |
| 💡 GET requests should NEVER modify data on the server. They must be safe and idempotent. This is not just a convention — caches, browsers, and CDNs rely on this guarantee. (We will cover idempotency in depth in Part 2 of this series.) |
Stage 6 Server Processing — What Your Backend Actually Does
This is the stage you as a backend developer own entirely. Let’s trace exactly what happens inside the server after the HTTP request arrives.
The Server Processing Pipeline
| Sub-stage | What happens | Example in Express/Node.js |
| 1. Accept connection | OS TCP stack accepts the connection, hands to app | http.createServer() callback fires |
| 2. Parse request | Framework parses method, path, headers, body | req.method, req.headers, req.body |
| 3. Middleware chain | Request passes through middleware in order | app.use(cors()), app.use(json()) |
| 4. Authentication | Verify who is making the request | Verify JWT token, check session |
| 5. Routing | Match URL path to the correct handler function | app.post(‘/api/users’, handler) |
| 6. Business logic | Execute the actual feature code | Validate input, call service layer |
| 7. Database query | Fetch or write data | User.create({name, email}) |
| 8. Build response | Construct the HTTP response object | res.status(201).json(user) |
| 9. Send response | Serialize and send data back over TCP connection | Express calls res.end() internally |
A Real Node.js + Express Example
Let’s see what this pipeline looks like in actual code:
const express = require('express');
const jwt = require('jsonwebtoken');    // used by the authenticate middleware below
const User = require('./models/user');  // your Mongoose User model (path illustrative)
const app = express();
// ── Middleware (runs for every request) ──────────────────
app.use(express.json()); // parse JSON body
app.use(require('cors')()); // add CORS headers
app.use(require('morgan')('dev')); // log every request
// ── Authentication middleware ────────────────────────────
function authenticate(req, res, next) {
const authHeader = req.headers['authorization'];
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).json({ error: 'No token provided' });
}
const token = authHeader.split(' ')[1];
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET);
req.user = decoded; // attach user to request
next(); // pass to next middleware
} catch (err) {
return res.status(401).json({ error: 'Invalid token' });
}
}
// ── Route handler (POST /api/users) ─────────────────────
app.post('/api/users', authenticate, async (req, res) => {
try {
// Step 1: validate input
const { name, email } = req.body;
if (!name || !email) {
return res.status(400).json({
error: 'name and email are required'
});
}
// Step 2: check if user already exists
const existing = await User.findOne({ email });
if (existing) {
return res.status(409).json({
error: 'Email already registered'
});
}
// Step 3: create user in database
const user = await User.create({ name, email });
// Step 4: send response
return res.status(201).json({
message: 'User created successfully',
data: { id: user._id, name: user.name, email: user.email }
});
} catch (err) {
console.error(err);
return res.status(500).json({ error: 'Internal server error' });
}
});
app.listen(3000, () => console.log('Server running on port 3000'));
| 🌍 Every professional backend API follows this pattern — middleware chain → authentication → routing → business logic → database → response. Understanding this pipeline is what lets you debug where a request is failing and design clean, maintainable APIs. |
The HTTP Response — What the Server Sends Back
After the server processes the request, it sends an HTTP response. Just like the request, it has three parts: the status line, headers, and optionally a body.
HTTP RESPONSE
HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Content-Length: 89
Date: Tue, 31 Mar 2026 06:30:00 GMT
Server: nginx/1.24.0
Cache-Control: no-store
X-Request-Id: a1b2c3d4-e5f6-7890-abcd-ef1234567890
Connection: keep-alive
{
"message": "User created successfully",
"data": {
"id": "64f1a2b3c4d5e6f7a8b9c0d1",
"name": "Rahul Sharma",
"email": "rahul@dev.com"
}
}
Breaking Down the Response
| Line / Header | What it means |
| HTTP/1.1 201 Created | HTTP version + status code + human-readable status message |
| Content-Type | Format of the response body — client uses this to parse it |
| Content-Length | How many bytes the body is — client knows when response ends |
| Date | When the server generated the response |
| Server | What server software handled the request (often hidden in production) |
| Cache-Control: no-store | Don’t cache this response — used for sensitive data like user creation |
| X-Request-Id | Unique ID for this request — critical for distributed tracing and debugging |
| Connection: keep-alive | Keep the TCP connection open for future requests |
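The response is symmetric with the request: status line, headers, blank line, body. An illustrative parser (real clients also handle chunked transfer encoding, case folding edge cases, and streaming) makes the structure explicit:

```javascript
// Sketch: split a raw HTTP response into its three parts.
function parseResponse(raw) {
  const sep = raw.indexOf('\r\n\r\n');  // the blank line ends the headers
  const head = raw.slice(0, sep);
  const body = raw.slice(sep + 4);
  const [statusLine, ...headerLines] = head.split('\r\n');
  const statusCode = Number(statusLine.split(' ')[1]); // 'HTTP/1.1 201 Created' -> 201
  const headers = {};
  for (const line of headerLines) {
    const i = line.indexOf(':');
    // Header names are case-insensitive, so normalize to lowercase keys.
    headers[line.slice(0, i).toLowerCase()] = line.slice(i + 1).trim();
  }
  return { statusCode, headers, body };
}
```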
HTTP Status Codes — The Right Ones for the Right Situation
| Code | Name | Use it when… |
| 200 | OK | Request succeeded, response body has the data |
| 201 | Created | Resource was successfully created (POST) |
| 204 | No Content | Success but no body — used for DELETE or PUT with no return |
| 400 | Bad Request | Client sent invalid data — missing field, wrong format |
| 401 | Unauthorized | Not authenticated — no token or invalid token |
| 403 | Forbidden | Authenticated but not allowed — wrong permissions |
| 404 | Not Found | Resource does not exist |
| 409 | Conflict | Duplicate — email already registered, username taken |
| 422 | Unprocessable Entity | Data is valid format but fails business validation |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Something broke on the server — never expose details |
| 503 | Service Unavailable | Server is down or overloaded — used with load balancers |
| ⚠️ Never return 200 OK for an error. This is the most common mistake junior developers make. If a user tries to register with an existing email, return 409 — not 200 with an error message in the body. APIs must use status codes correctly so clients (browsers, mobile apps, other services) can handle errors properly. |
Stage 7 Connection Termination or Keep-Alive
After the response is sent, the connection needs to be either closed or kept open for future requests.
| Connection Type | How it works | When to use |
| Close (HTTP/1.0 default) | TCP connection is closed after every request. New DNS+TCP+TLS for next request. | Legacy, avoid in production |
| Keep-Alive (HTTP/1.1 default) | TCP connection stays open. Multiple requests can reuse the same connection. | Standard — reduces latency significantly |
| HTTP/2 Multiplexing | Multiple requests sent simultaneously over ONE connection — no head-of-line blocking. | APIs with many parallel requests |
| HTTP/3 (QUIC) | Built on UDP instead of TCP — even faster connection establishment. | Cutting edge — major CDNs support it |
| 💡 HTTP/1.1 keep-alive is the minimum you should be using in production. It means the DNS + TCP + TLS overhead is paid only once, and subsequent requests to the same server reuse the connection. This alone can cut response times significantly for users making multiple API calls. |
The Complete Picture — Total Latency Breakdown
Let’s put it all together. How much time does each stage take for a typical HTTPS request from India to a US server (RTT ~150ms)?
| Stage | Latency | Notes |
| DNS Lookup | 0-150ms | 0ms if cached, up to 150ms if full resolution needed |
| TCP Handshake | 150ms | 1 RTT — unavoidable for new connections |
| TLS 1.3 Handshake | 150ms | 1 RTT — unavoidable for HTTPS |
| HTTP Request | ~0ms | Tiny — just the time to transmit the data |
| Server Processing | 10-100ms | Depends on your code, DB queries, business logic |
| HTTP Response | 150ms | 1 RTT to get data back to client |
| TOTAL (new conn) | 460-700ms | For subsequent requests on keep-alive: only ~160-250ms |
| 🚀 For a developer in India building APIs used by Indian users (RTT ~5-30ms to nearby servers), these numbers are much better. Hosting your backend in Mumbai/Singapore instead of US East can cut latency by 10x for Indian users. Always deploy close to your users. |
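The arithmetic in the table above reduces to a back-of-envelope model: each handshake costs one RTT, and keep-alive connections skip DNS, TCP, and TLS entirely. A sketch (assumes TLS 1.3 at 1 RTT, one round trip for request plus response, and ignores data-transfer time):

```javascript
// Back-of-envelope latency model for a single HTTPS request.
function requestLatencyMs({ rttMs, dnsMs = 0, serverMs = 50, newConnection = true }) {
  const tcpHandshake = newConnection ? rttMs : 0; // SYN / SYN-ACK / ACK
  const tlsHandshake = newConnection ? rttMs : 0; // TLS 1.3 = 1 RTT
  const requestResponse = rttMs;                  // request out, response back
  return dnsMs + tcpHandshake + tlsHandshake + requestResponse + serverMs;
}

requestLatencyMs({ rttMs: 150, dnsMs: 150, serverMs: 100 });          // cold start: 700
requestLatencyMs({ rttMs: 150, serverMs: 10, newConnection: false }); // keep-alive: 160
```

Plug in a 5ms RTT to a nearby server and the same cold-start request drops to roughly `serverMs + 15ms`, which is why deployment region dominates everything else.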
How to Reduce Latency at Each Stage
| Stage | How to reduce latency |
| DNS | Lower TTL before migrations. Use fast DNS providers (Cloudflare 1.1.1.1). Enable DNS prefetching. |
| TCP | Enable TCP Fast Open. Use connection pooling. Avoid creating new connections per request. |
| TLS | Use TLS 1.3. Enable TLS session resumption. Use OCSP stapling. |
| Server | Optimise DB queries. Add caching (Redis). Use async code. Avoid N+1 queries. |
| Response | Enable gzip/brotli compression. Use CDN for static assets. Set proper Cache-Control headers. |
| Connection | Always use HTTP keep-alive. Migrate to HTTP/2 for APIs with many requests. |
What Every Backend Developer Must Know From This
Here are the practical takeaways that will directly affect the code you write and the systems you build:
1. You Control Stage 6 — Make It Count
Your server’s response time (Stage 6) is the one thing you fully control. DNS, TCP, and TLS are largely outside your hands. Focus on:
- Efficient database queries — use indexes, avoid N+1 queries
- Caching repeated computations — Redis for sessions, expensive calculations
- Async processing — don’t block the event loop on slow operations
- Streaming large responses instead of buffering them
2. Always Use HTTPS in Production
There is no valid reason to use plain HTTP in production in 2026. TLS 1.3 adds only 1 RTT of overhead and protects your users from man-in-the-middle attacks. Use Let’s Encrypt — it’s free.
3. Set the Right Headers
The headers your server sends directly affect caching, security, and performance:
- Content-Type — always set this correctly
- Cache-Control — tell clients and CDNs how to cache your responses
- X-Request-Id — log this on every request for distributed tracing
- CORS headers — required for browser cross-origin requests (we cover this in Part 5)
4. Use Status Codes Correctly
Your API’s status codes are its contract with clients. Use them correctly — 201 for creation, 204 for empty success, 409 for conflicts, 401 vs 403 for auth issues. Never return 200 for errors.
5. Understand Connection Pooling
Every time your server talks to a database, it needs a TCP connection to that database too. Opening a new connection per request is expensive. Use a connection pool — most database libraries (Mongoose, pg, mysql2) support this and have it on by default.
Node.js
// mongoose connection pooling example
const mongoose = require('mongoose');
mongoose.connect(process.env.MONGODB_URI, {
maxPoolSize: 10, // maintain up to 10 TCP connections to MongoDB
minPoolSize: 2, // always keep at least 2 connections open
});
// Now all requests reuse these 10 connections
// instead of opening a new one each time
Interview Questions on HTTP Lifecycle
This topic appears in backend interviews, system design rounds, and even frontend interviews at senior level. Here are the most common questions and what a great answer covers:
| 🎯 Q: What happens when you type a URL and press Enter? |
Great answer covers: URL parsing → DNS lookup (with caching) → TCP 3-way handshake → TLS handshake (for HTTPS) → HTTP request sent → server processes (routing, middleware, DB, business logic) → HTTP response returned → connection kept alive. Mention RTT and why each step costs latency.
| 🎯 Q: What is the difference between HTTP and HTTPS? |
HTTP sends data in plain text — anyone on the network can read it. HTTPS adds TLS encryption — all data is encrypted end to end. HTTPS also verifies server identity via SSL certificates. In terms of the lifecycle, HTTPS adds one TLS handshake round trip after TCP.
| 🎯 Q: Why is the first request to a new server always slower? |
Because it pays the full cost: DNS lookup (if not cached) + TCP handshake (1 RTT) + TLS handshake (1 RTT for TLS 1.3). Subsequent requests on the same keep-alive connection skip all three — they just send the HTTP request and wait for the response.
| 🎯 Q: How would you reduce the latency of your API? |
Cover multiple layers: (1) Infrastructure — host close to users, use a CDN, use HTTP/2. (2) DNS — lower TTL, use fast DNS. (3) Server — optimise DB queries, add Redis caching, use async code, enable compression. (4) Connections — use keep-alive, connection pooling to DB. (5) Response — set correct Cache-Control headers so clients and CDNs cache responses.
Quick Reference — The HTTP Lifecycle in One Table
| Stage | What happens | Key concept |
| 1. URL Parsing | Break URL into scheme, host, port, path, query, fragment | Fragment never reaches server |
| 2. DNS Lookup | Resolve domain name to IP address | Cached results use TTL |
| 3. TCP Handshake | 3-way SYN / SYN-ACK / ACK | Costs 1 RTT, enables reliability |
| 4. TLS Handshake | Negotiate encryption, verify certificate | TLS 1.3 = 1 RTT overhead |
| 5. HTTP Request | Client sends method, path, headers, optional body | Headers define behaviour |
| 6. Server Processing | Middleware → auth → routing → logic → DB → response | This is YOUR code |
| 7. HTTP Response | Server sends status code, headers, body | Status codes must be correct |
| 8. Connection | Close or keep-alive for future requests | Keep-alive saves RTT |
Wrapping Up
The HTTP request lifecycle is not just theoretical knowledge — it directly shapes how you write backend code, design APIs, debug performance issues, and answer system design interview questions.
Every layer we covered maps to something you control or configure as a backend developer:
- DNS — you control TTL and where your servers are hosted
- TCP/TLS — you control which versions and ciphers your server supports
- HTTP Request — you design the API contract clients follow
- Server Processing — you write the middleware, auth, routing, and business logic
- HTTP Response — you choose the status codes, headers, and response format
- Connection — you configure keep-alive, pooling, and HTTP version
In the next post in this series, we go deep into HTTP Request and Response Structure — every header, every method, and what they mean for the APIs you build.
| 📬 This is Part 1 of the Web & HTTP Foundations series on Daily Dev Notes. Next: Part 2 — HTTP Request & Response Structure (Methods, Headers, Body). Subscribe to the newsletter so you don’t miss it. |
What are the stages of an HTTP request lifecycle?
The HTTP request lifecycle has 7 stages: (1) URL parsing and validation, (2) DNS lookup to resolve domain to IP, (3) TCP connection establishment via 3-way handshake, (4) TLS handshake for HTTPS, (5) HTTP request sent by client, (6) Server processes request and sends HTTP response, (7) Connection termination or keep-alive. Each stage adds latency, which is why backend developers must understand all of them.
What is the difference between HTTP and HTTPS in the request lifecycle?
In HTTP, after the TCP handshake the client immediately sends the request. In HTTPS, there is an additional TLS handshake step between TCP and the actual request. This TLS handshake negotiates encryption keys and verifies the server’s SSL certificate, adding 1-2 round trips of latency but making the connection secure and encrypted.
What is a TCP 3-way handshake and why does HTTP need it?
A TCP 3-way handshake is how a reliable connection is established between client and server before any data is sent. It involves three steps: SYN (client says ‘I want to connect’), SYN-ACK (server says ‘OK, I acknowledge’), and ACK (client confirms). HTTP needs TCP because TCP guarantees that packets arrive in order and without loss — critical for web data.
Have a question about any stage? Drop it in the comments — I reply to every one.