Up to this point, we've assumed that HTTP messages are parsed and sent byte by byte, but we've still treated the message body as a single block of data. This works great for many use cases, but what happens if we want to send updates bit by bit? Well, remember our fundamental unit of HTTP, the HTTP-message:
HTTP-message = start-line CRLF
*( field-line CRLF )
CRLF
[ message-body ]
Turns out, [ message-body ] can be a bit deceiving... it's a rather flexible field that can hold a variable amount of data whose length is only known as it's sent, by making use of the Transfer-Encoding header rather than the Content-Length header. Here's the format:
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

<n>\r\n
<data of length n>\r\n
<n>\r\n
<data of length n>\r\n
<n>\r\n
<data of length n>\r\n
<n>\r\n
<data of length n>\r\n
... repeat ...
0\r\n
\r\n
Where <n> is just a hexadecimal number indicating the size of the chunk in bytes and <data of length n> is the actual data for that chunk. That pattern can be repeated as many times as necessary to send the entire message body. Here's a concrete example with plain text:
HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

1E
I could go for a cup of coffee
C
But not Java
12
Never go full Java
0
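If you want to sanity-check those chunk sizes, here's a tiny Go snippet (just an illustration, not part of the project) that prints the hexadecimal byte length of each line:

package main

import "fmt"

func main() {
	// Each chunk size is the byte length of the chunk's data, in hexadecimal.
	for _, chunk := range []string{
		"I could go for a cup of coffee", // 30 bytes -> 1E
		"But not Java",                   // 12 bytes -> C
		"Never go full Java",             // 18 bytes -> 12
	} {
		fmt.Printf("%X -> %q\n", len(chunk), chunk)
	}
}

Note that the sizes count bytes, not characters, which matters once you're sending non-ASCII data.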
Chunked encoding is most often used for:
- Streaming data to the client as it's generated, when the total size of the body isn't known ahead of time
- Forwarding (proxying) a response from another server as it arrives
We need our server to support chunked responses because occasionally our server acts as a proxy for another server. Any requests to the /httpbin endpoint will be proxied to the amazing httpbin.org service: a wonderful online tool for testing HTTP stuff. So, for example, this request:
GET localhost:42069/httpbin/stream/100
will trigger a handler on our server that sends a request to https://httpbin.org/stream/100 and then forwards the response back to the client chunk by chunk. https://httpbin.org/stream/100 streams 100 JSON responses back to our server, making it a great way for us to test our chunked response implementation.
func (w *Writer) WriteChunkedBody(p []byte) (int, error)
func (w *Writer) WriteChunkedBodyDone() (int, error)

Chunk sizes should be the sizes in bytes of the data, and should be in hexadecimal format.
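Here's one way those two methods could look. This is a minimal sketch that assumes your Writer has access to the underlying connection through an io.Writer field (I've called it writer here; your struct, package layout, and error handling may differ):

package response // stand-alone sketch; your package layout may differ

import (
	"fmt"
	"io"
)

// Writer is a stand-in for your own response writer; the only assumption is
// that it can write raw bytes to the connection via an io.Writer.
type Writer struct {
	writer io.Writer
}

// WriteChunkedBody writes a single chunk: the size in hex, a CRLF, the data,
// and another CRLF.
func (w *Writer) WriteChunkedBody(p []byte) (int, error) {
	total := 0
	n, err := fmt.Fprintf(w.writer, "%X\r\n", len(p))
	total += n
	if err != nil {
		return total, err
	}
	n, err = w.writer.Write(p)
	total += n
	if err != nil {
		return total, err
	}
	n, err = w.writer.Write([]byte("\r\n"))
	total += n
	return total, err
}

// WriteChunkedBodyDone writes the zero-length chunk and the final CRLF that
// terminate a chunked body (assuming no trailers).
func (w *Writer) WriteChunkedBodyDone() (int, error) {
	return w.writer.Write([]byte("0\r\n\r\n"))
}

The returned int is just the total number of bytes written, including the hex size line and the CRLFs.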
Some tips:
- I used the strings.HasPrefix and strings.TrimPrefix functions to handle routing and route parsing.
- Remove the Content-Length header from the response, and add the Transfer-Encoding: chunked header.
- I used http.Get to make the request to httpbin.org and resp.Body.Read to read the response body. I used a buffer size of 1024 bytes, and then printed n on every call to Read so that I could see how much data was being read. Use n as your chunk size and write that chunk data back to the client as soon as you get it from httpbin.org (there's a rough sketch of this loop after these notes). It's pretty cool to see how the data can be forwarded in real-time!
- Use netcat to test your chunked responses. Curl will abstract away the chunking for you, so you won't see your hex and CR and LF characters in your terminal if you use curl. I used this command to see my raw chunked response:

echo -e "GET /httpbin/stream/100 HTTP/1.1\r\nHost: localhost:42069\r\nConnection: close\r\n\r\n" | nc localhost 42069
I actually also dropped my buffer size while debugging so that I could have more deterministic (32 byte) chunks. But I then raised it once everything was working properly.
I also literally ran into a bug with curl -v (you heard me) while working on this. If you're on curl 8.6, it might tell you that you have "leftovers after chunking", but you don't. To be sure, use nc to see the raw response or just upgrade to curl 8.12.
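Putting those tips together, the core of the proxy handler might look roughly like this. It's only a sketch under assumptions: the ChunkedWriter interface, the proxyHTTPBin name, and the fact that the status line and headers (Content-Length removed, Transfer-Encoding: chunked added) have already been written are all illustrative choices, not something the lesson prescribes.

package proxy // stand-alone sketch; package name is arbitrary

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// ChunkedWriter is an assumed interface; your concrete *Writer already
// satisfies it if you implemented the two methods from the assignment.
type ChunkedWriter interface {
	WriteChunkedBody(p []byte) (int, error)
	WriteChunkedBodyDone() (int, error)
}

// proxyHTTPBin (hypothetical name) forwards a request target like
// "/httpbin/stream/100" to https://httpbin.org/stream/100 and streams the
// body back to the client chunk by chunk. It assumes the status line and
// headers have already been written.
func proxyHTTPBin(w ChunkedWriter, target string) error {
	resp, err := http.Get("https://httpbin.org" + strings.TrimPrefix(target, "/httpbin"))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	buf := make([]byte, 1024) // 1024-byte reads, as in the tips above
	for {
		n, readErr := resp.Body.Read(buf)
		fmt.Println("read", n, "bytes") // watch how much each Read returns
		if n > 0 {
			// Forward each piece as its own chunk as soon as it arrives.
			if _, err := w.WriteChunkedBody(buf[:n]); err != nil {
				return err
			}
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			return readErr
		}
	}
	_, err = w.WriteChunkedBodyDone()
	return err
}

In your real handler you'd want to turn an http.Get failure into a proper 5xx response rather than just returning an error, but that part depends on how your handlers report errors.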
Run and submit the CLI tests with your server running.