Just like the “Straight insufficient bytes” error, this is an error you can run into in your varnishlog output. It looks like this.
   11 VCL_return   c hash
   11 VCL_call     c pass pass
   11 FetchError   c http first read error: -1 11 (Resource temporarily unavailable)
   11 VCL_call     c error deliver
   11 VCL_call     c deliver deliver
   11 TxProtocol   c HTTP/1.1
   11 TxStatus     c 503
   11 TxResponse   c Service Unavailable
   11 TxHeader     c Content-Type: text/html; charset=utf-8
   11 TxHeader     c Content-Length: 686
   11 TxHeader     c Accept-Ranges: bytes
   11 TxHeader     c Age: 15
   11 TxHeader     c Connection: close
   11 TxHeader     c x-Cache: uncached
   11 TxHeader     c x-Hits: 0
What’s happening
The error “http first read error” can indicate that your Varnish config is not waiting long enough for the backend to respond. A default backend definition in Varnish will wait 5 seconds for a “first byte” from the upstream. If the backend doesn’t respond within that time, Varnish closes the connection and returns an HTTP 503 error.
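To make those knobs explicit: the timeouts involved live on the backend definition. The sketch below is illustrative only; it uses the 5-second first-byte behaviour described above together with the connect and between-bytes values used later in this article, so check the actual defaults of your Varnish version and packaged configuration.
backend default {
    .host = "127.0.0.1";
    .port = "80";
    # How long Varnish waits for the TCP connection to the backend.
    .connect_timeout = 3s;
    # How long Varnish waits for the first byte of the response; once this
    # expires, it gives up and serves a 503 like the one in the log above.
    .first_byte_timeout = 5s;
    # How long Varnish waits between two consecutive bytes of the response.
    .between_bytes_timeout = 10s;
}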
The quick fix (but not the best)
Modify your backend definition to look like this, so Varnish gives the backend more time to respond. It’s especially the .first_byte_timeout option that makes the difference.
backend my_backend {
    .host = "127.0.0.1";
    .port = "80";
    # How long to wait before we receive a first byte from our backend?
    .first_byte_timeout = 300s;
    # How long to wait for a backend connection?
    .connect_timeout = 3s;
    # How long to wait between bytes received from our backend?
    .between_bytes_timeout = 10s;
}
With the example above, Varnish will wait 300 seconds (5 minutes) for the backend to respond with its first byte.
The error mostly pops up when a POST is made to the server and the request takes too long to process, so the backend needs more time to send its reply. Increasing the upstream timeouts can help, but you may not want to do this for all requests.
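A side note on why POSTs are the usual culprit: they can’t be served from cache, so every POST goes straight to the backend (that’s the pass you see in the log above). A minimal sketch of that common pattern, assuming Varnish 3 VCL syntax like the rest of this article:
sub vcl_recv {
    # POST requests cannot be served from cache; always pass them to the backend.
    if (req.request == "POST") {
        return (pass);
    }
}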
Setting a longer timeout for certain pages
If you don’t want longer timeouts for all pages, it’s a good idea to configure two backends: one for “normal” situations and one for requests that you know take longer. For instance, a config like the one below might help: you define a second backend with longer timeouts, and in your Varnish VCL you add the logic for switching between these backends.
backend my_backend_normal {
    .host = "127.0.0.1";
    .port = "80";
}
backend my_backend_longwait {
    .host = "127.0.0.1";
    .port = "80";
    # How long to wait before we receive a first byte from our backend?
    .first_byte_timeout = 300s;
    # How long to wait for a backend connection?
    .connect_timeout = 3s;
    # How long to wait between bytes received from our backend?
    .between_bytes_timeout = 10s;
}
sub vcl_recv {
    # Set the default backend server
    set req.backend = my_backend_normal;

    # Use the longer-waiting backend for POST requests
    if (req.request == "POST") {
        set req.backend = my_backend_longwait;
    }

    # Or use the longer-waiting backend for specific URLs
    if (req.url ~ "^/some/very/long/page$") {
        set req.backend = my_backend_longwait;
    }
    ...
}
This gives you the benefit of short timeouts for “normal” pages, and selectively longer timeouts for pages that you know will take longer to respond.
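If you want to confirm that the right backend is being picked, a small debugging aid can help. The sketch below is illustrative (the X-Backend-Choice header name is made up): it tags each request in vcl_recv and copies the tag onto the response in vcl_deliver, so you can inspect it with curl or varnishlog. Remove it again once you’re done testing.
sub vcl_recv {
    set req.backend = my_backend_normal;
    set req.http.X-Backend-Choice = "normal";

    if (req.request == "POST" || req.url ~ "^/some/very/long/page$") {
        set req.backend = my_backend_longwait;
        set req.http.X-Backend-Choice = "longwait";
    }
}

sub vcl_deliver {
    # Expose the chosen backend in the response while debugging.
    set resp.http.X-Backend-Choice = req.http.X-Backend-Choice;
}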