Re: [jetty-users] Fail-fast request handling best practices

Thanks all for the feedback, ideas and suggestions.

I tested two of the suggested changes separately: reading the request completely before replying, and adding a "Connection: close" header (which I was not sending). Both options seem to eliminate the 502 responses, so I've decided to add the header and otherwise keep the fail-fast behavior as is (without reading the request completely).
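For reference, with the header added the rejection response looks roughly like this on the wire (status line and headers only; the exact header set depends on the Jetty configuration):

```http
HTTP/1.1 400 Bad Request
Connection: close
Content-Length: 0
```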

I also briefly looked into replacing nginx with Apache to compare behavior in that scenario. It should theoretically be possible, but the corresponding configuration didn't seem to be available in the AWS console, and I didn't dig in far enough to figure out why.

Also, I wouldn't control the client in this scenario, so the "Expect: 100-continue" header isn't really an option.

I did manage to stay away from SO_LINGER :-)

Thanks again!

Daniel


On Wed, Mar 31, 2021 at 3:38 PM Joakim Erdfelt <joakim@xxxxxxxxxxx> wrote:
If you are going to reject the request in your own code with a 400, make sure you set the `Connection: close` header on the response, as this will trigger Jetty to close the connection on its side.

This is something the HTTP spec allows for (the server is allowed to close the connection at any point), and nginx should see the header and stop sending more data to Jetty (this is actually spelled out in the spec).
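The idea can be sketched with the JDK's built-in `com.sun.net.httpserver` as a stand-in for the Jetty service (the `/upload` path, class name, and 1 MB limit are made-up illustration values, not from the thread):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

// Stand-in for the Jetty service: reject oversized requests early and ask
// for the connection to be closed, instead of reading the whole body.
public class FailFastServer {
    static final long MAX_BODY_BYTES = 1024 * 1024; // 1 MB cap

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/upload", exchange -> {
            String cl = exchange.getRequestHeaders().getFirst("Content-Length");
            long declared = -1;
            try {
                if (cl != null) declared = Long.parseLong(cl.trim());
            } catch (NumberFormatException ignored) {
            }
            if (declared > MAX_BODY_BYTES) {
                // The key line: signal that this connection is done, so the
                // peer (or an intermediate proxy) stops sending the body.
                exchange.getResponseHeaders().set("Connection", "close");
                exchange.sendResponseHeaders(400, -1); // -1 = no response body
            } else {
                exchange.sendResponseHeaders(200, -1);
            }
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

In a real Jetty servlet the equivalent would be a `setHeader("Connection", "close")` on the response before sending the 400; the sketch above just shows the shape of the fail-fast path.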

Joakim Erdfelt / joakim@xxxxxxxxxxx


On Tue, Mar 30, 2021 at 11:45 PM Daniel Gredler <djgredler@xxxxxxxxx> wrote:
Hi,

I'm playing around with a Jetty-based API service deployed to AWS Elastic Beanstalk in a Docker container. The setup is basically: EC2 load balancer -> nginx reverse proxy -> Docker container running the Jetty service.

One of the API endpoints accepts large POST requests. As a safeguard, I wanted to add a maximum request size (e.g. any request body larger than 1 MB is rejected). I thought I'd be clever and check the Content-Length header, if present. If the header indicates that the body is too large, I'd reject the request immediately (HTTP 400 error), without even wasting time reading the request body. I can imagine similar fail-fast checks on the security side, using the Authorization HTTP request header.
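The check described above can be sketched as a small helper, keeping the decision logic separate from any servlet code (class and constant names here are hypothetical, not from the actual service):

```java
import java.util.OptionalInt;

// Fail-fast request-size check based solely on the Content-Length header.
public class RequestSizeCheck {
    static final long MAX_BODY_BYTES = 1024 * 1024; // the 1 MB limit from the post

    /**
     * Returns the HTTP status to reject with, or empty if the request may proceed.
     * A missing Content-Length header cannot be checked this way; the body
     * would need a streaming limit instead.
     */
    static OptionalInt check(String contentLengthHeader) {
        if (contentLengthHeader == null) {
            return OptionalInt.empty();
        }
        try {
            long declared = Long.parseLong(contentLengthHeader.trim());
            return declared > MAX_BODY_BYTES ? OptionalInt.of(400) : OptionalInt.empty();
        } catch (NumberFormatException e) {
            return OptionalInt.of(400); // malformed Content-Length
        }
    }
}
```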

This Content-Length check works correctly most of the time, but occasionally nginx reports "writev() failed (32: Broken pipe) while sending request to upstream" and returns an HTTP 502 error to the load balancer, which duly informs the client that there was an HTTP 502 Bad Gateway error somewhere along the line.

It appears that in these instances Jetty closes the connection after sending back the HTTP 400 error; nginx doesn't notice, keeps trying to send the request body to Jetty, only then sees that the connection is closed, and reports a less-than-friendly HTTP 502 error to the client.

So I'm wondering... is this fail-fast Content-Length header check too clever? Is it best practice to actually always read the full request body, and only fail once the body has been fully read, even if we have enough information to reject the request much earlier? Or would most people just accept the occasional 502 error? I've seen some mentions of SO_LINGER / setSoLingerTime and setAcceptQueueSize as possible workarounds, but SO_LINGER especially always seems to be surrounded with "here be dragons" warnings...

What's the best practice here? Should I just accept that I need to read these useless bytes?

Take care,

Daniel

_______________________________________________
jetty-users mailing list
jetty-users@xxxxxxxxxxx
To unsubscribe from this list, visit https://www.eclipse.org/mailman/listinfo/jetty-users