
We have recently implemented an nginx-based reverse proxy.

While debugging our access logs, we are seeing quite a few status code 400 results.

They look something like this:

[07/Sep/2011:05:49:04 -0700] - "400" 0 "-" "-" "-"

We have enabled debug error logging, and they usually correspond to something like this:

2011/09/07 05:09:28 [info] 5937#0: *30904 client closed prematurely connection while reading client request line
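For reference, enabling this level of logging is just a matter of raising the error_log level (the path here is an example, not necessarily ours):

    # nginx.conf - 'info' works on stock builds; 'debug' additionally
    # requires nginx to have been compiled with --with-debug
    error_log /var/log/nginx/error.log info;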

We have tried raising a number of the buffers, as suggested by a few pages we found through Google:

http://www.ruby-forum.com/topic/173362

or

http://blog.craz8.com/articles/2009/06/17/nginx-400-bad-request-errors-due-to-cookies-and-what-to-do-about-them

To no avail.
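For the record, the buffer changes we tried looked roughly like this (the values are examples, not our exact production settings):

    # nginx.conf, http context - allow oversized request lines and headers
    client_header_buffer_size   16k;
    large_client_header_buffers 4 32k;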

Why is this happening?

This is a standard nginx reverse proxy -> Apache backend setup.

Worth mentioning: the variety of unique content on our site is fairly minimal. We have tested with many browsers and have never personally received any of these 400 results.

Thanks!


Further URLs detailing similar entries in their logs:

http://blog.rayfoo.info/2009/10/weird-web-server-access-log-entries

  • Is the first log your Apache's log and the second the nginx's? Commented Sep 10, 2011 at 18:22
  • Negative. The first is the nginx access log, the second is the nginx error log set to debug. Commented Sep 10, 2011 at 19:57
  • Are you behind an EC2 Elastic Load Balancer? Part of their healthcheck causes these to be recorded frequently in the logs. Commented Jul 31, 2013 at 22:53

3 Answers


I found this was caused by Chrome, which apparently opens extra connections occasionally without sending any data over them.

Here's some more info: http://www.ruby-forum.com/topic/2953545

Now the question is what to do about them - the answer provided there wasn't very satisfying.
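If the goal is simply to keep these empty-request 400s out of the access log, newer nginx releases (1.7.0 and later) support a conditional access_log. A minimal sketch, assuming these entries always arrive with an empty $request:

    # http context: skip logging connections where no request line arrived
    map $request $loggable {
        default 1;
        ""      0;
    }

    server {
        access_log /var/log/nginx/access.log combined if=$loggable;
    }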


Are you handling SSL connections? Can you add $ssl_cipher $ssl_protocol to your access log format?
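For example, a log_format along these lines (the format name and log path are placeholders) would show whether the failing connections even completed a TLS handshake:

    # http context
    log_format ssl_debug '$remote_addr [$time_local] "$request" $status '
                         '"$ssl_protocol" "$ssl_cipher"';
    access_log /var/log/nginx/access.log ssl_debug;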



First, it's quite possible that your clients are sending requests with really large HTTP headers or URLs. Maybe an older version of your application set some (probably large) cookies which are unused now, and some clients are still trying to send them back.

I'd set the header buffers to a really large value and, on the application side, log the size of the headers/requests, plus the complete request whenever it is bigger than usual. Or take nginx out of the chain entirely and log the headers/requests under the same conditions. If you can, bypass nginx only for those IPs/subnets the 400 errors come from. I suppose nginx can log the source IP for these 400 errors.
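As an alternative to application-side logging, newer nginx versions (1.2.7+) expose $request_length (request line plus headers plus body), which can be added to the access log to spot oversized requests. A sketch, with placeholder names:

    # http context - record the total size of each request
    log_format reqsize '$remote_addr [$time_local] "$request" $status '
                       'len=$request_length';
    access_log /var/log/nginx/access.log reqsize;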

Comments

  • Tried setting the max header size to 2048k; the same issue persisted. Since this is happening fairly frequently, I'll try a few tcpdump runs (see the sketch below)... even though this is a very active production server, a few 2-3 minute runs shouldn't overload the box.
  • Uh, 2097152 bytes should be enough for any HTTP request. tcpdump is a good idea. Let me know if you find something.
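For the tcpdump runs mentioned above, a short full-packet capture on the HTTP port (the interface name and output path are assumptions) can be inspected later in Wireshark:

    # capture full packets on port 80 for a few minutes, then Ctrl-C
    tcpdump -i eth0 -s 0 -w /tmp/http-400.pcap port 80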
