239

I am using Nginx as a reverse proxy that takes requests and then does a proxy_pass to fetch the actual web application from the upstream server running on port 8001.

If I go to mywebsite.example or do a wget, I get a 504 Gateway Timeout after 60 seconds... However, if I load mywebsite.example:8001, the application loads as expected!

So something is preventing Nginx from communicating with the upstream server.

All this started after my hosting company reset the machine my stuff was running on; prior to that, there were no issues whatsoever.

Here's my vhost's server block:

server {
    listen   80;
    server_name mywebsite.example;

    root /home/user/public_html/mywebsite.example/public;

    access_log /home/user/public_html/mywebsite.example/log/access.log upstreamlog;
    error_log /home/user/public_html/mywebsite.example/log/error.log;

    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

And the output from my Nginx error log:

2014/06/27 13:10:58 [error] 31406#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxx.xx.xxx.xxx, server: mywebsite.example, request: "GET / HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:8001/", host: "mywebsite.example"
3 Comments
  • Is the server running SELinux? Commented Feb 12, 2017 at 15:50
  • IN MY CASE, NAT gateway was the issue, not the NGINX or the backend API. stackoverflow.com/a/62351959/9956279 Commented Jun 13, 2020 at 18:42
  • upstreamlog ??? Commented Dec 17, 2024 at 18:30

11 Answers

248

You can probably add a few more lines to increase the timeout period for the upstream. The example below sets the timeouts to 300 seconds:

proxy_connect_timeout       300;
proxy_send_timeout          300;
proxy_read_timeout          300;
send_timeout                300;
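
These directives can go at the http level to apply globally, or inside the server/location block that does the proxying. A minimal sketch using the server block from the question (the upstream address is the placeholder from the question):

server {
    listen 80;
    server_name mywebsite.example;

    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:8001;

        proxy_connect_timeout 300;  # time allowed to establish the upstream connection
        proxy_send_timeout    300;  # max time between two writes of the request to the upstream
        proxy_read_timeout    300;  # max time between two reads of the upstream response
        send_timeout          300;  # max time between two writes of the response to the client
    }
}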

11 Comments

I think that increasing the timeout is seldom the answer unless you know your network/service will always or in some cases respond very slowly. Few web requests nowadays should take more than a few seconds unless you are downloading content (files/images)
@Almund I thought the same thing (almost didn't bother trying this), but for whatever reason this just worked for me. (Previously timed out after 60 sec, now get response immediately).
Did not solve the problem for me using it with a nodejs server
I find that I only need the proxy_read_timeout when debugging on the backend. thanks!
Where specifically should we add these lines?
168

Increasing the timeout is unlikely to solve your issue since, as you say, the actual target web server is responding just fine.

I had this same issue and found it had to do with not using keep-alive on the connection. I can't actually say why this is, but by clearing the Connection header I solved the issue and the request was proxied just fine:

server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
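
For reference (not part of the original answer), a minimal sketch of the related keepalive pattern, assuming an illustrative upstream named backend on the same local port 5000:

upstream backend {
    server localhost:5000;
    keepalive 16;                        # keep up to 16 idle upstream connections per worker
}

server {
    location / {
        proxy_http_version 1.1;          # keepalive to the upstream requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "Connection: close" header
        proxy_set_header Host      $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend;
    }
}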

Have a look at this post, which explains it in more detail:

5 Comments

MONTHS of problems solved by a single line: proxy_set_header Connection "";. lol, don't use runcloud
We had a proxy that was timing out if the source took more than 5 seconds to respond. This did the trick. Thank you!
Thank you for this. This is the official explanation for why HTTP 1.1 is necessary. By default NGINX uses HTTP/1.0 for connections to upstream servers and accordingly adds the Connection: close header to the requests that it forwards to the servers. The result is that each connection gets closed when the request completes, despite the presence of the keepalive directive in the upstream{} block. Source: nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/…
I also had the issue that, when using nginx as a reverse proxy, random requests would end in 504 or 502 (the same nginx.conf worked on staging while it was buggy on prod). proxy_set_header Connection ""; seemed to fix the issue, but I now realize that an HTTP request with responseType: text consistently fails (pending for 5 min into a 504, although it should be done in a few millis). I literally only changed that Connection: "" thingy. Anyone have any idea what is going on there?
Does nginx close the upstream connection after each request?
38

user2540984, as well as many others, has pointed out that you can try increasing your timeout settings. I faced a similar issue myself and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This, however, did not help me a single bit; there was no apparent change in NGINX's timeout settings. After many hours of searching, I finally managed to solve my issue.

The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;

This might not be the solution to your particular problem, but if anyone else notices that the timeout changes in /etc/nginx/nginx.conf don't do anything, I hope this answer helps!
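
As a side note (an assumption about a typical layout, not part of the original answer): on many distributions the stock nginx.conf pulls conf.d files in from inside the http block, which is why a drop-in file there takes effect for every server block:

http {
    # ... other global settings ...

    # Any *.conf dropped into conf.d (such as timeout.conf) is read here,
    # so its directives apply at the http level to all vhosts.
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}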

5 Comments

Hi, there is no timeout.conf in my conf.d directory. You said to create it, and I want to confirm: do I just add the above settings to timeout.conf?
Yes, just add them. You can modify them for your own needs, but these worked for me!
Unfortunately, in Laravel Homestead with Ubuntu and Nginx, this does not work. :( Do you mean just to add those lines, without server{} or anything else? The error comes out right after 5 minutes. I reload, reboot, and it never makes it past those 5 minutes or 300 seconds. Are there any more ideas to fix it?
You have not mentioned where this timeout.conf file is included in your main nginx.conf configuration file. In the end, Nginx has only one main configuration file, which includes all other .conf files. I think it worked on your end because you increased the timeout to 600.
I've voted this down because the config doesn't need to go in any particular file, particularly not one called "timeout.conf". It can go anywhere that the rules will cause it to be applied. I think most people with problems with these configs are not familiar with how nginx rules work and therefore putting them essentially at a "top level" will make them work. But they don't have to go there, and in many cases, you only want increased timeouts to apply to a particular directory. If these settings don't work, it's because the rule is not being applied to the endpoint you're testing.
32

If you want to increase or add a time limit for all sites, you can add the lines below to the nginx.conf file.

Add the lines below to the http section of /usr/local/etc/nginx/nginx.conf or /etc/nginx/nginx.conf:

fastcgi_read_timeout 600;
proxy_read_timeout 600;

If the above lines don't already exist in the conf file, add them; otherwise increase fastcgi_read_timeout and proxy_read_timeout to make sure that nginx and php-fpm do not time out.

To increase the time limit for only one site, edit /etc/nginx/sites-available/example.com (e.g. with vim):

location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}

After adding these lines, don't forget to reload php-fpm and nginx:

service php7-fpm reload 
service nginx reload

or, if you're using valet then simply type valet restart.

3 Comments

Thanks, works for me: fastcgi_read_timeout 600; proxy_read_timeout 600;
Are you sure that fastcgi_read_timeout is what causes the 504 Gateway Timeout response?
Just adding fastcgi_read_timeout 600; worked for me! Thanks!!
25

You can also face this situation if your upstream server uses a domain name and its IP address changes (e.g. your upstream points to an AWS Elastic Load Balancer).

The problem is that nginx will resolve the IP address once, and keep it cached for subsequent requests until the configuration is reloaded.

You can tell nginx to use a name server to re-resolve the domain once the cached entry expires:

location /mylocation {
    # use google dns to resolve host after IP cached expires
    resolver 8.8.8.8;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}

The docs on proxy_pass explain why this trick works:

Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver.

Kudos to "Nginx with dynamic upstreams" (tenzer.dk) for the detailed explanation, which also contains some relevant information on a caveat of this approach regarding forwarded URIs.

3 Comments

This answer is gold, exactly what happened to me. The upstream points to an AWS ELB and all of a sudden: Gateway Timeout.
Great answer! Managed to solve it.
I had the same issue with AWS upstream endpoints. Using an external resolver fixed it. I was able to trace the upstream failures by logging the upstream IP in access.log.
7

nginx:

proxy_read_timeout          300;

In my case with AWS, I also edited the load balancer settings: Attributes => Idle timeout.

2 Comments

Thanks, AWS load balancer attribute was missing for me.
I forgot to change this attribute on my AWS ELB; much time was wasted on the nginx conf alone.
6

Adding the following values to /etc/nginx/nginx.conf fixed the issue for me.

proxy_connect_timeout 600;
proxy_send_timeout   600;
proxy_read_timeout   600;
send_timeout         600;


3

Had the same problem. It turned out to be caused by iptables connection tracking on the upstream server. After removing --state NEW,ESTABLISHED,RELATED from the firewall script and flushing the tracking table with conntrack -F, the problem was gone.


3

If you're using a cloud provider and experiencing issues with NGINX, NGINX itself may not be the root cause.

Check the value of the minimum ports per VM instance setting on the NAT Gateway that sits between your NGINX instance(s) and the proxy_pass destination. If the value is too small for the number of concurrent requests, increase it to resolve the problem.

For example, on Google Cloud, this can happen when a reverse-proxy NGINX is placed inside a subnet behind a NAT Gateway and requests are proxied through that NAT Gateway to an API URL associated with the backend (upstream).

Refer to GCP's documentation on how NAT Gateway relates to the NGINX 504 timeout.


1

In my case, I restarted php-fpm and it was OK again.

1 Comment

Tried all the above steps from all the answers, but finally this works; sometimes we just miss the smallest thing.
0

If nginx_ajp_module is used, try adding ajp_read_timeout 10m; to the nginx.conf file.
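
A minimal sketch of where that directive could sit, assuming the third-party nginx_ajp_module and an illustrative upstream named tomcats:

upstream tomcats {
    server 127.0.0.1:8009;     # default AJP connector port on the Tomcat side
}

server {
    location / {
        ajp_read_timeout 10m;  # allow slow AJP responses for up to 10 minutes
        ajp_pass tomcats;
    }
}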

