
What are the optimal settings for Nginx to handle LOTS of requests at the same time?

My server is configured with Nginx and PHP 7.3 on Ubuntu 20.04 LTS. The application that is running is built with Laravel 7.

This is my current config:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    fastcgi_index index.php;
    fastcgi_buffer_size 14096k;
    fastcgi_buffers 512 14096k;
    fastcgi_busy_buffers_size 14096k;
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
}

The fastcgi parameters I placed there I found via Google and tweaked the numbers to some fairly high values.

Application does the following:

  • 1500+ users are online
  • they get a multiple-choice-question pushed directly via Pusher
  • they answer the question all together almost at once = 1 request to the server via Ajax for each answer
  • every time an answer is given, the results are fetched from the server for each user

All four steps are completed within a couple of seconds.

The server is not peaking in CPU or memory while this happens; the only thing that happens is that some users get a 502 timeout.

It looks like a server configuration issue in Nginx.

These are the server stats from the moment it happened:

  • System: 25%, CPU: 22%, Disk IO: 0% - available 8 processor cores
  • RAM: 1.79GB - available 3GB

A side note: I disabled the VerifyCsrfToken middleware in Laravel for the routes being called, to avoid extra server load.

What am I missing? Do I have to change some PHP-FPM settings as well? If so, which ones, and where can I do that?

This is what the Nginx error log for the domain tells me:

2020/04/25 13:58:14 [error] 7210#7210: *21537 connect() to unix:/var/run/php/php7.3-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 54.221.15.18, server: website.url, request: "GET /loader HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.3-fpm.sock:", host: "website.url"

Settings of www.conf:

pm.max_children = 100
pm.start_servers = 25
pm.min_spare_servers = 25
pm.max_spare_servers = 50
pm.max_requests = 9000
;pm.process_idle_timeout = 10s;
;pm.status_path = /status
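
Since pm.status_path is already present (commented out) in this www.conf, one way to confirm whether the pool is actually running out of children or queueing connections on the socket is to enable it and expose it through a small nginx location. A minimal sketch, assuming the same php7.3-fpm socket from the question and an arbitrary /fpm-status path:

; /etc/php/7.3/fpm/pool.d/www.conf
pm.status_path = /fpm-status

# nginx server block: expose the PHP-FPM status page, restricted to localhost
location = /fpm-status {
    access_log off;
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;   # sets SCRIPT_NAME, which FPM matches against pm.status_path
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
}

During a test run, curl -H "Host: website.url" http://127.0.0.1/fpm-status from the server itself; the "listen queue", "max listen queue", "active processes" and "max children reached" fields show directly whether the 502s coincide with an exhausted pool or a full socket backlog.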
  • "What am I missing?" To test how much memory is in use for exactly single request. And to divide whole server available RAM with that number. That way you will get exact theoretic amount of requests at once. Commented Apr 24, 2020 at 11:30
  • Added the information I found out about this. Commented Apr 24, 2020 at 11:37
  • The thing is that each request-response cycle in Laravel consumes a slice of the available RAM. Your machine has 3 GB, but you need to account for the RAM used by other parts of the system: the OS itself and the other processes and applications the server needs to run. So let's assume you have 2 GB of RAM available for your Laravel application; maybe it is more, but let's take 2 GB for the calculation. Even an empty Laravel application takes ~10 MB of RAM for each req/res cycle. That would be 200 parallel connections/processes. Commented Apr 24, 2020 at 13:24
  • Let's say one cycle finishes in 400 ms; that means 2.5 requests per second per process, so 200 × 2.5 = 500 requests per second available (theoretically, if all is OK). But if at some peak time of the day you get more than 500 requests, or they do not arrive staggered in this ideal way (at most 200 requests starting at the same moment), that is why it breaks. The RAM on the machine is not infinite. Commented Apr 24, 2020 at 13:29
  • And when out of RAM, a 502 timeout from Nginx occurs? So upgrading RAM solves it? @Tpojka Commented Apr 25, 2020 at 6:23
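
To put the first comment's suggestion into numbers, a rough way to measure the average memory per PHP-FPM worker and the memory still available to them (the process name on Ubuntu's 7.3 packages should be php-fpm7.3; adjust it if ps finds nothing):

# average resident memory per PHP-FPM worker, in MB
ps --no-headers -o rss -C php-fpm7.3 \
  | awk '{sum += $1; n++} END { if (n) printf "workers: %d  avg RSS: %.1f MB\n", n, sum / n / 1024 }'

# memory the kernel considers available for new processes
grep MemAvailable /proc/meminfo

Dividing MemAvailable by the average RSS gives the ballpark number of workers (and therefore simultaneous requests) the machine can sustain; RSS counts shared memory such as opcache once per process, so the real ceiling is somewhat higher than this estimate.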

2 Answers


(11: Resource temporarily unavailable)

That's EAGAIN/EWOULDBLOCK, which means nginx accepted the client connection but could not connect to PHP-FPM's UNIX socket without blocking (waiting). Most likely (without digging into nginx's source code) nginx retried the connection to that UNIX socket a few times, failed, and gave up, returning the error to the client as the 502 you are seeing.

There are a few ways to solve this; either:

  1. increase the listen.backlog value in your PHP-FPM pool config, together with the corresponding net.ipv4.tcp_max_syn_backlog, net.ipv6.tcp_max_syn_backlog, and net.core.netdev_max_backlog values in sysctl, or
  2. create multiple PHP-FPM pools and use an upstream block in the nginx config to spread requests across those pools (a configuration sketch of both options follows below).
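
A rough sketch of both options, assuming the stock Ubuntu paths from the question; the backlog size and the extra pool socket names are placeholders that would need tuning:

; option 1 -- /etc/php/7.3/fpm/pool.d/www.conf: a larger socket backlog for request bursts
listen.backlog = 4096

# option 1 -- sysctl: the kernel caps every listen() backlog at net.core.somaxconn,
# so it usually has to be raised together with listen.backlog
net.core.somaxconn = 4096

# option 2 -- nginx http {} block: several FPM pools behind one upstream
# (each pool is a copy of www.conf with its own listen = path and its own pm.* limits)
upstream php_fpm_pools {
    server unix:/var/run/php/php7.3-fpm-web1.sock;
    server unix:/var/run/php/php7.3-fpm-web2.sock;
}

# then, in the location ~ \.php$ block:
#     fastcgi_pass php_fpm_pools;

Either change needs a reload of php7.3-fpm and nginx to take effect.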



Edit /etc/security/limits.conf, enter:

# vi /etc/security/limits.conf

Set the soft and hard limits for all users, or just for the nginx user, as follows:

nginx       soft    nofile   10000
nginx       hard    nofile   30000
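
A quick way to check whether these limits actually reach the running workers is to inspect the nginx master process; keep in mind that on a systemd-managed service limits.conf may not apply, in which case nginx's own worker_rlimit_nofile directive in nginx.conf is the usual alternative:

# open-file limit the running nginx master actually has
cat /proc/$(pgrep -o -x nginx)/limits | grep 'open files'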

1 Comment

Not helping. Still the same error in the Nginx error log: 2020/05/03 20:18:08 [error] 3729#3729: *110606 connect() to unix:/var/run/php/php7.3-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 300.24.116.18, server: website.url, request: "GET /loader HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.3-fpm.sock:", host: "website.url"
