I'm running an nginx server with Laravel (a medium-sized, mostly static site) and I'm load testing it with, for example, 500 simultaneous users at a constant load for 1 minute (concurrent connections, not users spread out over that minute).
And I'm getting this error:
unix:/var/run/php/php7.1-fpm.sock failed - Resource temporarily unavailable
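In other words, the load profile is roughly the equivalent of this (wrk is only an illustration of the profile, not necessarily the tool I'm using, and the URL is a placeholder):

wrk -t4 -c500 -d60s http://my-laravel-site.example/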
nginx.conf
worker_processes auto;
events {
    use epoll;
    worker_connections 1524; # in my case it should be 1024, but well..
    multi_accept on;
}
http {
    # with this I reduce disk usage a lot
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    large_client_header_buffers 2 1k;
    reset_timedout_connection on;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
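For context, requests are handed to PHP-FPM through a standard fastcgi_pass location block pointing at the socket from the error message (simplified sketch; the real block also has the usual root/try_files directives, which I've left out):

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}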
www.conf
pm.max_children = 500
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 64
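The listen settings in the pool are untouched, so apart from the pm.* values above it's basically the defaults (the backlog value below is just the documented default, not something I set; from what I've read, this "Resource temporarily unavailable" error can show up when that backlog fills up):

listen = /var/run/php/php7.1-fpm.sock
; left at the package default, never set explicitly
;listen.backlog = 511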
Results with Google Compute Engine:
f1-micro (1 vCPU, 0.6 GB) - sustains 40-60 requests per second
g1-small (1 vCPU, 1.7 GB) - sustains 80 requests per second
n1-standard-1 (1 vCPU, 3.75 GB) - sustains 130 requests per second
n1-standard-2 (2 vCPU, 7.5 GB) - sustains 250 requests per second
...
n1-standard-16 (16 vCPU, 60 GB) - sustains 840 requests per second
The last one is the first machine type that passes the test; all the others start throwing Bad Gateway errors somewhere between 200 and 400 concurrent users.
If instead I test with, say, 2,000 users distributed over 30 seconds (roughly 65-70 requests per second), even the micro instance handles it fine; the problem only appears when the users send requests simultaneously.
From 2 cores upwards, CPU usage looks perfectly fine during the test, and so do disk operations etc.
So after a loooot of tests I have some questions:
1) Is this normal? It doesn't seem normal to me to need 16 cores to run a simple site.. or is the stress test just too heavy and this is expected?
2) If it's not normal, am I missing something? Is Google limiting requests per second somehow?
3) What would be normal parameters for the given config files?
Any other help is more than welcome.