I have a Rails (3.2) application running with Nginx and Unicorn on a cloud platform. The "box" runs Ubuntu 12.04.
When the system load is around 70% or above, Nginx suddenly (and seemingly at random) starts throwing 502 Bad Gateway errors; at lower loads there is nothing of the sort. I have experimented with various numbers of cores (4, 6, 10; I can "change the hardware", since the app runs on a cloud platform), and the situation is always the same. (The CPU load is similar to the system load: roughly 55% user space, the rest system and steal, with plenty of free memory and no swapping.)
The 502s usually come in batches, but not always.
(I run one Unicorn worker per core, plus one or two Nginx workers. The relevant parts of the configs below are from the run on 10 cores.)
I don't really know how to track down the cause of these errors. I suspect it may have something to do with the Unicorn workers being unable to serve requests (in time?), but that looks odd because they do not seem to saturate the CPU, and I see no reason why they would be waiting on IO (though I don't know how to make sure of that either).
Can you please help me find out how to track down the cause?
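For reference, here is a minimal sketch of the generic checks that could confirm or rule out IO wait; nothing in it is specific to this setup except the error_log path taken from the Nginx config below:
# watch for IO wait while the 502s are happening:
# "wa" = % of CPU time spent waiting on IO, "r" = runnable processes
vmstat 1
# see what the 502s look like from Nginx's side
tail -f /var/log/nginx/nginx.error.log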
Unicorn config (unicorn.rb):
worker_processes 10
working_directory "/var/www/app/current"
listen "/var/www/app/current/tmp/sockets/unicorn.sock",:backlog => 64
listen 2007,:tcp_nopush => true
timeout 90
pid "/var/www/app/current/tmp/pids/unicorn.pid"
stderr_path "/var/www/app/shared/log/unicorn.stderr.log"
stdout_path "/var/www/app/shared/log/unicorn.stdout.log"
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
GC.copy_on_write_friendly = true
check_client_connection false
before_fork do |server, worker|
... I believe the stuff here is irrelevant ...
end
after_fork do |server, worker|
... I believe the stuff here is irrelevant ...
end
And the Nginx config:
worker_processes 2;
worker_rlimit_nofile 2048;
user www-data www-admin;
pid /var/run/nginx.pid;
error_log /var/log/nginx/nginx.error.log info;
events {
worker_connections 2048;
accept_mutex on; # "on" if Nginx worker_processes > 1
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
# optimization efforts
client_max_body_size 2m;
client_body_buffer_size 128k;
client_header_buffer_size 4k;
large_client_header_buffers 10 4k; # one for each core or one for each unicorn worker?
client_body_temp_path /tmp/nginx/client_body_temp;
include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/app.conf:
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 500;
gzip_disable "MSIE [1-6]\.";
gzip_types text/plain text/css text/javascript application/x-javascript;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/var/www/app/current/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name _;
client_max_body_size 1G;
keepalive_timeout 5;
root /var/www/app/current/public;
location ~ "^/assets/.*" {
...
}
# Prefer to serve static files directly from Nginx to avoid unnecessary
# data copies from the application server.
try_files $uri/index.html $uri.html $uri @app;
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 128k;
proxy_buffers 10 256k; # one per core or one per unicorn worker?
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_max_temp_file_size 512k;
proxy_temp_path /mnt/data/tmp/nginx/proxy_temp;
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
}
The core of the problem turned out to be that the socket backlog was too short. How long it should be depends on various considerations (do you want to detect a failed cluster member as quickly as possible, or keep the application pushed to its load limits?), but either way, listen :backlog needed tweaking.
I found that in my case listen ... :backlog => 2048 was sufficient. (I did not experiment much, although there is a handy way to do so if you like: have Nginx and Unicorn communicate over two sockets with different backlogs, the longer one as a backup, and then watch in the Nginx error log how often the shorter queue fails.) Please note that this is not the result of any scientific calculation, and YMMV.
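Concretely, that means changing the listen line in the unicorn.rb above. A sketch only: the socket path is the one already used there, and 2048 is merely the value that happened to be enough for me:
# unicorn.rb: raise the backlog on the existing Unix socket
listen "/var/www/app/current/tmp/sockets/unicorn.sock", :backlog => 2048
(The two-socket experiment mentioned above would presumably add a second listen line with a smaller :backlog, plus a matching server entry marked backup in the upstream app_server block on the Nginx side.)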
Note, however, that many OSes (most Linux distributions, Ubuntu 12.04 included) have much lower OS-level default limits on the socket backlog size (as low as 128).
You can change the OS limits as follows (as root):
sysctl -w net.core.somaxconn=2048
sysctl -w net.core.netdev_max_backlog=2048
Add these to /etc/sysctl.conf to make the change permanent. (/etc/sysctl.conf can be reloaded without a reboot, using sysctl -p.)
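For example (the same two values as above, in persistent form):
# /etc/sysctl.conf: persist the larger backlog limits
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 2048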
It has been mentioned that you might also have to increase the maximum number of files a process can open (with ulimit -n, and /etc/security/limits.conf to make it permanent). I had already done that for other reasons, so I cannot tell whether it made any difference here.
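If you do need that, the change would look roughly like this; the user name and the numbers are illustrative only, not values taken from my setup:
# /etc/security/limits.conf: raise per-process open-file limits
# (www-data is the Nginx user from the config above; your Unicorn
#  user may differ, and the numbers are placeholders)
www-data  soft  nofile  4096
www-data  hard  nofile  8192
# check the effective soft limit in a shell running as that user
ulimit -n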