I have some Linux servers running Apache Traffic Server, and I've found that about 1 in 20 requests takes hundreds of milliseconds, which is much longer than expected (it should be 1-2 ms).
After tracing with systemtap, I found that the time is spent in the kernel's copy_to_user() call (accept -> inet_csk_accept -> move_addr_to_user -> copy_to_user).
The server load is quite low (< 100 qps), and copy_to_user() only copies 16 bytes of data (a struct sockaddr), yet it takes hundreds of milliseconds.
Since I'm not familiar with systemtap and kernel tracing techniques, I can't investigate the cause any further. I have checked CPU usage and swap usage.
Thanks for any suggestions.
Hardware:
> CPU: Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
> Memory: 64 GB
> Disk: 11 * 6 TB HDDs
Software:
> CentOS 6, 2.6.32-696.el6.x86_64 #1 SMP Tue Mar 21 19:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> Apache Traffic Server 5.3.2, configured with 10 GB of RAM
> nginx 1.10.2, proxying user requests to Traffic Server
// systemtap
global ts

probe kernel.function("move_addr_to_user")
{
ts["move_addr_to_user",pid()] = gettimeofday_ms()
}
probe kernel.function("move_addr_to_user").return
{
printf("[%d]move_addr_to_user done:%d %s %d\n",gettimeofday_ms(),pid(),execname(),gettimeofday_ms()-ts["move_addr_to_user",pid()])
}
probe kernel.function("copy_to_user")
{
ts["copy_to_user",pid()] = gettimeofday_ms()
}
probe kernel.function("copy_to_user").return
{
printf("[%d]copy_to_user done:%d %s %d\n",gettimeofday_ms(),pid(),execname(),gettimeofday_ms()-ts["copy_to_user",pid()])
}
// output:
[1495630190767] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 0
[1495630191164]copy_to_user done:24145 [ACCEPT 0:8080] 0
[1495630191164] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 0
[1495630192172]copy_to_user done:24145 [ACCEPT 0:8080] 861
[1495630192172] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 861
[1495630192173]copy_to_user done:24145 [ACCEPT 0:8080] 0
[1495630192173] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 0
[1495630192173]copy_to_user done:24145 [ACCEPT 0:8080] 0
[1495630192173] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 0
[1495630192173]copy_to_user done:24145 [ACCEPT 0:8080] 0
free -g
                     total       used       free     shared    buffers     cached
Mem:                    62         55          6          0          0         32
-/+ buffers/cache:                  23         39
Swap:                   15          0         15
top - 20:57:39 up 24 days, 19:26, 2 users, load average: 7.70, 9.43, 9.62
Tasks: 643 total, 1 running, 642 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 1.0%sy, 0.0%ni, 97.5%id, 1.1%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 65560992k total, 58525192k used, 7035800k free, 365084k buffers
Swap: 16777212k total, 0k used, 16777212k free, 33957572k cached

  PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+  COMMAND
24145 traffics  20   0 21.7g  12g  6212 S 24.7 19.3 212:42.85 [ET_NET 0]
22173 root      20   0  677m 325m  1180 S  3.6  0.5   0:41.10 nginx
22161 root      20   0  677m 325m  1184 S  2.6  0.5   0:47.50 nginx
22168 root      20   0  677m 326m  2076 S  2.6  0.5   0:28.31 nginx
22150 root      20   0  677m 325m  1208 S  1.6  0.5   0:42.75 nginx
22165 root      20   0  677m 325m  1200 S  1.6  0.5   0:31.77 nginx
Update:
@employee of the month, thanks for your suggestion. After adding some probes on __do_page_fault, I found that the time is spent in __do_page_fault -> down_read(&mm->mmap_sem);
[1495677639202]copy_to_user done:24145 [ACCEPT 0:8080] 1
[1495677639202] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 1
[1495677639314]copy_to_user done:24145 [ACCEPT 0:8080] 0
[1495677639314] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 0
[1495677641329]do page fault done:24145 [ACCEPT 0:8080] 622
[1495677641329]copy_to_user done:24145 [ACCEPT 0:8080] 622
[1495677641329] 16 16 move_addr_to_user done:24145 [ACCEPT 0:8080] 622
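For reference, the extra timing probe that produced the "do page fault done" lines above was along these lines (a sketch; whether kernel.function("__do_page_fault") is probeable depends on the kernel and installed debuginfo):

```systemtap
global ts

probe kernel.function("__do_page_fault")
{
    ts["__do_page_fault", pid()] = gettimeofday_ms()
}
probe kernel.function("__do_page_fault").return
{
    # same output format as the copy_to_user probe: timestamp, pid, execname, elapsed ms
    printf("[%d]do page fault done:%d %s %d\n", gettimeofday_ms(), pid(), execname(),
           gettimeofday_ms() - ts["__do_page_fault", pid()])
}
```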
@Ricardo Biehl Pasquali thanks for your suggestion. Apache Traffic Server reads/writes the HDDs in separate threads: it has 1 thread accepting connections and 88 threads (8 per HDD) doing blocking reads/writes (of cache content). But I don't understand why blocking reads/writes in other threads could cause such high latency in __do_page_fault().
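One way to test the mmap_sem theory (a sketch under the assumption that the page fault is waiting in down_read(&mm->mmap_sem)): mm->mmap_sem is per-process and shared by all threads, so a long munmap() or madvise() in any of the 88 disk threads, which takes the semaphore for writing, would stall page faults in every other thread, including the accept thread. The probe below times those syscalls process-wide; the 10 ms threshold is arbitrary:

```systemtap
global t

# munmap() (and, on this kernel, most madvise() variants) take
# mm->mmap_sem for writing; a slow call here blocks down_read()
# in __do_page_fault() for every thread of the same process.
probe syscall.munmap, syscall.madvise
{
    t[name, tid()] = gettimeofday_ms()
}
probe syscall.munmap.return, syscall.madvise.return
{
    d = gettimeofday_ms() - t[name, tid()]
    if (d > 10)
        printf("[%d] %s in %s(%d) took %dms\n",
               gettimeofday_ms(), name, execname(), tid(), d)
    delete t[name, tid()]
}
```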
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 16.00 0.00 4.00 0.00 160.00 40.00 0.05 13.50 0.00 13.50 7.75 3.10
sdi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdj 0.00 0.00 6.00 0.00 4326.00 0.00 721.00 0.06 10.50 10.50 0.00 6.83 4.10
sdc 0.00 0.00 2.00 0.00 1472.00 0.00 736.00 0.04 18.50 18.50 0.00 9.50 1.90
sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdg 0.00 0.00 14.00 0.00 10464.00 0.00 747.43 0.04 2.71 2.71 0.00 1.79 2.50
Update: