Best regards,
Kernel 3.10.24
Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0,order=3,oom_score_adj=0 Dec 27 09:19:05 2013 kernel: : [277622.359069] squid cpuset=/ mems_allowed=0 Dec 27 09:19:05 2013 kernel: : [277622.359074] cpu: 9 PID: 15533 Comm: squid Not tainted 3.10.24-1.lsg #1 Dec 27 09:19:05 2013 kernel: : [277622.359076] Hardware name: Intel Thurley/Greencity,BIOS 080016 10/05/2011 Dec 27 09:19:05 2013 kernel: : [277622.359078] 00000003 e377b280 e03c3c38 c06472d6 e03c3c98 c04d2d96 c0a68f84 e377b580 Dec 27 09:19:05 2013 kernel: : [277622.359089] 000042d0 00000003 00000000 e03c3c64 c04abbda e42bd318 00000000 e03c3cf4 Dec 27 09:19:05 2013 kernel: : [277622.359096] 000042d0 00000001 00000247 00000000 e03c3c94 c04d3d5f 00000001 00000042 Dec 27 09:19:05 2013 kernel: : [277622.359105] Call Trace: Dec 27 09:19:05 2013 kernel: : [277622.359116] [<c06472d6>] dump_stack+0x16/0x20 Dec 27 09:19:05 2013 kernel: : [277622.359121] [<c04d2d96>] dump_header+0x66/0x1c0 Dec 27 09:19:05 2013 kernel: : [277622.359127] [<c04abbda>] ? __delayacct_freepages_end+0x3a/0x40 Dec 27 09:19:05 2013 kernel: : [277622.359131] [<c04d3d5f>] ? zone_watermark_ok+0x2f/0x40 Dec 27 09:19:05 2013 kernel: : [277622.359135] [<c04d2f27>] check_panic_on_oom+0x37/0x60 Dec 27 09:19:05 2013 kernel: : [277622.359138] [<c04d36d2>] out_of_memory+0x92/0x250 Dec 27 09:19:05 2013 kernel: : [277622.359144] [<c04dd1fa>] ? wakeup_kswapd+0xda/0x120 Dec 27 09:19:05 2013 kernel: : [277622.359148] [<c04d6cee>] __alloc_pages_nodemask+0x68e/0x6a0 Dec 27 09:19:05 2013 kernel: : [277622.359155] [<c0801c1e>] sk_page_frag_refill+0x7e/0x120 Dec 27 09:19:05 2013 kernel: : [277622.359160] [<c084b8c7>] tcp_sendmsg+0x387/0xbf0 Dec 27 09:19:05 2013 kernel: : [277622.359166] [<c0469a2f>] ? put_prev_task_fair+0x1f/0x350 Dec 27 09:19:05 2013 kernel: : [277622.359173] [<c0ba7d8b>] ? longrun_init+0x2b/0x30 Dec 27 09:19:05 2013 kernel: : [277622.359177] [<c084b540>] ? tcp_tso_segment+0x380/0x380 Dec 27 09:19:05 2013 kernel: : [277622.359182] [<c086d0da>] inet_sendmsg+0x4a/0xa0 Dec 27 09:19:05 2013 kernel: : [277622.359186] [<c07ff3a6>] sock_aio_write+0x116/0x130 Dec 27 09:19:05 2013 kernel: : [277622.359191] [<c0457acc>] ? 
hrtimer_try_to_cancel+0x3c/0xb0 Dec 27 09:19:05 2013 kernel: : [277622.359197] [<c050b208>] do_sync_write+0x68/0xa0 Dec 27 09:19:05 2013 kernel: : [277622.359202] [<c050caa0>] vfs_write+0x190/0x1b0 Dec 27 09:19:05 2013 kernel: : [277622.359206] [<c050cbb3>] SyS_write+0x53/0x80 Dec 27 09:19:05 2013 kernel: : [277622.359211] [<c08f72ba>] sysenter_do_call+0x12/0x22 Dec 27 09:19:05 2013 kernel: : [277622.359213] Mem-Info: Dec 27 09:19:05 2013 kernel: : [277622.359215] DMA per-cpu: Dec 27 09:19:05 2013 kernel: : [277622.359218] cpu 0: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359220] cpu 1: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359222] cpu 2: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359224] cpu 3: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359226] cpu 4: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359228] cpu 5: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359230] cpu 6: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359232] cpu 7: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359234] cpu 8: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359236] cpu 9: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359238] cpu 10: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359240] cpu 11: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359242] cpu 12: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359244] cpu 13: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359246] cpu 14: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359248] cpu 15: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359250] cpu 16: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359253] cpu 17: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359255] cpu 18: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359258] cpu 19: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359260] cpu 20: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359262] cpu 21: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359264] cpu 22: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359266] cpu 23: hi: 0,btch: 1 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359268] Normal per-cpu: Dec 27 09:19:05 2013 kernel: : [277622.359270] cpu 0: hi: 186,btch: 31 usd: 34 Dec 27 09:19:05 2013 kernel: : [277622.359272] cpu 1: hi: 186,btch: 31 usd: 72 Dec 27 09:19:05 2013 kernel: : [277622.359274] cpu 2: hi: 186,btch: 31 usd: 40 Dec 27 09:19:05 2013 kernel: : [277622.359276] cpu 3: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359279] cpu 4: hi: 186,btch: 31 usd: 39 Dec 27 09:19:05 2013 kernel: : [277622.359281] cpu 5: hi: 186,btch: 31 usd: 49 Dec 27 09:19:05 2013 kernel: : [277622.359283] cpu 6: hi: 186,btch: 31 usd: 50 Dec 27 09:19:05 2013 kernel: : [277622.359285] cpu 7: hi: 186,btch: 31 usd: 25 Dec 27 09:19:05 2013 kernel: : [277622.359286] cpu 8: hi: 186,btch: 31 usd: 42 Dec 27 09:19:05 2013 kernel: : [277622.359289] cpu 9: hi: 186,btch: 31 usd: 39 Dec 27 09:19:05 2013 kernel: : [277622.359290] cpu 10: hi: 186,btch: 31 usd: 155 Dec 27 09:19:05 2013 kernel: : [277622.359293] cpu 11: hi: 186,btch: 31 usd: 56 Dec 27 09:19:05 2013 kernel: : [277622.359295] cpu 12: hi: 186,btch: 31 usd: 2 Dec 27 09:19:05 2013 kernel: : [277622.359297] cpu 13: hi: 186,btch: 31 usd: 162 Dec 27 09:19:05 2013 kernel: : [277622.359299] cpu 14: hi: 186,btch: 31 usd: 67 Dec 
27 09:19:05 2013 kernel: : [277622.359301] cpu 15: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359303] cpu 16: hi: 186,btch: 31 usd: 68 Dec 27 09:19:05 2013 kernel: : [277622.359305] cpu 17: hi: 186,btch: 31 usd: 38 Dec 27 09:19:05 2013 kernel: : [277622.359307] cpu 18: hi: 186,btch: 31 usd: 56 Dec 27 09:19:05 2013 kernel: : [277622.359308] cpu 19: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359310] cpu 20: hi: 186,btch: 31 usd: 54 Dec 27 09:19:05 2013 kernel: : [277622.359312] cpu 21: hi: 186,btch: 31 usd: 35 Dec 27 09:19:05 2013 kernel: : [277622.359314] cpu 22: hi: 186,btch: 31 usd: 2 Dec 27 09:19:05 2013 kernel: : [277622.359316] cpu 23: hi: 186,btch: 31 usd: 60 Dec 27 09:19:05 2013 kernel: : [277622.359318] HighMem per-cpu: Dec 27 09:19:05 2013 kernel: : [277622.359320] cpu 0: hi: 186,btch: 31 usd: 32 Dec 27 09:19:05 2013 kernel: : [277622.359322] cpu 1: hi: 186,btch: 31 usd: 52 Dec 27 09:19:05 2013 kernel: : [277622.359324] cpu 2: hi: 186,btch: 31 usd: 9 Dec 27 09:19:05 2013 kernel: : [277622.359326] cpu 3: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359328] cpu 4: hi: 186,btch: 31 usd: 125 Dec 27 09:19:05 2013 kernel: : [277622.359330] cpu 5: hi: 186,btch: 31 usd: 116 Dec 27 09:19:05 2013 kernel: : [277622.359332] cpu 6: hi: 186,btch: 31 usd: 126 Dec 27 09:19:05 2013 kernel: : [277622.359333] cpu 7: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359336] cpu 8: hi: 186,btch: 31 usd: 79 Dec 27 09:19:05 2013 kernel: : [277622.359338] cpu 9: hi: 186,btch: 31 usd: 34 Dec 27 09:19:05 2013 kernel: : [277622.359340] cpu 10: hi: 186,btch: 31 usd: 111 Dec 27 09:19:05 2013 kernel: : [277622.359341] cpu 11: hi: 186,btch: 31 usd: 144 Dec 27 09:19:05 2013 kernel: : [277622.359343] cpu 12: hi: 186,btch: 31 usd: 15 Dec 27 09:19:05 2013 kernel: : [277622.359345] cpu 13: hi: 186,btch: 31 usd: 166 Dec 27 09:19:05 2013 kernel: : [277622.359347] cpu 14: hi: 186,btch: 31 usd: 185 Dec 27 09:19:05 2013 kernel: : [277622.359349] cpu 15: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359351] cpu 16: hi: 186,btch: 31 usd: 58 Dec 27 09:19:05 2013 kernel: : [277622.359353] cpu 17: hi: 186,btch: 31 usd: 122 Dec 27 09:19:05 2013 kernel: : [277622.359356] cpu 18: hi: 186,btch: 31 usd: 170 Dec 27 09:19:05 2013 kernel: : [277622.359358] cpu 19: hi: 186,btch: 31 usd: 0 Dec 27 09:19:05 2013 kernel: : [277622.359360] cpu 20: hi: 186,btch: 31 usd: 30 Dec 27 09:19:05 2013 kernel: : [277622.359362] cpu 21: hi: 186,btch: 31 usd: 33 Dec 27 09:19:05 2013 kernel: : [277622.359364] cpu 22: hi: 186,btch: 31 usd: 28 Dec 27 09:19:05 2013 kernel: : [277622.359366] cpu 23: hi: 186,btch: 31 usd: 44 Dec 27 09:19:05 2013 kernel: : [277622.359371] active_anon:658515 inactive_anon:54399 isolated_anon:0 Dec 27 09:19:05 2013 kernel: : [277622.359371] active_file:1172176 inactive_file:323606 isolated_file:0 Dec 27 09:19:05 2013 kernel: : [277622.359371] unevictable:0 dirty:0 writeback:0 unstable:0 Dec 27 09:19:05 2013 kernel: : [277622.359371] free:6911872 slab_reclaimable:29430 slab_unreclaimable:34726 Dec 27 09:19:05 2013 kernel: : [277622.359371] mapped:45784 shmem:9850 pagetables:107714 bounce:0 Dec 27 09:19:05 2013 kernel: : [277622.359371] free_cma:0 Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB 
slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Dec 27 09:19:05 2013 kernel: : [277622.359384] lowmem_reserve[]: 0 573 36539 36539 Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes Dec 27 09:19:05 2013 kernel: : [277622.359395] lowmem_reserve[]: 0 0 287725 287725 Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Dec 27 09:19:05 2013 kernel: : [277622.359406] lowmem_reserve[]: 0 0 0 0 Dec 27 09:19:05 2013 kernel: : [277622.359410] DMA: 3*4kB (U) 2*8kB (U) 4*16kB (U) 5*32kB (U) 2*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (R) 0*4096kB = 2428kB Dec 27 09:19:05 2013 kernel: : [277622.359422] Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB Dec 27 09:19:05 2013 kernel: : [277622.359435] HighMem: 6672*4kB (M) 74585*8kB (UM) 40828*16kB (UM) 17275*32kB (UM) 3314*64kB (UM) 1126*128kB (UM) 992*256kB (UM) 585*512kB (UM) 225*1024kB (UM) 78*2048kB (UMR) 5957*4096kB (UM) = 27529128kB Dec 27 09:19:05 2013 kernel: : [277622.359452] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Dec 27 09:19:05 2013 kernel: : [277622.359454] 1505509 total pagecache pages Dec 27 09:19:05 2013 kernel: : [277622.359457] 4 pages in swap cache Dec 27 09:19:05 2013 kernel: : [277622.359459] Swap cache stats: add 13,delete 9,find 0/0 Dec 27 09:19:05 2013 kernel: : [277622.359460] Free swap = 35318812kB Dec 27 09:19:05 2013 kernel: : [277622.359462] Total swap = 35318864kB Dec 27 09:19:05 2013 kernel: : [277622.450529] 9699327 pages RAM Dec 27 09:19:05 2013 kernel: : [277622.450532] 9471490 pages HighMem Dec 27 09:19:05 2013 kernel: : [277622.450533] 342749 pages reserved Dec 27 09:19:05 2013 kernel: : [277622.450534] 2864256 pages shared Dec 27 09:19:05 2013 kernel: : [277622.450535] 1501243 pages non-shared Dec 27 09:19:05 2013 kernel: : [277622.450538] Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled Dec 27 09:19:05 2013 kernel: : [277622.450538]
and
# cat /proc/meminfo
MemTotal:       37426312 kB
MemFree:        28328992 kB
Buffers:          94728 kB
Cached:          6216068 kB
SwapCached:           0 kB
Active:          6958572 kB
Inactive:        1815380 kB
Active(anon):    2329152 kB
Inactive(anon):   170252 kB
Active(file):    4629420 kB
Inactive(file):  1645128 kB
Unevictable:          0 kB
Mlocked:              0 kB
HighTotal:      36828872 kB
HighFree:       28076144 kB
LowTotal:         597440 kB
LowFree:          252848 kB
SwapTotal:      35318864 kB
SwapFree:       35318860 kB
Dirty:                0 kB
Writeback:            8 kB
AnonPages:       2463512 kB
Mapped:           162296 kB
Shmem:             36332 kB
Slab:             208676 kB
SReclaimable:     120872 kB
SUnreclaim:        87804 kB
KernelStack:        6320 kB
PageTables:        42280 kB
NFS_Unstable:         0 kB
Bounce:             124 kB
WritebackTmp:         0 kB
CommitLimit:    54032020 kB
Committed_AS:    3191916 kB
VmallocTotal:     122880 kB
VmallocUsed:       27088 kB
VmallocChunk:      29312 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10232 kB
DirectMap2M:      901120 kB
sysctl settings:
vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 32 32
vm.max_map_count = 65530
vm.min_free_kbytes = 3084
vm.mmap_min_addr = 4096
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.nr_pdflush_threads = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 30
vm.user_reserve_kbytes = 131072
vm.vdso_enabled = 1
vm.vfs_cache_pressure = 100
and
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 292370
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 36728
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 292370
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Solution
When I have time, I may come back and re-edit this with a longer explanation.
The "sledgehammer" approach, though, would be to upgrade to a 64-bit OS (this one is 32-bit), because the zones are laid out differently there.
OK, so here I will attempt to explain why you experienced an OOM here. There are a number of factors at play:
> The order size of the request and how the kernel treats certain order sizes.
> The zone being selected.
> The watermarks this zone uses.
> Fragmentation in the zone.
If you look at the OOM output itself, there is clearly plenty of free memory available, yet the OOM killer was invoked. Why?
The order size of the request and how the kernel treats certain order sizes
The kernel allocates memory by order. An "order" is a region of contiguous RAM that must be available for the request to be satisfied. Orders are arranged by powers of two (hence the name "order"): an order-N block is 2^(N+12) bytes, so order 0 is 4096 bytes, order 1 is 8192, order 2 is 16384, and so on.
The kernel has a hard-coded value for what is considered a "high order" (> PAGE_ALLOC_COSTLY_ORDER). This is order 4 and above (64kB or above is a high order).
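For concreteness, here is a minimal Python sketch (mine, not from the original post) of how order numbers map to block sizes and where the PAGE_ALLOC_COSTLY_ORDER cut-off falls; the constant value of 3 is taken from mainline kernels of this era:

    # Sketch: how allocation "orders" map to block sizes on a system with 4 kB pages.
    PAGE_SIZE = 4096                 # 2^12 bytes
    PAGE_ALLOC_COSTLY_ORDER = 3      # orders above this count as "high order"

    def order_size_bytes(order):
        """An order-N block is 2^N contiguous pages, i.e. 2^(N + 12) bytes."""
        return PAGE_SIZE << order

    for order in range(6):
        kind = "high order" if order > PAGE_ALLOC_COSTLY_ORDER else "low order"
        print(f"order {order}: {order_size_bytes(order) // 1024:>3} kB ({kind})")

    # order 0:   4 kB (low order)
    # order 1:   8 kB (low order)
    # order 2:  16 kB (low order)
    # order 3:  32 kB (low order)   <- the request in this OOM report
    # order 4:  64 kB (high order)
    # order 5: 128 kB (high order)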
High orders are satisfied differently from low orders when pages are allocated. A high-order allocation that cannot grab the memory will, on modern kernels:
> Try to run the memory compaction routine to defragment memory.
> Never call the OOM killer to satisfy the request.
Your order size is listed here:
Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0,order=3,oom_score_adj=0
Order 3 is the highest of the low-order requests, and (as you can see) it invokes the OOM killer in an attempt to satisfy it.
Note that most userspace allocations do not use high-order requests. Typically it is the kernel that requires contiguous regions of memory. An exception to this may be when userspace is using huge pages, but that is not the case here.
In your case, the order-3 allocation is called by the kernel wanting to queue a packet into the network stack, which requires a 32kB contiguous allocation to do so.
The zone being selected
The kernel divides your memory into zones. This chopping up is done because on x86 certain regions of memory are only addressable by certain hardware; older hardware, for example, may only be able to address memory in the "DMA" zone. When we want to allocate some memory, a zone is chosen first, and only the free memory from that zone is accounted for when making the allocation decision.
While I am not completely up to speed on the zone selection algorithm, the typical use case is never to allocate from DMA, but usually to select the lowest addressable zone that can satisfy the request.
Lots of zone information is spat out during an OOM, and it can also be gleaned from /proc/zoneinfo.
Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
The zones you have (DMA, Normal and HighMem) indicate a 32-bit platform, because the HighMem zone does not exist on 64-bit. On 64-bit systems, Normal is mapped up to 4GB and beyond, whereas on 32-bit it maps up to 896MB (although in your case the kernel reports managing only a smaller portion of that: managed:587540kB).
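If you want to look at the same per-zone numbers outside of an OOM report, here is a rough sketch (mine, not from the post) that pulls each zone's free/min/low/high page counters out of /proc/zoneinfo; the exact field layout varies a little across kernel versions, so treat the parsing as approximate:

    # Sketch: pull each zone's free/min/low/high page counts out of /proc/zoneinfo.
    # Matches the common "Node N, zone NAME" header followed by indented counter lines.
    import re

    zones = {}
    current = None
    with open("/proc/zoneinfo") as f:
        for line in f:
            header = re.match(r"Node\s+(\d+),\s+zone\s+(\S+)", line)
            if header:
                current = f"node{header.group(1)}/{header.group(2)}"
                zones[current] = {}
            elif current:
                counter = re.match(r"\s+(?:pages\s+)?(free|min|low|high)\s+(\d+)", line)
                if counter:
                    zones[current][counter.group(1)] = int(counter.group(2))  # values are in pages

    for name, stats in zones.items():
        print(name, {k: f"{v * 4} kB" for k, v in stats.items()})  # assuming 4 kB pages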
You can tell where this allocation came from by looking at the first line again: gfp_mask=0x42d0 tells us what type of allocation was attempted. The last byte (0) tells us that this is an allocation from the Normal zone. The gfp flag meanings are located in include/linux/gfp.h.
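As an illustration (my own sketch, using the GFP bit values as they appear in a 3.10-era include/linux/gfp.h; the assignments have changed in later kernels), decoding 0x42d0 looks like this:

    # Sketch: decode gfp_mask=0x42d0 with the __GFP_* bit values from a 3.10-era gfp.h.
    # The low nibble holds the zone modifiers; 0 there means the Normal zone.
    GFP_BITS = {
        0x01: "__GFP_DMA", 0x02: "__GFP_HIGHMEM", 0x04: "__GFP_DMA32", 0x08: "__GFP_MOVABLE",
        0x10: "__GFP_WAIT", 0x20: "__GFP_HIGH", 0x40: "__GFP_IO", 0x80: "__GFP_FS",
        0x100: "__GFP_COLD", 0x200: "__GFP_NOWARN", 0x400: "__GFP_REPEAT", 0x800: "__GFP_NOFAIL",
        0x1000: "__GFP_NORETRY", 0x2000: "__GFP_MEMALLOC", 0x4000: "__GFP_COMP", 0x8000: "__GFP_ZERO",
    }

    def decode_gfp(mask):
        flags = [name for bit, name in GFP_BITS.items() if mask & bit]
        zone = "Normal" if (mask & 0x0f) == 0 else "zone modifier present (DMA/HighMem/DMA32/Movable)"
        return flags, zone

    flags, zone = decode_gfp(0x42d0)
    print(" | ".join(flags))  # __GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_NOWARN | __GFP_COMP
    print("zone:", zone)      # zone: Normal  (i.e. GFP_KERNEL plus __GFP_COMP and __GFP_NOWARN)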
The watermarks this zone uses
When memory is low, actions to reclaim it are triggered by the watermarks. They show up here: min:3044kB low:3804kB high:4564kB. If free memory reaches "low", swapping will occur until we pass the "high" threshold. If free memory reaches "min", we need to kill things to free up memory via the OOM killer.
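Put as a tiny sketch (using the Normal-zone numbers from this report; this is only the rule of thumb described above, the real logic lives in kswapd and the page allocator):

    # Sketch: what the three per-zone watermarks mean for reclaim,
    # using the Normal-zone values from this report (in kB).
    WMARK = {"min": 3044, "low": 3804, "high": 4564}

    def reclaim_action(free_kb):
        if free_kb <= WMARK["min"]:
            return "below min: direct reclaim / OOM-killer territory"
        if free_kb <= WMARK["low"]:
            return "below low: kswapd reclaims (swaps) until free memory passes 'high'"
        return "above low: no reclaim needed"

    for free_kb in (5000, 3500, 800):
        print(free_kb, "->", reclaim_action(free_kb))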
Fragmentation in the zone
To see whether a request for a particular order of memory can be satisfied, the kernel accounts for how many free pages are available at each order. This is readable in /proc/buddyinfo. OOM-killer reports additionally spit out the buddyinfo, as seen here:
Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB
For a memory allocation to be satisfied, there must be free memory available at the requested order size or higher. Having lots of free data in the low orders and none in the higher orders means your memory is fragmented. If you request a very high-order allocation, it can fail (even with lots of free memory) because there are no high-order pages available. The kernel can defragment memory by moving lots of low-order pages around so that they do not leave gaps in the addressable RAM space; this is called memory compaction.
OOM killer invoked? Why?
So, taking these factors into consideration, we can say the following:
> A contiguous allocation of 32kB was attempted, from the Normal zone.
> There was plenty of free memory in the zone selected.
> There was order 3, 5 and 6 memory available: 13*32kB (MR), 1*128kB (R), 1*256kB (R)
So, if free memory was available and other orders could satisfy the request, what happened?
Well, there is more to allocating from an order than just checking the amount of free memory available at that order or higher. The kernel effectively subtracts all memory in the lower orders from the total free line and then performs the min watermark check on what is left.
What happens in your case is that, to check the free memory available for this allocation in that zone, we must do:
115000 - (5360*4) - (3667*8) - (3964*16) = 800
This amount of free memory is checked against the min watermark, which is 3044. So, technically speaking, you have no free memory left to perform the allocation you requested. And this is why you invoked the OOM killer.
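Here is that accounting as a small sketch, fed with the counts from the "Normal:" buddy line above. Note that this is the simplified check as described here; the kernel's __zone_watermark_ok also scales the watermark down per order and factors in lowmem_reserve:

    # Sketch: the simplified free-memory check described above, using the counts
    # from the "Normal:" buddy line in this OOM report.
    free_blocks = {0: 5360, 1: 3667, 2: 3964, 3: 13, 4: 0, 5: 1, 6: 1,
                   7: 0, 8: 0, 9: 0, 10: 0}               # order -> number of free blocks
    block_kb = {order: 4 << order for order in free_blocks}  # order -> block size in kB

    total_free_kb = sum(n * block_kb[o] for o, n in free_blocks.items())
    requested_order = 3
    min_watermark_kb = 3044

    # Subtract everything sitting in orders below the requested one...
    usable_kb = total_free_kb - sum(n * block_kb[o] for o, n in free_blocks.items()
                                    if o < requested_order)
    print(total_free_kb)                 # 115000
    print(usable_kb)                     # 800
    print(usable_kb > min_watermark_kb)  # False -> the check fails and the OOM path is taken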
Fixing it
There are two fixes. Upgrading to 64-bit changes your zone partitioning so that "Normal" spans 4GB up to 36GB, meaning you will not end up "defaulting" your memory allocations into a zone that can get so heavily fragmented. It is not that you would have more addressable memory to fix this problem (you are already using PAE); it is merely that the zone you select from has more addressable memory.
The second way (which I have never tested) is to try to get the kernel to compact your memory more aggressively.
If you change the value of vm.extfrag_threshold from 500 to 100, it is more likely to compact memory in an attempt to honour a high-order allocation. I have never messed with this value before, though; it will also depend on your fragmentation index, which is available in /sys/kernel/debug/extfrag/extfrag_index. I do not have a box at the moment with a new enough kernel to see what that shows to offer more than this.
Alternatively, you could run some kind of cron job (this is horribly, horribly ugly) to manually compact memory yourself by writing to /proc/sys/vm/compact_memory.
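If you do go that route, a minimal sketch of such a script might look like the following (run as root; /proc/sys/vm/compact_memory only exists on kernels built with compaction support, and lowering vm.extfrag_threshold is optional):

    #!/usr/bin/env python3
    # Sketch of the "horribly ugly" periodic-compaction idea: ask the kernel to compact
    # all zones by writing 1 to /proc/sys/vm/compact_memory. Must run as root, and the
    # file only exists on kernels built with compaction support.
    import sys

    COMPACT = "/proc/sys/vm/compact_memory"
    EXTFRAG = "/proc/sys/vm/extfrag_threshold"   # optional: make compaction more eager

    def write_sysctl(path, value):
        try:
            with open(path, "w") as f:
                f.write(f"{value}\n")
            print(f"wrote {value} to {path}")
        except OSError as e:
            sys.exit(f"failed to write {path}: {e}")

    if __name__ == "__main__":
        write_sysctl(EXTFRAG, 100)   # lower threshold (default 500) => compact more readily
        write_sysctl(COMPACT, 1)     # trigger a manual compaction pass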
In all honesty, though, I do not think there is a way to tune the system to avoid this problem entirely; it is in the nature of how the memory allocator works. Changing the architecture of the platform you use is probably the only fundamental fix.