* 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
       [not found] <alpine.DEB.2.00.0906161203160.27742@p34.internal.lan>
@ 2009-06-16 16:06 ` Justin Piszcz
  2009-06-16 20:19   ` Michael Tokarev
  2009-06-17 19:44   ` [patch] ipv4: don't warn about skb ack allocation failures David Rientjes
  2009-06-22 16:08 ` 2.6.30: nfsd: page allocation failure - nfsd or kernel problem? (again with 2.6.30) Justin Piszcz
  1 sibling, 2 replies; 33+ messages in thread
From: Justin Piszcz @ 2009-06-16 16:06 UTC (permalink / raw)
  To: linux-kernel

Package: nfs-kernel-server
Version: 1.1.6-1
Distribution: Debian Testing
Architecture: 64-bit

[6042655.755870] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
[6042655.755872] Call Trace:
[6042655.755874]  <IRQ>  [<ffffffff802850fd>] __alloc_pages_internal+0x3dd/0x4e0
[6042655.755885]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
[6042655.755887]  [<ffffffff802a76db>] __kmalloc+0xdb/0xe0
[6042655.755891]  [<ffffffff8059a98d>] __alloc_skb+0x6d/0x150
[6042655.755893]  [<ffffffff8059b727>] __netdev_alloc_skb+0x17/0x40
[6042655.755897]  [<ffffffff804d8c8b>] e1000_alloc_rx_buffers+0x23b/0x2c0
[6042655.755899]  [<ffffffff804d8fbd>] e1000_clean_rx_irq+0x25d/0x3a0
[6042655.755901]  [<ffffffff804dad70>] e1000_clean+0x180/0x2d0
[6042655.755904]  [<ffffffff8059f5a7>] net_rx_action+0x87/0x130
[6042655.755907]  [<ffffffff80259cd3>] __do_softirq+0x93/0x160
[6042655.755910]  [<ffffffff8022c9fc>] call_softirq+0x1c/0x30
[6042655.755912]  [<ffffffff8022e455>] do_softirq+0x35/0x80
[6042655.755914]  [<ffffffff8022e523>] do_IRQ+0x83/0x110
[6042655.755917]  [<ffffffff8022c2d3>] ret_from_intr+0x0/0xa
[6042655.755918]  <EOI>  [<ffffffff80632190>] _spin_lock+0x10/0x20
[6042655.755924]  [<ffffffff802bb2fc>] d_find_alias+0x1c/0x40
[6042655.755926]  [<ffffffff802bd96d>] d_obtain_alias+0x4d/0x140
[6042655.755930]  [<ffffffff8033ffd3>] exportfs_decode_fh+0x63/0x2a0
[6042655.755932]  [<ffffffff80343970>] nfsd_acceptable+0x0/0x110
[6042655.755935]  [<ffffffff8061e74a>] cache_check+0x4a/0x4d0
[6042655.755937]  [<ffffffff80349437>] exp_find_key+0x57/0xe0
[6042655.755941]  [<ffffffff80592a35>] sock_recvmsg+0xd5/0x110
[6042655.755943]  [<ffffffff80349552>] exp_find+0x92/0xa0
[6042655.755945]  [<ffffffff80343e59>] fh_verify+0x369/0x680
[6042655.755948]  [<ffffffff8024add9>] check_preempt_wakeup+0xf9/0x120
[6042655.755950]  [<ffffffff803460be>] nfsd_open+0x2e/0x180
[6042655.755952]  [<ffffffff80346574>] nfsd_write+0xc4/0x120
[6042655.755955]  [<ffffffff8034dac0>] nfsd3_proc_write+0xb0/0x150
[6042655.755957]  [<ffffffff8034040a>] nfsd_dispatch+0xba/0x270
[6042655.755960]  [<ffffffff80615a1e>] svc_process+0x49e/0x800
[6042655.755962]  [<ffffffff8024dc80>] default_wake_function+0x0/0x10
[6042655.755965]  [<ffffffff80631fd7>] __down_read+0x17/0xae
[6042655.755966]  [<ffffffff80340b79>] nfsd+0x199/0x2b0
[6042655.755968]  [<ffffffff803409e0>] nfsd+0x0/0x2b0
[6042655.755971]  [<ffffffff802691d7>] kthread+0x47/0x90
[6042655.755973]  [<ffffffff8022c8fa>] child_rip+0xa/0x20
[6042655.755975]  [<ffffffff80269190>] kthread+0x0/0x90
[6042655.755977]  [<ffffffff8022c8f0>] child_rip+0x0/0x20
[6042655.755979] Mem-Info:
[6042655.755980] DMA per-cpu:
[6042655.755982] CPU    0: hi:    0, btch:   1 usd:   0
[6042655.755983] CPU    1: hi:    0, btch:   1 usd:   0
[6042655.755985] CPU    2: hi:    0, btch:   1 usd:   0
[6042655.755986] CPU    3: hi:    0, btch:   1 usd:   0
[6042655.755987] DMA32 per-cpu:
[6042655.755988] CPU    0: hi:  186, btch:  31 usd: 168
[6042655.755990] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.755991] CPU    2: hi:  186, btch:  31 usd: 161
[6042655.755992] CPU    3: hi:  186, btch:  31 usd: 221
[6042655.755993] Normal per-cpu:
[6042655.755995] CPU    0: hi:  186, btch:  31 usd: 156
[6042655.755996] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.755997] CPU    2: hi:  186, btch:  31 usd: 187
[6042655.755998] CPU    3: hi:  186, btch:  31 usd: 202
[6042655.756001] Active_anon:108072 active_file:103321 inactive_anon:31621
[6042655.756002]  inactive_file:984722 unevictable:0 dirty:71104 writeback:0 unstable:0
[6042655.756003]  free:8659 slab:746182 mapped:8842 pagetables:5374 bounce:0
[6042655.756005] DMA free:9736kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8744kB pages_scanned:0 all_unreclaimable? yes
[6042655.756008] lowmem_reserve[]: 0 3246 7980 7980
[6042655.756012] DMA32 free:21420kB min:6656kB low:8320kB high:9984kB active_anon:52420kB inactive_anon:38552kB active_file:146252kB inactive_file:1651512kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[6042655.756014] lowmem_reserve[]: 0 0 4734 4734
[6042655.756018] Normal free:3480kB min:9708kB low:12132kB high:14560kB active_anon:379868kB inactive_anon:87932kB active_file:267032kB inactive_file:2287376kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[6042655.756020] lowmem_reserve[]: 0 0 0 0
[6042655.756023] DMA: 4*4kB 5*8kB 3*16kB 3*32kB 1*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9736kB
[6042655.756030] DMA32: 3123*4kB 77*8kB 3*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21188kB
[6042655.756036] Normal: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3436kB
[6042655.756042] 1090130 total pagecache pages
[6042655.756044] 2059 pages in swap cache
[6042655.756046] Swap cache stats: add 125946, delete 123887, find 3279355/3285565
[6042655.756047] Free swap  = 16734964kB
[6042655.756048] Total swap = 16787768kB
[6042655.756125] 2277376 pages RAM
[6042655.756125] 252195 pages reserved
[6042655.756125] 790472 pages shared
[6042655.756125] 1269664 pages non-shared
[6042655.794633] nfsd: page allocation failure. order:0, mode:0x20
[6042655.794637] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
[6042655.794638] Call Trace:
[6042655.794640]  <IRQ>  [<ffffffff802850fd>] __alloc_pages_internal+0x3dd/0x4e0
[6042655.794649]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
[6042655.794652]  [<ffffffff802a7085>] kmem_cache_alloc+0x95/0xa0
[6042655.794655]  [<ffffffff8059a969>] __alloc_skb+0x49/0x150
[6042655.794658]  [<ffffffff805dee06>] tcp_send_ack+0x26/0x120
[6042655.794660]  [<ffffffff805dcbd2>] tcp_rcv_established+0x7a2/0x920
[6042655.794663]  [<ffffffff805e417d>] tcp_v4_do_rcv+0xdd/0x210
[6042655.794665]  [<ffffffff805e4926>] tcp_v4_rcv+0x676/0x710
[6042655.794668]  [<ffffffff805c6a5c>] ip_local_deliver_finish+0x8c/0x160
[6042655.794670]  [<ffffffff805c6551>] ip_rcv_finish+0x191/0x330
[6042655.794672]  [<ffffffff805c6936>] ip_rcv+0x246/0x2e0
[6042655.794676]  [<ffffffff804d8e74>] e1000_clean_rx_irq+0x114/0x3a0
[6042655.794678]  [<ffffffff804dad70>] e1000_clean+0x180/0x2d0
[6042655.794681]  [<ffffffff8059f5a7>] net_rx_action+0x87/0x130
[6042655.794683]  [<ffffffff80259cd3>] __do_softirq+0x93/0x160
[6042655.794687]  [<ffffffff8022c9fc>] call_softirq+0x1c/0x30
[6042655.794689]  [<ffffffff8022e455>] do_softirq+0x35/0x80
[6042655.794691]  [<ffffffff8022e523>] do_IRQ+0x83/0x110
[6042655.794693]  [<ffffffff8022c2d3>] ret_from_intr+0x0/0xa
[6042655.794694]  <EOI>  [<ffffffff80632190>] _spin_lock+0x10/0x20
[6042655.794700]  [<ffffffff802bb2fc>] d_find_alias+0x1c/0x40
[6042655.794703]  [<ffffffff802bd96d>] d_obtain_alias+0x4d/0x140
[6042655.794706]  [<ffffffff8033ffd3>] exportfs_decode_fh+0x63/0x2a0
[6042655.794708]  [<ffffffff80343970>] nfsd_acceptable+0x0/0x110
[6042655.794711]  [<ffffffff8061e74a>] cache_check+0x4a/0x4d0
[6042655.794714]  [<ffffffff80349437>] exp_find_key+0x57/0xe0
[6042655.794717]  [<ffffffff80592a35>] sock_recvmsg+0xd5/0x110
[6042655.794719]  [<ffffffff80349552>] exp_find+0x92/0xa0
[6042655.794721]  [<ffffffff80343e59>] fh_verify+0x369/0x680
[6042655.794724]  [<ffffffff8024add9>] check_preempt_wakeup+0xf9/0x120
[6042655.794726]  [<ffffffff803460be>] nfsd_open+0x2e/0x180
[6042655.794728]  [<ffffffff80346574>] nfsd_write+0xc4/0x120
[6042655.794730]  [<ffffffff8034dac0>] nfsd3_proc_write+0xb0/0x150
[6042655.794732]  [<ffffffff8034040a>] nfsd_dispatch+0xba/0x270
[6042655.794736]  [<ffffffff80615a1e>] svc_process+0x49e/0x800
[6042655.794738]  [<ffffffff8024dc80>] default_wake_function+0x0/0x10
[6042655.794740]  [<ffffffff80631fd7>] __down_read+0x17/0xae
[6042655.794742]  [<ffffffff80340b79>] nfsd+0x199/0x2b0
[6042655.794743]  [<ffffffff803409e0>] nfsd+0x0/0x2b0
[6042655.794747]  [<ffffffff802691d7>] kthread+0x47/0x90
[6042655.794749]  [<ffffffff8022c8fa>] child_rip+0xa/0x20
[6042655.794751]  [<ffffffff80269190>] kthread+0x0/0x90
[6042655.794753]  [<ffffffff8022c8f0>] child_rip+0x0/0x20
[6042655.794754] Mem-Info:
[6042655.794755] DMA per-cpu:
[6042655.794757] CPU    0: hi:    0, btch:   1 usd:   0
[6042655.794758] CPU    1: hi:    0, btch:   1 usd:   0
[6042655.794760] CPU    2: hi:    0, btch:   1 usd:   0
[6042655.794761] CPU    3: hi:    0, btch:   1 usd:   0
[6042655.794762] DMA32 per-cpu:
[6042655.794763] CPU    0: hi:  186, btch:  31 usd: 168
[6042655.794765] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.794766] CPU    2: hi:  186, btch:  31 usd: 161
[6042655.794767] CPU    3: hi:  186, btch:  31 usd: 221
[6042655.794768] Normal per-cpu:
[6042655.794770] CPU    0: hi:  186, btch:  31 usd: 156
[6042655.794771] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.794772] CPU    2: hi:  186, btch:  31 usd: 187
[6042655.794773] CPU    3: hi:  186, btch:  31 usd: 202
[6042655.794776] Active_anon:108072 active_file:103321 inactive_anon:31621
[6042655.794777]  inactive_file:984722 unevictable:0 dirty:71104 writeback:0 unstable:0
[6042655.794778]  free:8659 slab:746182 mapped:8842 pagetables:5374 bounce:0
[6042655.794780] DMA free:9736kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8744kB pages_scanned:0 all_unreclaimable? yes
[6042655.794783] lowmem_reserve[]: 0 3246 7980 7980
[6042655.794787] DMA32 free:21420kB min:6656kB low:8320kB high:9984kB active_anon:52420kB inactive_anon:38552kB active_file:146252kB inactive_file:1651512kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[6042655.794789] lowmem_reserve[]: 0 0 4734 4734
[6042655.794793] Normal free:3480kB min:9708kB low:12132kB high:14560kB active_anon:379868kB inactive_anon:87932kB active_file:267032kB inactive_file:2287376kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[6042655.794795] lowmem_reserve[]: 0 0 0 0
[6042655.794798] DMA: 4*4kB 5*8kB 3*16kB 3*32kB 1*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9736kB
[6042655.794805] DMA32: 3123*4kB 77*8kB 3*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21188kB
[6042655.794811] Normal: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3436kB
[6042655.794818] 1090130 total pagecache pages
[6042655.794819] 2059 pages in swap cache
[6042655.794821] Swap cache stats: add 125946, delete 123887, find 3279355/3285565
[6042655.794822] Free swap  = 16734964kB
[6042655.794823] Total swap = 16787768kB
[6042655.795578] 2277376 pages RAM
[6042655.795578] 252195 pages reserved
[6042655.795578] 790472 pages shared
[6042655.795578] 1269664 pages non-shared
[6042655.828540] nfsd: page allocation failure. order:0, mode:0x20
[6042655.828544] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
[6042655.828545] Call Trace:
[6042655.828547]  <IRQ>  [<ffffffff802850fd>] __alloc_pages_internal+0x3dd/0x4e0
[6042655.828555]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
[6042655.828557]  [<ffffffff802a7085>] kmem_cache_alloc+0x95/0xa0
[6042655.828561]  [<ffffffff8059a969>] __alloc_skb+0x49/0x150
[6042655.828564]  [<ffffffff8059b727>] __netdev_alloc_skb+0x17/0x40
[6042655.828567]  [<ffffffff804d8c8b>] e1000_alloc_rx_buffers+0x23b/0x2c0
[6042655.828570]  [<ffffffff804d8fbd>] e1000_clean_rx_irq+0x25d/0x3a0
[6042655.828572]  [<ffffffff804dad70>] e1000_clean+0x180/0x2d0
[6042655.828574]  [<ffffffff8059f5a7>] net_rx_action+0x87/0x130
[6042655.828578]  [<ffffffff80259cd3>] __do_softirq+0x93/0x160
[6042655.828581]  [<ffffffff8022c9fc>] call_softirq+0x1c/0x30
[6042655.828583]  [<ffffffff8022e455>] do_softirq+0x35/0x80
[6042655.828585]  [<ffffffff8022e523>] do_IRQ+0x83/0x110
[6042655.828587]  [<ffffffff8022c2d3>] ret_from_intr+0x0/0xa
[6042655.828589]  <EOI>  [<ffffffff80632190>] _spin_lock+0x10/0x20
[6042655.828595]  [<ffffffff802bb2fc>] d_find_alias+0x1c/0x40
[6042655.828598]  [<ffffffff802bd96d>] d_obtain_alias+0x4d/0x140
[6042655.828601]  [<ffffffff8033ffd3>] exportfs_decode_fh+0x63/0x2a0
[6042655.828604]  [<ffffffff80343970>] nfsd_acceptable+0x0/0x110
[6042655.828606]  [<ffffffff8061e74a>] cache_check+0x4a/0x4d0
[6042655.828609]  [<ffffffff80349437>] exp_find_key+0x57/0xe0
[6042655.828612]  [<ffffffff80592a35>] sock_recvmsg+0xd5/0x110
[6042655.828614]  [<ffffffff80349552>] exp_find+0x92/0xa0
[6042655.828616]  [<ffffffff80343e59>] fh_verify+0x369/0x680
[6042655.828619]  [<ffffffff8024add9>] check_preempt_wakeup+0xf9/0x120
[6042655.828622]  [<ffffffff803460be>] nfsd_open+0x2e/0x180
[6042655.828623]  [<ffffffff80346574>] nfsd_write+0xc4/0x120
[6042655.828626]  [<ffffffff8034dac0>] nfsd3_proc_write+0xb0/0x150
[6042655.828628]  [<ffffffff8034040a>] nfsd_dispatch+0xba/0x270
[6042655.828631]  [<ffffffff80615a1e>] svc_process+0x49e/0x800
[6042655.828634]  [<ffffffff8024dc80>] default_wake_function+0x0/0x10
[6042655.828636]  [<ffffffff80631fd7>] __down_read+0x17/0xae
[6042655.828638]  [<ffffffff80340b79>] nfsd+0x199/0x2b0
[6042655.828639]  [<ffffffff803409e0>] nfsd+0x0/0x2b0
[6042655.828643]  [<ffffffff802691d7>] kthread+0x47/0x90
[6042655.828645]  [<ffffffff8022c8fa>] child_rip+0xa/0x20
[6042655.828647]  [<ffffffff80269190>] kthread+0x0/0x90
[6042655.828649]  [<ffffffff8022c8f0>] child_rip+0x0/0x20
[6042655.828650] Mem-Info:
[6042655.828651] DMA per-cpu:
[6042655.828653] CPU    0: hi:    0, btch:   1 usd:   0
[6042655.828655] CPU    1: hi:    0, btch:   1 usd:   0
[6042655.828656] CPU    2: hi:    0, btch:   1 usd:   0
[6042655.828657] CPU    3: hi:    0, btch:   1 usd:   0
[6042655.828658] DMA32 per-cpu:
[6042655.828659] CPU    0: hi:  186, btch:  31 usd: 168
[6042655.828661] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.828662] CPU    2: hi:  186, btch:  31 usd: 161
[6042655.828663] CPU    3: hi:  186, btch:  31 usd: 221
[6042655.828665] Normal per-cpu:
[6042655.828666] CPU    0: hi:  186, btch:  31 usd: 156
[6042655.828667] CPU    1: hi:  186, btch:  31 usd:  30
[6042655.828668] CPU    2: hi:  186, btch:  31 usd: 187
[6042655.828670] CPU    3: hi:  186, btch:  31 usd: 202
[6042655.828672] Active_anon:108072 active_file:103321 inactive_anon:31621
[6042655.828673]  inactive_file:984722 unevictable:0 dirty:71104 writeback:0 unstable:0
[6042655.828674]  free:8659 slab:746182 mapped:8842 pagetables:5374 bounce:0
[6042655.828677] DMA free:9736kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8744kB pages_scanned:0 all_unreclaimable? yes
[6042655.828679] lowmem_reserve[]: 0 3246 7980 7980
[6042655.828683] DMA32 free:21420kB min:6656kB low:8320kB high:9984kB active_anon:52420kB inactive_anon:38552kB active_file:146252kB inactive_file:1651512kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[6042655.828685] lowmem_reserve[]: 0 0 4734 4734
[6042655.828689] Normal free:3480kB min:9708kB low:12132kB high:14560kB active_anon:379868kB inactive_anon:87932kB active_file:267032kB inactive_file:2287376kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[6042655.828692] lowmem_reserve[]: 0 0 0 0
[6042655.828694] DMA: 4*4kB 5*8kB 3*16kB 3*32kB 1*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9736kB
[6042655.828701] DMA32: 3123*4kB 77*8kB 3*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21188kB
[6042655.828707] Normal: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3436kB
[6042655.828714] 1090130 total pagecache pages
[6042655.828715] 2059 pages in swap cache
[6042655.828717] Swap cache stats: add 125946, delete 123887, find 3279355/3285565
[6042655.828718] Free swap  = 16734964kB
[6042655.828719] Total swap = 16787768kB
[6042655.830324] 2277376 pages RAM
[6042655.830324] 252195 pages reserved
[6042655.830324] 790472 pages shared
[6042655.830324] 1269089 pages non-shared



* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-16 16:06 ` 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem? Justin Piszcz
@ 2009-06-16 20:19   ` Michael Tokarev
  2009-06-17  8:43     ` Michael Tokarev
  2009-06-17 19:44   ` [patch] ipv4: don't warn about skb ack allocation failures David Rientjes
  1 sibling, 1 reply; 33+ messages in thread
From: Michael Tokarev @ 2009-06-16 20:19 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel

Justin Piszcz wrote:
> Package: nfs-kernel-server
> Version: 1.1.6-1
> Distribution: Debian Testing
> Architecture: 64-bit
> 
> [6042655.755870] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
> [6042655.755872] Call Trace:
> [6042655.755874]  <IRQ>  [<ffffffff802850fd>]  __alloc_pages_internal+0x3dd/0x4e0
> [6042655.755885]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
> [6042655.755887]  [<ffffffff802a76db>] __kmalloc+0xdb/0xe0

Was about to send this same report.

The server does seem to keep serving files, but only after quite a
long delay.

This happens after a massive amount of writes.  2.6.29.4 does the
same thing.  Here it is, for comparison:

Jun 13 17:06:42 gnome vmunix: nfsd: page allocation failure. order:0, mode:0x20
Jun 13 17:06:42 gnome vmunix: Pid: 17812, comm: nfsd Tainted: G        W  2.6.29-x86-64 #2.6.29.4
Jun 13 17:06:42 gnome vmunix: Call Trace:
Jun 13 17:06:42 gnome vmunix:  <IRQ>  [<ffffffff8029559d>] __alloc_pages_internal+0x3fd/0x500
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802bd4c3>] cache_alloc_refill+0x313/0x5c0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802bd873>] __kmalloc+0x103/0x110
Jun 13 17:06:42 gnome vmunix:  [<ffffffff803dca6d>] __alloc_skb+0x6d/0x150
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8040b550>] ip_local_deliver_finish+0x0/0x2a0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff803dd7f7>] __netdev_alloc_skb+0x17/0x40
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa049c42c>] rtl8169_rx_fill+0xcc/0x230 [r8169]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa049c95e>] rtl8169_rx_interrupt+0x3ce/0x5a0 [r8169]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa00064c2>] scsi_run_queue+0xd2/0x3b0 [scsi_mod]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa04a009b>] rtl8169_poll+0x3b/0x250 [r8169]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff803e207c>] net_rx_action+0xfc/0x1c0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802511ab>] __do_softirq+0x9b/0x140
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80212d1c>] call_softirq+0x1c/0x30
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802149fd>] do_softirq+0x4d/0x90
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80250e55>] irq_exit+0x75/0x90
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80214c63>] do_IRQ+0x83/0x110
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80212493>] ret_from_intr+0x0/0x29
Jun 13 17:06:42 gnome vmunix:  <EOI>  [<ffffffff8029afc3>] shrink_list+0x273/0x690
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802100a8>] __switch_to+0x3f8/0x4a0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802468da>] finish_task_switch+0x2a/0xe0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046d8c5>] thread_return+0x3d/0x6d8
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8029b64b>] shrink_zone+0x26b/0x380
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046f91e>] _spin_lock_irqsave+0x2e/0x40
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8029c658>] try_to_free_pages+0x328/0x3e0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046e0f3>] schedule_timeout+0x53/0xd0
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802994c0>] isolate_pages_global+0x0/0x280
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80261e90>] autoremove_wake_function+0x0/0x30
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802953cd>] __alloc_pages_internal+0x22d/0x500
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8028ef86>] grab_cache_page_write_begin+0x96/0xe0
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa022ea46>] ext4_da_write_begin+0x116/0x230 [ext4]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8028fae8>] generic_file_buffered_write+0x128/0x320
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802d50da>] file_update_time+0x11a/0x140
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046fa25>] _spin_lock+0x5/0x10
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80290188>] __generic_file_aio_write_nolock+0x278/0x480
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046fb41>] _spin_lock_bh+0x11/0x20
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80290d04>] generic_file_aio_write+0x64/0xe0
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa0229d65>] ext4_file_write+0x55/0x180 [ext4]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa0229d10>] ext4_file_write+0x0/0x180 [ext4]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802c09cb>] do_sync_readv_writev+0xcb/0x110
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa03875f0>] find_acceptable_alias+0x20/0x110 [exportfs]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80261e90>] autoremove_wake_function+0x0/0x30
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802c07f6>] rw_copy_check_uvector+0x86/0x140
Jun 13 17:06:42 gnome vmunix:  [<ffffffff802c1142>] do_readv_writev+0xe2/0x230
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa0214f2f>] jbd2_journal_stop+0x16f/0x2a0 [jbd2]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa0390c89>] nfsd_vfs_write+0xc9/0x410 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa039167d>] nfsd_open+0x14d/0x1f0 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa0391ad4>] nfsd_write+0x114/0x120 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa039a0f0>] nfsd3_proc_write+0xb0/0x150 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa038b27a>] nfsd_dispatch+0xba/0x270 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa02e3efb>] svc_process+0x4ab/0x810 [sunrpc]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80244640>] default_wake_function+0x0/0x10
Jun 13 17:06:42 gnome vmunix:  [<ffffffff8046f879>] __down_read+0xb9/0xc4
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa038b9f4>] nfsd+0x184/0x2b0 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa038b870>] nfsd+0x0/0x2b0 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffffa038b870>] nfsd+0x0/0x2b0 [nfsd]
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80261a57>] kthread+0x47/0x90
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80212c1a>] child_rip+0xa/0x20
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80261a10>] kthread+0x0/0x90
Jun 13 17:06:42 gnome vmunix:  [<ffffffff80212c10>] child_rip+0x0/0x20
Jun 13 17:06:42 gnome vmunix: Mem-Info:
Jun 13 17:06:42 gnome vmunix: DMA per-cpu:
Jun 13 17:06:42 gnome vmunix: CPU    0: hi:    0, btch:   1 usd:   0
Jun 13 17:06:42 gnome vmunix: DMA32 per-cpu:
Jun 13 17:06:42 gnome vmunix: CPU    0: hi:  186, btch:  31 usd: 170
Jun 13 17:06:42 gnome vmunix: Active_anon:4641 active_file:35865 inactive_anon:16138
Jun 13 17:06:42 gnome vmunix:  inactive_file:417340 unevictable:451 dirty:1330 writeback:13820 unstable:0
Jun 13 17:06:42 gnome vmunix:  free:2460 slab:16669 mapped:3659 pagetables:304 bounce:0
Jun 13 17:06:42 gnome vmunix: DMA free:7760kB min:24kB low:28kB high:36kB active_anon:0kB inactive_anon:84kB active_file:760kB inactive_file
Jun 13 17:06:42 gnome vmunix: lowmem_reserve[]: 0 1938 1938 1938
Jun 13 17:06:42 gnome vmunix: DMA32 free:2080kB min:5620kB low:7024kB high:8428kB active_anon:18564kB inactive_anon:64468kB active_file:1427
Jun 13 17:06:42 gnome vmunix: lowmem_reserve[]: 0 0 0 0
Jun 13 17:06:42 gnome vmunix: DMA: 6*4kB 7*8kB 6*16kB 9*32kB 6*64kB 4*128kB 3*256kB 1*512kB 1*1024kB 0*2048kB 1*4096kB = 7760kB
Jun 13 17:06:42 gnome vmunix: DMA32: 1*4kB 1*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 2060kB
Jun 13 17:06:42 gnome vmunix: 454746 total pagecache pages
Jun 13 17:06:42 gnome vmunix: 570 pages in swap cache
Jun 13 17:06:42 gnome vmunix: Swap cache stats: add 3988, delete 3418, find 355269/355411
Jun 13 17:06:42 gnome vmunix: Free swap  = 4185552kB
Jun 13 17:06:42 gnome vmunix: Total swap = 4192956kB
Jun 13 17:06:42 gnome vmunix: 507360 pages RAM
Jun 13 17:06:42 gnome vmunix: 9169 pages reserved
Jun 13 17:06:42 gnome vmunix: 277152 pages shared
Jun 13 17:06:42 gnome vmunix: 222889 pages non-shared

No idea why it is tainted - probably the r8169 lockups.
The same page allocation failure happens on a freshly
booted kernel too (untainted) - after about half an
hour of constant writing to the same file (I was tarring
a large filesystem to the remote server), on a 2GB machine
which does nothing else.

Thanks.

/mjt

> ...



* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-16 20:19   ` Michael Tokarev
@ 2009-06-17  8:43     ` Michael Tokarev
  2009-06-17  9:43       ` Justin Piszcz
  0 siblings, 1 reply; 33+ messages in thread
From: Michael Tokarev @ 2009-06-17  8:43 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel

Michael Tokarev wrote:
> Justin Piszcz wrote:
>> Package: nfs-kernel-server
>> Version: 1.1.6-1
>> Distribution: Debian Testing
>> Architecture: 64-bit
>>
>> [6042655.755870] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
>> [6042655.755872] Call Trace:
>> [6042655.755874]  <IRQ>  [<ffffffff802850fd>]  
>> __alloc_pages_internal+0x3dd/0x4e0
>> [6042655.755885]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
>> [6042655.755887]  [<ffffffff802a76db>] __kmalloc+0xdb/0xe0
> 
> Was about to send this same report.
> 
> The thing continues servicing files it seems, but after some quite
> good delay.
> 
> This happens after massive amount of writes.  2.6.29.4 does the
> same thing.  Here it is, for comparison:
> 
> Jun 13 17:06:42 gnome vmunix: nfsd: page allocation failure. order:0, mode:0x20
> Jun 13 17:06:42 gnome vmunix: Call Trace:
> Jun 13 17:06:42 gnome vmunix:  <IRQ>  [<ffffffff8029559d>]  __alloc_pages_internal+0x3fd/0x500
> Jun 13 17:06:42 gnome vmunix:  [<ffffffff802bd4c3>]  cache_alloc_refill+0x313/0x5c0
> Jun 13 17:06:42 gnome vmunix:  [<ffffffff802bd873>] __kmalloc+0x103/0x110
> Jun 13 17:06:42 gnome vmunix:  [<ffffffff803dca6d>] __alloc_skb+0x6d/0x150
...

Justin, by the way, what's the underlying filesystem on the server?

I've seen this error on 2 machines already (both running 2.6.29.x x86-64),
and in both cases the filesystem on the server was xfs.  Might this be
related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
That one is different, but also about xfs and nfs.  I'm trying to
reproduce the problem on a different filesystem...

/mjt


* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17  8:43     ` Michael Tokarev
@ 2009-06-17  9:43       ` Justin Piszcz
  2009-06-17 10:39         ` Michael Tokarev
  0 siblings, 1 reply; 33+ messages in thread
From: Justin Piszcz @ 2009-06-17  9:43 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: linux-kernel



On Wed, 17 Jun 2009, Michael Tokarev wrote:

> Michael Tokarev wrote:
>> Justin Piszcz wrote:
> ...
>
> Justin, by the way, what's the underlying filesystem on the server?
>
> I've seen this error on 2 machines already (both running 2.6.29.x x86-64),
> and in both cases the filesystem on the server was xfs.  May this be
> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
> That one is different, but also about xfs and nfs.  I'm trying to
> reproduce the problem on different filesystem...
>
> /mjt
>

Hello, I am also running XFS on 2.6.29.x x86-64.

For me, the error happened when I was running an xfsdump on a client and
writing the dump stream over NFS to the XFS server/filesystem.  This is
typically when the error occurs, or during other heavy I/O.

Justin.


* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17  9:43       ` Justin Piszcz
@ 2009-06-17 10:39         ` Michael Tokarev
  2009-06-17 18:51           ` J. Bruce Fields
  0 siblings, 1 reply; 33+ messages in thread
From: Michael Tokarev @ 2009-06-17 10:39 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-kernel

Justin Piszcz wrote:
> 
> 
> On Wed, 17 Jun 2009, Michael Tokarev wrote:
> 
>> Michael Tokarev wrote:
>>> Justin Piszcz wrote:
>> ...
>>
>> Justin, by the way, what's the underlying filesystem on the server?
>>
>> I've seen this error on 2 machines already (both running 2.6.29.x 
>> x86-64),
>> and in both cases the filesystem on the server was xfs.  May this be
>> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
>> That one is different, but also about xfs and nfs.  I'm trying to
>> reproduce the problem on different filesystem...
> 
> Hello, I am also running XFS on 2.6.29.x x86-64.
> 
> For me, the error happened when I was running an XFSDUMP from a client 
> (and dumping) the stream over NFS to the XFS server/filesystem.  This is 
> typically when the error occurs or during heavy I/O.

A very similar load here -- not xfsdump, but tar and dump of ext3
filesystems.

And no, it's NOT xfs-related: I can trigger the same issue easily on
ext4 as well.  About 20 minutes of running 'dump' of another fs
to the nfs mount and voila, nfs server reports the same page allocation
failure.  Note that all file operations are still working, i.e. it
produces good (not corrupted) files on the server.

/mjt


* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 10:39         ` Michael Tokarev
@ 2009-06-17 18:51           ` J. Bruce Fields
  2009-06-17 20:24             ` Michael Tokarev
  0 siblings, 1 reply; 33+ messages in thread
From: J. Bruce Fields @ 2009-06-17 18:51 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Justin Piszcz, linux-kernel

On Wed, Jun 17, 2009 at 02:39:06PM +0400, Michael Tokarev wrote:
> Justin Piszcz wrote:
>>
>>
>> On Wed, 17 Jun 2009, Michael Tokarev wrote:
>>
>>> Michael Tokarev wrote:
>>>> Justin Piszcz wrote:
>>> ...
>>>
>>> Justin, by the way, what's the underlying filesystem on the server?
>>>
>>> I've seen this error on 2 machines already (both running 2.6.29.x  
>>> x86-64),
>>> and in both cases the filesystem on the server was xfs.  May this be
>>> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
>>> That one is different, but also about xfs and nfs.  I'm trying to
>>> reproduce the problem on different filesystem...
>>
>> Hello, I am also running XFS on 2.6.29.x x86-64.
>>
>> For me, the error happened when I was running an XFSDUMP from a client  
>> (and dumping) the stream over NFS to the XFS server/filesystem.  This 
>> is typically when the error occurs or during heavy I/O.
>
> Very similar load was here -- not xfsdump but tar and dump of an ext3
> filesystems.
>
> And no, it's NOT xfs-related: I can trigger the same issue easily on
> ext4 as well.  About 20 minutes of running 'dump' of another fs
> to the nfs mount and voila, nfs server reports the same page allocation
> failure.  Note that all file operations are still working, i.e. it
> produces good (not corrupted) files on the server.

There's a possibly related report for 2.6.30 here:

	http://bugzilla.kernel.org/show_bug.cgi?id=13518

--b.


* [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-16 16:06 ` 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem? Justin Piszcz
  2009-06-16 20:19   ` Michael Tokarev
@ 2009-06-17 19:44   ` David Rientjes
  2009-06-17 20:16     ` Eric Dumazet
  1 sibling, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-17 19:44 UTC (permalink / raw)
  To: David S. Miller; +Cc: Justin Piszcz, linux-kernel

On Tue, 16 Jun 2009, Justin Piszcz wrote:

> [6042655.794633] nfsd: page allocation failure. order:0, mode:0x20

That's a GFP_ATOMIC allocation.
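
(As a side note, a minimal sketch of how that mode value decodes, assuming
the 2.6.29-era gfp flag values; this is illustrative, not kernel source.)

#include <stdio.h>

/* Assumed 2.6.29-era flag values from include/linux/gfp.h */
#define __GFP_WAIT 0x10u        /* allocator may sleep / direct reclaim */
#define __GFP_HIGH 0x20u        /* may dip into emergency reserves */
#define GFP_ATOMIC (__GFP_HIGH)

int main(void)
{
	unsigned int mode = 0x20;  /* from "order:0, mode:0x20" above */

	/* __GFP_WAIT clear and __GFP_HIGH set is exactly GFP_ATOMIC */
	printf("GFP_ATOMIC:        %s\n", mode == GFP_ATOMIC ? "yes" : "no");
	printf("may sleep/reclaim: %s\n", (mode & __GFP_WAIT) ? "yes" : "no");
	return 0;
}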

> [6042655.794637] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
> [6042655.794638] Call Trace:
> [6042655.794640]  <IRQ>  [<ffffffff802850fd>] __alloc_pages_internal+0x3dd/0x4e0
> [6042655.794649]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
> [6042655.794652]  [<ffffffff802a7085>] kmem_cache_alloc+0x95/0xa0

Attempting to allocate a new slab with GFP_ATOMIC, so no reclaim is
possible.

> [6042655.794655]  [<ffffffff8059a969>] __alloc_skb+0x49/0x150
> [6042655.794658]  [<ffffffff805dee06>] tcp_send_ack+0x26/0x120

If alloc_skb() cannot allocate a new skbuff_head_cache buffer atomically, 
tcp_send_ack() easily recovers, so perhaps this should be annotated with 
__GFP_NOWARN (as in the following patch).

> [6042655.794660]  [<ffffffff805dcbd2>] tcp_rcv_established+0x7a2/0x920
> [6042655.794663]  [<ffffffff805e417d>] tcp_v4_do_rcv+0xdd/0x210
> [6042655.794665]  [<ffffffff805e4926>] tcp_v4_rcv+0x676/0x710
> [6042655.794668]  [<ffffffff805c6a5c>] ip_local_deliver_finish+0x8c/0x160
> [6042655.794670]  [<ffffffff805c6551>] ip_rcv_finish+0x191/0x330
> [6042655.794672]  [<ffffffff805c6936>] ip_rcv+0x246/0x2e0
> [6042655.794676]  [<ffffffff804d8e74>] e1000_clean_rx_irq+0x114/0x3a0
> [6042655.794678]  [<ffffffff804dad70>] e1000_clean+0x180/0x2d0
> [6042655.794681]  [<ffffffff8059f5a7>] net_rx_action+0x87/0x130
> [6042655.794683]  [<ffffffff80259cd3>] __do_softirq+0x93/0x160
> [6042655.794687]  [<ffffffff8022c9fc>] call_softirq+0x1c/0x30
> [6042655.794689]  [<ffffffff8022e455>] do_softirq+0x35/0x80
> [6042655.794691]  [<ffffffff8022e523>] do_IRQ+0x83/0x110
> [6042655.794693]  [<ffffffff8022c2d3>] ret_from_intr+0x0/0xa
> [6042655.794694]  <EOI>  [<ffffffff80632190>] _spin_lock+0x10/0x20
> [6042655.794700]  [<ffffffff802bb2fc>] d_find_alias+0x1c/0x40
> [6042655.794703]  [<ffffffff802bd96d>] d_obtain_alias+0x4d/0x140
> [6042655.794706]  [<ffffffff8033ffd3>] exportfs_decode_fh+0x63/0x2a0
> [6042655.794708]  [<ffffffff80343970>] nfsd_acceptable+0x0/0x110
> [6042655.794711]  [<ffffffff8061e74a>] cache_check+0x4a/0x4d0
> [6042655.794714]  [<ffffffff80349437>] exp_find_key+0x57/0xe0
> [6042655.794717]  [<ffffffff80592a35>] sock_recvmsg+0xd5/0x110
> [6042655.794719]  [<ffffffff80349552>] exp_find+0x92/0xa0
> [6042655.794721]  [<ffffffff80343e59>] fh_verify+0x369/0x680
> [6042655.794724]  [<ffffffff8024add9>] check_preempt_wakeup+0xf9/0x120
> [6042655.794726]  [<ffffffff803460be>] nfsd_open+0x2e/0x180
> [6042655.794728]  [<ffffffff80346574>] nfsd_write+0xc4/0x120
> [6042655.794730]  [<ffffffff8034dac0>] nfsd3_proc_write+0xb0/0x150
> [6042655.794732]  [<ffffffff8034040a>] nfsd_dispatch+0xba/0x270
> [6042655.794736]  [<ffffffff80615a1e>] svc_process+0x49e/0x800
> [6042655.794738]  [<ffffffff8024dc80>] default_wake_function+0x0/0x10
> [6042655.794740]  [<ffffffff80631fd7>] __down_read+0x17/0xae
> [6042655.794742]  [<ffffffff80340b79>] nfsd+0x199/0x2b0
> [6042655.794743]  [<ffffffff803409e0>] nfsd+0x0/0x2b0
> [6042655.794747]  [<ffffffff802691d7>] kthread+0x47/0x90
> [6042655.794749]  [<ffffffff8022c8fa>] child_rip+0xa/0x20
> [6042655.794751]  [<ffffffff80269190>] kthread+0x0/0x90
> [6042655.794753]  [<ffffffff8022c8f0>] child_rip+0x0/0x20
> [6042655.794754] Mem-Info:
...
> [6042655.794776] Active_anon:108072 active_file:103321 inactive_anon:31621
> [6042655.794777]  inactive_file:984722 unevictable:0 dirty:71104 writeback:0
> unstable:0
> [6042655.794778]  free:8659 slab:746182 mapped:8842 pagetables:5374 bounce:0
> [6042655.794780] DMA free:9736kB min:16kB low:20kB high:24kB active_anon:0kB
> inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB
> present:8744kB pages_scanned:0 all_unreclaimable? yes
> [6042655.794783] lowmem_reserve[]: 0 3246 7980 7980

ZONE_DMA is inaccessible because of lowmem_reserve, assuming you have 4K
pages: 9736K free < (16K min + (7980 pages * 4K/page)).

> [6042655.794787] DMA32 free:21420kB min:6656kB low:8320kB high:9984kB
> active_anon:52420kB inactive_anon:38552kB active_file:146252kB
> inactive_file:1651512kB unevictable:0kB present:3324312kB pages_scanned:0
> all_unreclaimable? no
> [6042655.794789] lowmem_reserve[]: 0 0 4734 4734

Likewise for ZONE_DMA32: 21420K free < (6656K min + (4734 pages * 
4K/page)).

> [6042655.794793] Normal free:3480kB min:9708kB low:12132kB high:14560kB
> active_anon:379868kB inactive_anon:87932kB active_file:267032kB
> inactive_file:2287376kB unevictable:0kB present:4848000kB pages_scanned:0
> all_unreclaimable? no
> [6042655.794795] lowmem_reserve[]: 0 0 0 0

And ZONE_NORMAL is oom: 3480K free < 9708K min.
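
(A minimal sketch of that arithmetic, assuming 4K pages; the numbers are
copied from the report above, and the helper is only illustrative, not the
kernel's actual zone_watermark_ok() logic.)

#include <stdio.h>

/* A zone is usable for this allocation only if its free memory exceeds
 * its min watermark plus the lowmem_reserve protecting higher zones. */
static int zone_usable(const char *name, long free_kb, long min_kb,
                       long reserve_pages)
{
	long needed_kb = min_kb + reserve_pages * 4;   /* 4K per page */
	int ok = free_kb >= needed_kb;

	printf("%-6s free %6ldkB, needed %6ldkB -> %s\n",
	       name, free_kb, needed_kb, ok ? "usable" : "blocked");
	return ok;
}

int main(void)
{
	zone_usable("DMA",    9736,   16, 7980);  /*  9736 < 31936: blocked  */
	zone_usable("DMA32", 21420, 6656, 4734);  /* 21420 < 25592: blocked  */
	zone_usable("Normal", 3480, 9708,    0);  /*  3480 <  9708: below min */
	return 0;
}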


ipv4: don't warn about skb ack allocation failures

tcp_send_ack() will recover from alloc_skb() allocation failures, so avoid 
emitting warnings.

Signed-off-by: David Rientjes <rientjes@google.com>
---
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2442,7 +2442,7 @@ void tcp_send_ack(struct sock *sk)
 	 * tcp_transmit_skb() will set the ownership to this
 	 * sock.
 	 */
-	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
+	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC | __GFP_NOWARN);
 	if (buff == NULL) {
 		inet_csk_schedule_ack(sk);
 		inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;


* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 19:44   ` [patch] ipv4: don't warn about skb ack allocation failures David Rientjes
@ 2009-06-17 20:16     ` Eric Dumazet
  2009-06-17 20:33       ` David Rientjes
  0 siblings, 1 reply; 33+ messages in thread
From: Eric Dumazet @ 2009-06-17 20:16 UTC (permalink / raw)
  To: David Rientjes; +Cc: David S. Miller, Justin Piszcz, linux-kernel

David Rientjes wrote:
> On Tue, 16 Jun 2009, Justin Piszcz wrote:
> 
>> [6042655.794633] nfsd: page allocation failure. order:0, mode:0x20
> 
> That's a GFP_ATOMIC allocation.
> 
>> [6042655.794637] Pid: 7093, comm: nfsd Not tainted 2.6.29.1 #4
>> [6042655.794638] Call Trace:
>> [6042655.794640]  <IRQ>  [<ffffffff802850fd>] __alloc_pages_internal+0x3dd/0x4e0
>> [6042655.794649]  [<ffffffff802a738b>] cache_alloc_refill+0x2fb/0x570
>> [6042655.794652]  [<ffffffff802a7085>] kmem_cache_alloc+0x95/0xa0
> 
> Attempting to allocate new slab with GFP_ATOMIC, so no reclaim is 
> possible.
> 
>> [6042655.794655]  [<ffffffff8059a969>] __alloc_skb+0x49/0x150
>> [6042655.794658]  [<ffffffff805dee06>] tcp_send_ack+0x26/0x120
> 
> If alloc_skb() cannot allocate a new skbuff_head_cache buffer atomically, 
> tcp_send_ack() easily recovers, so perhaps this should be annotated with 
> __GFP_NOWARN (as in the following patch).
> 
>> [6042655.794660]  [<ffffffff805dcbd2>] tcp_rcv_established+0x7a2/0x920
>> [6042655.794663]  [<ffffffff805e417d>] tcp_v4_do_rcv+0xdd/0x210
>> [6042655.794665]  [<ffffffff805e4926>] tcp_v4_rcv+0x676/0x710
>> [6042655.794668]  [<ffffffff805c6a5c>] ip_local_deliver_finish+0x8c/0x160
>> [6042655.794670]  [<ffffffff805c6551>] ip_rcv_finish+0x191/0x330
>> [6042655.794672]  [<ffffffff805c6936>] ip_rcv+0x246/0x2e0
>> [6042655.794676]  [<ffffffff804d8e74>] e1000_clean_rx_irq+0x114/0x3a0
>> [6042655.794678]  [<ffffffff804dad70>] e1000_clean+0x180/0x2d0
>> [6042655.794681]  [<ffffffff8059f5a7>] net_rx_action+0x87/0x130
>> [6042655.794683]  [<ffffffff80259cd3>] __do_softirq+0x93/0x160
>> [6042655.794687]  [<ffffffff8022c9fc>] call_softirq+0x1c/0x30
>> [6042655.794689]  [<ffffffff8022e455>] do_softirq+0x35/0x80
>> [6042655.794691]  [<ffffffff8022e523>] do_IRQ+0x83/0x110
>> [6042655.794693]  [<ffffffff8022c2d3>] ret_from_intr+0x0/0xa
>> [6042655.794694]  <EOI>  [<ffffffff80632190>] _spin_lock+0x10/0x20
>> [6042655.794700]  [<ffffffff802bb2fc>] d_find_alias+0x1c/0x40
>> [6042655.794703]  [<ffffffff802bd96d>] d_obtain_alias+0x4d/0x140
>> [6042655.794706]  [<ffffffff8033ffd3>] exportfs_decode_fh+0x63/0x2a0
>> [6042655.794708]  [<ffffffff80343970>] nfsd_acceptable+0x0/0x110
>> [6042655.794711]  [<ffffffff8061e74a>] cache_check+0x4a/0x4d0
>> [6042655.794714]  [<ffffffff80349437>] exp_find_key+0x57/0xe0
>> [6042655.794717]  [<ffffffff80592a35>] sock_recvmsg+0xd5/0x110
>> [6042655.794719]  [<ffffffff80349552>] exp_find+0x92/0xa0
>> [6042655.794721]  [<ffffffff80343e59>] fh_verify+0x369/0x680
>> [6042655.794724]  [<ffffffff8024add9>] check_preempt_wakeup+0xf9/0x120
>> [6042655.794726]  [<ffffffff803460be>] nfsd_open+0x2e/0x180
>> [6042655.794728]  [<ffffffff80346574>] nfsd_write+0xc4/0x120
>> [6042655.794730]  [<ffffffff8034dac0>] nfsd3_proc_write+0xb0/0x150
>> [6042655.794732]  [<ffffffff8034040a>] nfsd_dispatch+0xba/0x270
>> [6042655.794736]  [<ffffffff80615a1e>] svc_process+0x49e/0x800
>> [6042655.794738]  [<ffffffff8024dc80>] default_wake_function+0x0/0x10
>> [6042655.794740]  [<ffffffff80631fd7>] __down_read+0x17/0xae
>> [6042655.794742]  [<ffffffff80340b79>] nfsd+0x199/0x2b0
>> [6042655.794743]  [<ffffffff803409e0>] nfsd+0x0/0x2b0
>> [6042655.794747]  [<ffffffff802691d7>] kthread+0x47/0x90
>> [6042655.794749]  [<ffffffff8022c8fa>] child_rip+0xa/0x20
>> [6042655.794751]  [<ffffffff80269190>] kthread+0x0/0x90
>> [6042655.794753]  [<ffffffff8022c8f0>] child_rip+0x0/0x20
>> [6042655.794754] Mem-Info:
> ...
>> [6042655.794776] Active_anon:108072 active_file:103321 inactive_anon:31621
>> [6042655.794777]  inactive_file:984722 unevictable:0 dirty:71104 writeback:0
>> unstable:0
>> [6042655.794778]  free:8659 slab:746182 mapped:8842 pagetables:5374 bounce:0
>> [6042655.794780] DMA free:9736kB min:16kB low:20kB high:24kB active_anon:0kB
>> inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB
>> present:8744kB pages_scanned:0 all_unreclaimable? yes
>> [6042655.794783] lowmem_reserve[]: 0 3246 7980 7980
> 
> ZONE_DMA is inaccessible because of lowmem_reserve, assuming you have 4K 
> pages: 9736K free < (16K min + (7980 pages * 4K /page)).
> 
>> [6042655.794787] DMA32 free:21420kB min:6656kB low:8320kB high:9984kB
>> active_anon:52420kB inactive_anon:38552kB active_file:146252kB
>> inactive_file:1651512kB unevictable:0kB present:3324312kB pages_scanned:0
>> all_unreclaimable? no
>> [6042655.794789] lowmem_reserve[]: 0 0 4734 4734
> 
> Likewise for ZONE_DMA32: 21420K free < (6656K min + (4734 pages * 
> 4K/page)).
> 
>> [6042655.794793] Normal free:3480kB min:9708kB low:12132kB high:14560kB
>> active_anon:379868kB inactive_anon:87932kB active_file:267032kB
>> inactive_file:2287376kB unevictable:0kB present:4848000kB pages_scanned:0
>> all_unreclaimable? no
>> [6042655.794795] lowmem_reserve[]: 0 0 0 0
> 
> And ZONE_NORMAL is oom: 3480K free < 9708K min.
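
That arithmetic can be re-checked mechanically.  A minimal user-space
sketch, using only the numbers quoted above and assuming 4K pages (it
mirrors the simplified free < min + lowmem_reserve comparison described
here, not the kernel's real watermark code):

#include <stdio.h>

/* Rough re-check of the per-zone figures from the failure dump above. */
struct zone_info {
	const char *name;
	long free_kb;		/* "free:" value          */
	long min_kb;		/* "min:" watermark       */
	long reserve_pages;	/* lowmem_reserve[] entry */
};

int main(void)
{
	struct zone_info zones[] = {
		{ "DMA",     9736,   16, 7980 },
		{ "DMA32",  21420, 6656, 4734 },
		{ "Normal",  3480, 9708,    0 },
	};
	unsigned int i;

	for (i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
		long needed_kb = zones[i].min_kb + zones[i].reserve_pages * 4;

		printf("%-6s free %6ldkB, needs %6ldkB -> %s\n",
		       zones[i].name, zones[i].free_kb, needed_kb,
		       zones[i].free_kb < needed_kb ? "inaccessible" : "ok");
	}
	return 0;
}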
> 
> 
> ipv4: don't warn about skb ack allocation failures
> 
> tcp_send_ack() will recover from alloc_skb() allocation failures, so avoid 
> emitting warnings.
> 
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -2442,7 +2442,7 @@ void tcp_send_ack(struct sock *sk)
>  	 * tcp_transmit_skb() will set the ownership to this
>  	 * sock.
>  	 */
> -	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
> +	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC | __GFP_NOWARN);
>  	if (buff == NULL) {
>  		inet_csk_schedule_ack(sk);
>  		inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;

I count more than 800 GFP_ATOMIC allocations in net/ tree.

Most (if not all) of them can recover in case of failures.

Should we add __GFP_NOWARN to all of them ?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 18:51           ` J. Bruce Fields
@ 2009-06-17 20:24             ` Michael Tokarev
  2009-06-17 20:39               ` David Rientjes
                                 ` (2 more replies)
  0 siblings, 3 replies; 33+ messages in thread
From: Michael Tokarev @ 2009-06-17 20:24 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Justin Piszcz, linux-kernel

J. Bruce Fields wrote:
> On Wed, Jun 17, 2009 at 02:39:06PM +0400, Michael Tokarev wrote:
>> Justin Piszcz wrote:
>>>
>>> On Wed, 17 Jun 2009, Michael Tokarev wrote:
>>>
>>>> Michael Tokarev wrote:
>>>>> Justin Piszcz wrote:
>>>> ...
>>>>
>>>> Justin, by the way, what's the underlying filesystem on the server?
>>>>
>>>> I've seen this error on 2 machines already (both running 2.6.29.x  
>>>> x86-64),
>>>> and in both cases the filesystem on the server was xfs.  May this be
>>>> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
>>>> That one is different, but also about xfs and nfs.  I'm trying to
>>>> reproduce the problem on different filesystem...
>>> Hello, I am also running XFS on 2.6.29.x x86-64.
>>>
>>> For me, the error happened when I was running an XFSDUMP from a client  
>>> (and dumping) the stream over NFS to the XFS server/filesystem.  This 
>>> is typically when the error occurs or during heavy I/O.
>> Very similar load was here -- not xfsdump but tar and dump of an ext3
>> filesystems.
>>
>> And no, it's NOT xfs-related: I can trigger the same issue easily on

Note the NOT, in upper case ;)

>> ext4 as well.  About 20 minutes of running 'dump' of another fs
>> to the nfs mount and voila, nfs server reports the same page allocation
>> failure.  Note that all file operations are still working, i.e. it
>> produces good (not corrupted) files on the server.
> 
> There's a possibly related report for 2.6.30 here:
> 
> 	http://bugzilla.kernel.org/show_bug.cgi?id=13518

Does not look similar.

I repeated the issue here.  The slab which is growing here is buffer_head.
It's growing slowly -- right now, after ~5 minutes of constant writes over
nfs, its size is 428423 objects, growing at about 5000 objects/minute rate.
When stopping writing, the cache shrinks slowly back to an acceptable
size, probably when the data gets actually written to disk.

It looks like we need a bug entry for this :)

I'll re-try 2.6.30 hopefully tomorrow.

/mjt

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 20:16     ` Eric Dumazet
@ 2009-06-17 20:33       ` David Rientjes
  2009-06-17 20:52         ` Eric Dumazet
  0 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-17 20:33 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David S. Miller, Justin Piszcz, linux-kernel

On Wed, 17 Jun 2009, Eric Dumazet wrote:

> > ipv4: don't warn about skb ack allocation failures
> > 
> > tcp_send_ack() will recover from alloc_skb() allocation failures, so avoid 
> > emitting warnings.
> > 
> > Signed-off-by: David Rientjes <rientjes@google.com>
> > ---
> > diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> > --- a/net/ipv4/tcp_output.c
> > +++ b/net/ipv4/tcp_output.c
> > @@ -2442,7 +2442,7 @@ void tcp_send_ack(struct sock *sk)
> >  	 * tcp_transmit_skb() will set the ownership to this
> >  	 * sock.
> >  	 */
> > -	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
> > +	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC | __GFP_NOWARN);
> >  	if (buff == NULL) {
> >  		inet_csk_schedule_ack(sk);
> >  		inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;
> 
> I count more than 800 GFP_ATOMIC allocations in net/ tree.
> 
> Most (if not all) of them can recover in case of failures.
> 
> Should we add __GFP_NOWARN to all of them ?
> 

Yes, if they are recoverable without any side effects.  Otherwise, they 
will continue to emit page allocation failure messages which cause users 
to waste their time when they recognize a problem of an unknown 
seriousness level in both reporting the issue and looking for resulting 
corruption.  The __GFP_NOWARN annotation suppresses such warnings for 
those very reasons.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 20:24             ` Michael Tokarev
@ 2009-06-17 20:39               ` David Rientjes
  2009-06-18  8:54                 ` Michael Tokarev
  2009-06-17 22:45               ` J. Bruce Fields
  2009-06-18  0:14               ` Zdenek Kaspar
  2 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-17 20:39 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: J. Bruce Fields, Justin Piszcz, linux-kernel

On Thu, 18 Jun 2009, Michael Tokarev wrote:

> > 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
> 
> Does not look similar.
> 
> I repeated the issue here.  The slab which is growing here is buffer_head.
> It's growing slowly -- right now, after ~5 minutes of constant writes over
> nfs, its size is 428423 objects, growing at about 5000 objects/minute rate.
> When stopping writing, the cache shrinks slowly back to an acceptable
> size, probably when the data gets actually written to disk.
> 

Not sure if you're referring to the bugzilla entry or Justin's reported 
issue.  Justin's issue is actually allocating a skbuff_head_cache slab 
while the system is oom.

> It looks like we need a bug entry for this :)
> 
> I'll re-try 2.6.30 hopefully tomorrow.
> 

You should get the same page allocation failure warning with 2.6.30.  You 
may want to try my patch in http://lkml.org/lkml/2009/6/17/437 which 
suppresses the warnings since, as you previously mentioned, there are no 
side effects and the failure is easily recoverable.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 20:33       ` David Rientjes
@ 2009-06-17 20:52         ` Eric Dumazet
  2009-06-17 21:12           ` David Rientjes
  0 siblings, 1 reply; 33+ messages in thread
From: Eric Dumazet @ 2009-06-17 20:52 UTC (permalink / raw)
  To: David Rientjes; +Cc: Eric Dumazet, David S. Miller, Justin Piszcz, linux-kernel

David Rientjes wrote:
> On Wed, 17 Jun 2009, Eric Dumazet wrote:
> 
>>> ipv4: don't warn about skb ack allocation failures
>>>
>>> tcp_send_ack() will recover from alloc_skb() allocation failures, so avoid 
>>> emitting warnings.
>>>
>>> Signed-off-by: David Rientjes <rientjes@google.com>
>>> ---
>>> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
>>> --- a/net/ipv4/tcp_output.c
>>> +++ b/net/ipv4/tcp_output.c
>>> @@ -2442,7 +2442,7 @@ void tcp_send_ack(struct sock *sk)
>>>  	 * tcp_transmit_skb() will set the ownership to this
>>>  	 * sock.
>>>  	 */
>>> -	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
>>> +	buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC | __GFP_NOWARN);
>>>  	if (buff == NULL) {
>>>  		inet_csk_schedule_ack(sk);
>>>  		inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;
>> I count more than 800 GFP_ATOMIC allocations in net/ tree.
>>
>> Most (if not all) of them can recover in case of failures.
>>
>> Should we add __GFP_NOWARN to all of them ?
>>
> 
> Yes, if they are recoverable without any side effects.  Otherwise, they 
> will continue to emit page allocation failure messages which cause users 
> to waste their time when they recognize a problem of an unknown 
> seriousness level in both reporting the issue and looking for resulting 
> corruption.  The __GFP_NOWARN annotation suppresses such warnings for 
> those very reasons.

Then why emit the warning in the first place?

Once we patch all call sites to use GFP_ATOMIC | __GFP_NOWARN, I bet 99% 
of GFP_ATOMIC allocations in the kernel will use it, so we go back to silent mode.

If a GFP_ATOMIC call site *cannot* use __GFP_NOWARN, it will either :

- call panic()
- crash with a nice stack trace because caller was not aware NULL could be
returned by kmalloc()


Maybe GFP_ATOMIC should include __GFP_NOWARN

#define GFP_ATOMIC  (__GFP_HIGH)
->
#define GFP_ATOMIC  (__GFP_HIGH | __GFP_NOWARN)


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 20:52         ` Eric Dumazet
@ 2009-06-17 21:12           ` David Rientjes
  2009-06-17 22:30             ` Eric Dumazet
  0 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-17 21:12 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David S. Miller, Justin Piszcz, linux-kernel

On Wed, 17 Jun 2009, Eric Dumazet wrote:

> > Yes, if they are recoverable without any side effects.  Otherwise, they 
> > will continue to emit page allocation failure messages which cause users 
> > to waste their time when they recognize a problem of an unknown 
> > seriousness level in both reporting the issue and looking for resulting 
> > corruption.  The __GFP_NOWARN annotation suppresses such warnings for 
> > those very reasons.
> 
> Then why emit the warning in the first place?
> 
> Once we patch all call sites to use GFP_ATOMIC | __GFP_NOWARN, I bet 99% 
> of GFP_ATOMIC allocations in the kernel will use it, so we go back to silent mode.
> 
> If a GFP_ATOMIC call site *cannot* use __GFP_NOWARN, it will either :
> 
> - call panic()
> - crash with a nice stack trace because caller was not aware NULL could be
> returned by kmalloc()
> 
> 
> Maybe GFP_ATOMIC should include __GFP_NOWARN
> 
> #define GFP_ATOMIC  (__GFP_HIGH)
> ->
> #define GFP_ATOMIC  (__GFP_HIGH | __GFP_NOWARN)
> 

You must now mask off __GFP_NOWARN in the gfp flags for the allocation if 
you have a GFP_ATOMIC allocation that wants the page allocation failure 
warning messages.  That message includes pertinent information with regard 
to the state of the VM that is otherwise unavailable by a BUG_ON() or NULL 
pointer dereference.

For example, I could only diagnose Justin's failure as a harmless page 
allocator warning because I could identify its caller, the gfp mask of the 
allocation attempt, and the memory available.  It would not have otherwise 
been possible to find that the system was actually oom.

The general principle is that it is up to the caller to know whether an 
allocation failure is recoverable or not and not up to any VM 
implementation.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 21:12           ` David Rientjes
@ 2009-06-17 22:30             ` Eric Dumazet
  2009-06-17 23:08               ` David Miller
  0 siblings, 1 reply; 33+ messages in thread
From: Eric Dumazet @ 2009-06-17 22:30 UTC (permalink / raw)
  To: David Rientjes; +Cc: David S. Miller, Justin Piszcz, linux-kernel

David Rientjes wrote:
> On Wed, 17 Jun 2009, Eric Dumazet wrote:
> 
>>> Yes, if they are recoverable without any side effects.  Otherwise, they 
>>> will continue to emit page allocation failure messages which cause users 
>>> to waste their time when they recognize a problem of an unknown 
>>> seriousness level in both reporting the issue and looking for resulting 
>>> corruption.  The __GFP_NOWARN annotation suppresses such warnings for 
>>> those very reasons.
>> Then why emit the warning in the first place?
>>
>> Once we patch all call sites to use GFP_ATOMIC | __GFP_NOWARN, I bet 99% 
>> of GFP_ATOMIC allocations in the kernel will use it, so we go back to silent mode.
>>
>> If a GFP_ATOMIC call site *cannot* use __GFP_NOWARN, it will either :
>>
>> - call panic()
>> - crash with a nice stack trace because caller was not aware NULL could be
>> returned by kmalloc()
>>
>>
>> Maybe GFP_ATOMIC should include __GFP_NOWARN
>>
>> #define GFP_ATOMIC  (__GFP_HIGH)
>> ->
>> #define GFP_ATOMIC  (__GFP_HIGH | __GFP_NOWARN)
>>
> 
> You must now mask off __GFP_NOWARN in the gfp flags for the allocation if 
> you have a GFP_ATOMIC allocation that wants the page allocation failure 
> warning messages.  That message includes pertinent information with regard 
> to the state of the VM that is otherwise unavailable by a BUG_ON() or NULL 
> pointer dereference.
> 
> For example, I could only diagnose Justin's failure as a harmless page 
> allocator warning because I could identify its caller, the gfp mask of the 
> allocation attempt, and the memory available.  It would not have otherwise 
> been possible to find that the system was actually oom.
> 
> The general principle is that it is up to the caller to know whether an 
> allocation failure is recoverable or not and not up to any VM 
> implementation.

My point is that 99% of callers know allocation failures are
recoverable.

Instead of patching 10000 places in the kernel, just patch 10 places where
allocation failures are not recoverable and call BUG() or whatever
lovely debugging aid (using __GFP_NOFAIL for example, I don't know)

GFP_NOWARN should be the default, and GFP_WARN_AND_FULL_EXPLANATION the exception.

In the past, only high order page allocations could trigger some trace,
I wonder why current kernels want to warn that an allocation failed,
since kmalloc(sz, GFP_ATOMIC) is allowed to return NULL and always was.
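
To make the two kinds of call sites concrete, here is a small
illustrative sketch (kernel-style code; the function names and the
surrounding context are invented, not taken from the tree):

#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/slab.h>

/*
 * (1) The common, recoverable pattern: on failure the packet is simply
 *     dropped or the work deferred, so a printed warning buys nothing.
 */
static int rx_queue_sample(struct sk_buff **slot, unsigned int len)
{
	struct sk_buff *skb = alloc_skb(len, GFP_ATOMIC | __GFP_NOWARN);

	if (!skb)
		return -ENOMEM;		/* drop; the peer retransmits */
	*slot = skb;
	return 0;
}

/*
 * (2) The rare, non-recoverable pattern: the caller cannot make progress
 *     without the buffer, so it makes the failure loud itself.
 */
static void *must_have_buffer(size_t len)
{
	void *buf = kmalloc(len, GFP_ATOMIC);

	if (!buf)
		panic("unrecoverable atomic allocation failure");
	return buf;
}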


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 20:24             ` Michael Tokarev
  2009-06-17 20:39               ` David Rientjes
@ 2009-06-17 22:45               ` J. Bruce Fields
  2009-06-18  0:14               ` Zdenek Kaspar
  2 siblings, 0 replies; 33+ messages in thread
From: J. Bruce Fields @ 2009-06-17 22:45 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Justin Piszcz, linux-kernel

On Thu, Jun 18, 2009 at 12:24:57AM +0400, Michael Tokarev wrote:
> J. Bruce Fields wrote:
>> On Wed, Jun 17, 2009 at 02:39:06PM +0400, Michael Tokarev wrote:
>>> Justin Piszcz wrote:
>>>>
>>>> On Wed, 17 Jun 2009, Michael Tokarev wrote:
>>>>
>>>>> Michael Tokarev wrote:
>>>>>> Justin Piszcz wrote:
>>>>> ...
>>>>>
>>>>> Justin, by the way, what's the underlying filesystem on the server?
>>>>>
>>>>> I've seen this error on 2 machines already (both running 2.6.29.x 
>>>>>  x86-64),
>>>>> and in both cases the filesystem on the server was xfs.  May this be
>>>>> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
>>>>> That one is different, but also about xfs and nfs.  I'm trying to
>>>>> reproduce the problem on different filesystem...
>>>> Hello, I am also running XFS on 2.6.29.x x86-64.
>>>>
>>>> For me, the error happened when I was running an XFSDUMP from a 
>>>> client  (and dumping) the stream over NFS to the XFS 
>>>> server/filesystem.  This is typically when the error occurs or 
>>>> during heavy I/O.
>>> Very similar load was here -- not xfsdump but tar and dump of an ext3
>>> filesystems.
>>>
>>> And no, it's NOT xfs-related: I can trigger the same issue easily on
>
> Note the NOT, in upper case ;)
>
>>> ext4 as well.  About 20 minutes of running 'dump' of another fs
>>> to the nfs mount and voila, nfs server reports the same page allocation
>>> failure.  Note that all file operations are still working, i.e. it
>>> produces good (not corrupted) files on the server.
>>
>> There's a possibly related report for 2.6.30 here:
>>
>> 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
>
> Does not look similar.
>
> I repeated the issue here.  The slab which is growing here is buffer_head.
> It's growing slowly -- right now, after ~5 minutes of constant writes over
> nfs, its size is 428423 objects, growing at about 5000 objects/minute rate.
> When stopping writing, the cache shrinks slowly back to an acceptable
> size, probably when the data gets actually written to disk.

OK, so if it eventually shrinks back to normal then it's not really a
leak--perhaps there's some bad interaction between nfsd and the vm.

Could you explain in more detail what the symptoms are (other than just
a message in the logs).

--b.

>
> It looks like we need a bug entry for this :)
>
> I'll re-try 2.6.30 hopefully tomorrow.
>
> /mjt

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 22:30             ` Eric Dumazet
@ 2009-06-17 23:08               ` David Miller
  2009-06-18 16:56                 ` David Rientjes
  0 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2009-06-17 23:08 UTC (permalink / raw)
  To: eric.dumazet; +Cc: rientjes, jpiszcz, linux-kernel

From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Thu, 18 Jun 2009 00:30:58 +0200

> My point is that 99% of callers know allocation failures are
> recoverable.

I agree that, surely for GFP_ATOMIC, warnings should be off
by default.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 20:24             ` Michael Tokarev
  2009-06-17 20:39               ` David Rientjes
  2009-06-17 22:45               ` J. Bruce Fields
@ 2009-06-18  0:14               ` Zdenek Kaspar
  2 siblings, 0 replies; 33+ messages in thread
From: Zdenek Kaspar @ 2009-06-18  0:14 UTC (permalink / raw)
  To: linux-kernel; +Cc: J. Bruce Fields, Justin Piszcz

Michael Tokarev wrote:
> J. Bruce Fields wrote:
>> On Wed, Jun 17, 2009 at 02:39:06PM +0400, Michael Tokarev wrote:
>>> Justin Piszcz wrote:
>>>>
>>>> On Wed, 17 Jun 2009, Michael Tokarev wrote:
>>>>
>>>>> Michael Tokarev wrote:
>>>>>> Justin Piszcz wrote:
>>>>> ...
>>>>>
>>>>> Justin, by the way, what's the underlying filesystem on the server?
>>>>>
>>>>> I've seen this error on 2 machines already (both running 2.6.29.x 
>>>>> x86-64),
>>>>> and in both cases the filesystem on the server was xfs.  May this be
>>>>> related somehow to http://bugzilla.kernel.org/show_bug.cgi?id=13375 ?
>>>>> That one is different, but also about xfs and nfs.  I'm trying to
>>>>> reproduce the problem on different filesystem...
>>>> Hello, I am also running XFS on 2.6.29.x x86-64.
>>>>
>>>> For me, the error happened when I was running an XFSDUMP from a
>>>> client  (and dumping) the stream over NFS to the XFS
>>>> server/filesystem.  This is typically when the error occurs or
>>>> during heavy I/O.
>>> Very similar load was here -- not xfsdump but tar and dump of an ext3
>>> filesystems.
>>>
>>> And no, it's NOT xfs-related: I can trigger the same issue easily on
> 
> Note the NOT, in upper case ;)
> 
>>> ext4 as well.  About 20 minutes of running 'dump' of another fs
>>> to the nfs mount and voila, nfs server reports the same page allocation
>>> failure.  Note that all file operations are still working, i.e. it
>>> produces good (not corrupted) files on the server.
>>
>> There's a possibly related report for 2.6.30 here:
>>
>>     http://bugzilla.kernel.org/show_bug.cgi?id=13518
> 
> Does not look similar.
> 
> I repeated the issue here.  The slab which is growing here is buffer_head.
> It's growing slowly -- right now, after ~5 minutes of constant writes over
> nfs, its size is 428423 objects, growing at about 5000 objects/minute rate.
> When stopping writing, the cache shrinks slowly back to an acceptable
> size, probably when the data gets actually written to disk.
> 
> It looks like we need a bug entry for this :)
> 
> I'll re-try 2.6.30 hopefully tomorrow.
> 
> /mjt

Can you try whether increasing vm.min_free_kbytes helps you? I
"temp-fixed" heavy I/O problems with vm.min_free_kbytes=32768 on a machine
with 4G of memory.

Z.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-17 20:39               ` David Rientjes
@ 2009-06-18  8:54                 ` Michael Tokarev
  2009-06-18 17:07                   ` David Rientjes
  0 siblings, 1 reply; 33+ messages in thread
From: Michael Tokarev @ 2009-06-18  8:54 UTC (permalink / raw)
  To: David Rientjes; +Cc: J. Bruce Fields, Justin Piszcz, linux-kernel

David Rientjes wrote:
> On Thu, 18 Jun 2009, Michael Tokarev wrote:
> 
>>> 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
>> Does not look similar.
>>
>> I repeated the issue here.  The slab which is growing here is buffer_head.
>> It's growing slowly -- right now, after ~5 minutes of constant writes over
>> nfs, its size is 428423 objects, growing at about 5000 objects/minute rate.
>> When stopping writing, the cache shrinks slowly back to an acceptable
>> size, probably when the data gets actually written to disk.
> 
> Not sure if you're referring to the bugzilla entry or Justin's reported 
> issue.  Justin's issue is actually allocating a skbuff_head_cache slab 
> while the system is oom.

We have the same issue - I replied to Justin's initial email with exactly
the same trace as him.  I didn't see your reply up until today, -- the one
you're referring to below.

As far as I can see, the warning itself, while harmless, indicates some
deeper problem.  Namely, we shouldn't have an OOM condition - the system
is doing nothing but NFS, there's only one NFS client which writes single
large file, the system has 2GB (or 4Gb on another machine) RAM.  It should
not OOM to start with.

>> It looks like we need a bug entry for this :)
>>
>> I'll re-try 2.6.30 hopefully tomorrow.
> 
> You should get the same page allocation failure warning with 2.6.30.  You 
> may want to try my patch in http://lkml.org/lkml/2009/6/17/437 which 
> suppresses the warnings since, as you previously mentioned, there are no 
> side effects and the failure is easily recoverable.

Well, there ARE side-effects actually.  When the issue happens, the I/O
over NFS slows down to almost zero bytes/sec for some while, and resumes
slowly after about half a minute - sometimes faster, sometimes slower.
Again, the warning itself is harmless, but it shows a deeper issue.  I
don't think it's wise to ignore the symptom -- the actual cause should
be fixed instead.  I think.

/mjt

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-17 23:08               ` David Miller
@ 2009-06-18 16:56                 ` David Rientjes
  2009-06-18 19:00                   ` David Miller
  0 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-18 16:56 UTC (permalink / raw)
  To: David Miller; +Cc: eric.dumazet, jpiszcz, linux-kernel

On Wed, 17 Jun 2009, David Miller wrote:

> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Thu, 18 Jun 2009 00:30:58 +0200
> 
> > My point is that 99% of callers know allocation failures are
> > recoverable.
> 
> I agree that, surely for GFP_ATOMIC, warnings should be off
> by default.
> 

I disagree, page allocation failure messages show vital information about 
the state of the VM so that we can find bugs and GFP_ATOMIC allocations 
are the most common trigger for these diagnostic messages since 
__GFP_WAIT allocations can trigger direct reclaim (and __GFP_FS 
allocations can trigger the oom killer) to free memory and will retry the 
allocation if ~__GFP_NORETRY.

GFP_ATOMIC allocations are allowed to access memory deeper in zone 
watermarks so that they are more likely to succeed; page allocation 
failure messages indicate that the system is either completely oom or that 
there is a serious VM bug.

Defining GFP_ATOMIC as (__GFP_HIGH | __GFP_NOWARN) would also require the 
bit to be masked off if page allocation failure messages should be 
emitted.  GFP_ATOMIC | GFP_DMA allocations become
(GFP_ATOMIC | GFP_DMA) & ~__GFP_NOWARN, for example.  This is not normally 
how GFP_* macros are composed from __GFP_* bits because of the increasing 
complexity.

It's again my opinion that allocators that can recover ("recover" used 
here in the sense of simply delaying an ack such as the ipv4 case 
discussed earlier, for example) without side effects (like a failed 
syscall) should specify __GFP_NOWARN.

Page allocation failures have emitted warnings for all gfp masks without 
__GFP_NOWARN since 2.5.53 over six and a half years ago.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-18  8:54                 ` Michael Tokarev
@ 2009-06-18 17:07                   ` David Rientjes
  2009-06-18 17:56                     ` Michael Tokarev
  0 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-18 17:07 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: J. Bruce Fields, Justin Piszcz, linux-kernel

On Thu, 18 Jun 2009, Michael Tokarev wrote:

> David Rientjes wrote:
> > On Thu, 18 Jun 2009, Michael Tokarev wrote:
> > 
> > > > 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
> > > Does not look similar.
> > > 
> > > I repeated the issue here.  The slab which is growing here is buffer_head.
> > > It's growing slowly -- right now, after ~5 minutes of constant writes over
> > > nfs, its size is 428423 objects, growing at about 5000 objects/minute
> > > rate.
> > > When stopping writing, the cache shrinks slowly back to an acceptable
> > > size, probably when the data gets actually written to disk.
> > 
> > Not sure if you're referring to the bugzilla entry or Justin's reported
> > issue.  Justin's issue is actually allocating a skbuff_head_cache slab while
> > the system is oom.
> 
> We have the same issue - I replied to Justin's initial email with exactly
> the same trace as him.  I didn't see your reply up until today, -- the one
> you're referring to below.
> 

If it's the exact same trace, then the page allocation failure is 
occurring as the result of slab's growth of the skbuff_head_cache cache, 
not buffer_head.

So it appears as though the issue you're raising is that buffer_head is 
consuming far too much memory, which causes the system to be oom when 
attempting a GFP_ATOMIC allocation for skbuff_head_cache and is otherwise 
unseen with alloc_buffer_head() because it is allowed to invoke direct 
reclaim:

	$ grep -r alloc_buffer_head\( fs/*
	fs/buffer.c:		bh = alloc_buffer_head(GFP_NOFS);
	fs/buffer.c:struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
	fs/gfs2/log.c:	bh = alloc_buffer_head(GFP_NOFS | __GFP_NOFAIL);
	fs/jbd/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
	fs/jbd2/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
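
The difference comes down to the __GFP_WAIT bit.  Roughly, using the
flag definitions of this era (simplified from include/linux/gfp.h and
shown here only for illustration):

#define __GFP_WAIT	0x10u	/* caller may sleep => direct reclaim allowed */
#define __GFP_HIGH	0x20u	/* may dip further below the zone watermarks  */
#define __GFP_IO	0x40u	/* may start low-level (block) I/O            */

#define GFP_ATOMIC	(__GFP_HIGH)		/* no __GFP_WAIT: cannot reclaim */
#define GFP_NOFS	(__GFP_WAIT | __GFP_IO)	/* may block and reclaim, but must
						   not re-enter filesystem code  */

So the buffer_head allocations above can block and free memory
themselves, while the skbuff_head_cache growth in softirq context
cannot, and simply fails once the watermarks are breached.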

> As far as I can see, the warning itself, while harmless, indicates some
> deeper problem.  Namely, we shouldn't have an OOM condition - the system
> is doing nothing but NFS, there's only one NFS client which writes single
> large file, the system has 2GB (or 4Gb on another machine) RAM.  It should
> not OOM to start with.
> 

Thanks to the page allocation failure that Justin posted earlier, which 
shows the state of the available system memory, it shows that the machine 
truly is oom.  You seem to have isolated that to an enormous amount of 
buffer_head slab, which is a good start.

> Well, there ARE side-effects actually.  When the issue happens, the I/O
> over NFS slows down to almost zero bytes/sec for some while, and resumes
> slowly after about half a minute - sometimes faster, sometimes slower.
> Again, the warning itself is harmless, but it shows a deeper issue.  I
> don't think it's wise to ignore the symptom -- the actual cause should
> be fixed instead.  I think.
> 

Since the GFP_ATOMIC allocation cannot trigger reclaim itself, it must 
rely on other allocations or background writeout to free the memory and 
this will be considerably slower than a blocking allocation.  The page 
allocation failure messages from Justin's post indicate there are 0 pages 
under writeback at the time of oom yet ZONE_NORMAL has reclaimable memory; 
this is the result of the nonblocking allocation.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-18 17:07                   ` David Rientjes
@ 2009-06-18 17:56                     ` Michael Tokarev
  2009-06-18 18:12                       ` J. Bruce Fields
  2009-06-18 18:15                       ` David Rientjes
  0 siblings, 2 replies; 33+ messages in thread
From: Michael Tokarev @ 2009-06-18 17:56 UTC (permalink / raw)
  To: David Rientjes; +Cc: J. Bruce Fields, Justin Piszcz, linux-kernel

David Rientjes wrote:
> On Thu, 18 Jun 2009, Michael Tokarev wrote:
> 
>> David Rientjes wrote:
>>> On Thu, 18 Jun 2009, Michael Tokarev wrote:
>>>
>>>>> 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
>>>> Does not look similar.
>>>>
>>>> I repeated the issue here.  The slab which is growing here is buffer_head.
>>>> It's growing slowly -- right now, after ~5 minutes of constant writes over
>>>> nfs, its size is 428423 objects, growing at about 5000 objects/minute
>>>> rate.
>>>> When stopping writing, the cache shrinks slowly back to an acceptable
>>>> size, probably when the data gets actually written to disk.
>>> Not sure if you're referring to the bugzilla entry or Justin's reported
>>> issue.  Justin's issue is actually allocating a skbuff_head_cache slab while
>>> the system is oom.
>> We have the same issue - I replied to Justin's initial email with exactly
>> the same trace as him.  I didn't see your reply up until today, -- the one
>> you're referring to below.
>>
> 
> If it's the exact same trace, then the page allocation failure is 
> occurring as the result of slab's growth of the skbuff_head_cache cache, 
> not buffer_head.

See http://lkml.org/lkml/2009/6/16/550 -- second message in this thread
is mine, it shows exactly the same trace.

> So it appears as though the issue you're raising is that buffer_head is 
> consuming far too much memory, which causes the system to be oom when 
> attempting a GFP_ATOMIC allocation for skbuff_head_cache and is otherwise 
> unseen with alloc_buffer_head() because it is allowed to invoke direct 
> reclaim:
> 
> 	$ grep -r alloc_buffer_head\( fs/*
> 	fs/buffer.c:		bh = alloc_buffer_head(GFP_NOFS);
> 	fs/buffer.c:struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
> 	fs/gfs2/log.c:	bh = alloc_buffer_head(GFP_NOFS | __GFP_NOFAIL);
> 	fs/jbd/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
> 	fs/jbd2/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);

Might be.

Here, I see the following scenario.  With freshly booted server, 1.9Gb RAM,
slabtop shows about 11K entries in buffer_head slab, and about 1.7Gb free RAM.

When starting writing from another machine to this one over nfs, buffer_head
slab grows quite rapidly up to about 450K entries (total size 48940K) and
free memory drops to almost zero -- this happens in first 1..2 minutes
(GigE network, writing from /dev/zero using dd).

The cache does not grow further -- just because there's no free memory for
growing.  On a 4Gb machine it grows up to about 920K objects.

And from time to time during write the same warning occurs.  And slows
down write from ~70Mb/sec (it is almost the actual speed of the target
drive - it can do ~80Mb/sec) to almost zero for several seconds.

>> As far as I can see, the warning itself, while harmless, indicates some
>> deeper problem.  Namely, we shouldn't have an OOM condition - the system
>> is doing nothing but NFS, there's only one NFS client which writes single
>> large file, the system has 2GB (or 4Gb on another machine) RAM.  It should
>> not OOM to start with.
> 
> Thanks to the page allocation failure that Justin posted earlier, which 
> shows the state of the available system memory, it shows that the machine 
> truly is oom.  You seem to have isolated that to an enormous amount of 
> buffer_head slab, which is a good start.

It's not really slabs it seems.  In my case the total amount of buffer_heads
is about 49Mb which is very small compared with the amount of memory on the
system.  But as far as I can *guess* buffer_head is just that - head, a
pointer to some other place...  Unwritten or cached data?

Note that the only way to shrink that buffer_head cache back is to remove
the file in question on the server.

>> Well, there ARE side-effects actually.  When the issue happens, the I/O
>> over NFS slows down to almost zero bytes/sec for some while, and resumes
>> slowly after about half a minute - sometimes faster, sometimes slower.
>> Again, the warning itself is harmless, but it shows a deeper issue.  I
>> don't think it's wise to ignore the symptom -- the actual cause should
>> be fixed instead.  I think.
> 
> Since the GFP_ATOMIC allocation cannot trigger reclaim itself, it must 
> rely on other allocations or background writeout to free the memory and 
> this will be considerably slower than a blocking allocation.  The page 
> allocation failure messages from Justin's post indicate there are 0 pages 
> under writeback at the time of oom yet ZONE_NORMAL has reclaimable memory; 
> this is the result of the nonblocking allocation.

So... what's the "consensus" so far?  Just shut up the warning as you
initially proposed?

At least I don't see any immediate alternative.  Well, but I don't know
kernel internals either :)

Thanks!

/mjt

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-18 17:56                     ` Michael Tokarev
@ 2009-06-18 18:12                       ` J. Bruce Fields
  2009-06-18 18:15                       ` David Rientjes
  1 sibling, 0 replies; 33+ messages in thread
From: J. Bruce Fields @ 2009-06-18 18:12 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: David Rientjes, Justin Piszcz, linux-kernel

On Thu, Jun 18, 2009 at 09:56:46PM +0400, Michael Tokarev wrote:
> David Rientjes wrote:
>> On Thu, 18 Jun 2009, Michael Tokarev wrote:
>>
>>> David Rientjes wrote:
>>>> On Thu, 18 Jun 2009, Michael Tokarev wrote:
>>>>
>>>>>> 	http://bugzilla.kernel.org/show_bug.cgi?id=13518
>>>>> Does not look similar.
>>>>>
>>>>> I repeated the issue here.  The slab which is growing here is buffer_head.
>>>>> It's growing slowly -- right now, after ~5 minutes of constant writes over
>>>>> nfs, its size is 428423 objects, growing at about 5000 objects/minute
>>>>> rate.
>>>>> When stopping writing, the cache shrinks slowly back to an acceptable
>>>>> size, probably when the data gets actually written to disk.
>>>> Not sure if you're referring to the bugzilla entry or Justin's reported
>>>> issue.  Justin's issue is actually allocating a skbuff_head_cache slab while
>>>> the system is oom.
>>> We have the same issue - I replied to Justin's initial email with exactly
>>> the same trace as him.  I didn't see your reply up until today, -- the one
>>> you're referring to below.
>>>
>>
>> If it's the exact same trace, then the page allocation failure is  
>> occurring as the result of slab's growth of the skbuff_head_cache 
>> cache, not buffer_head.
>
> See http://lkml.org/lkml/2009/6/16/550 -- second message in this thread
> is mine, it shows exactly the same trace.
>
>> So it appears as though the issue you're raising is that buffer_head is 
>> consuming far too much memory, which causes the system to be oom when  
>> attempting a GFP_ATOMIC allocation for skbuff_head_cache and is 
>> otherwise unseen with alloc_buffer_head() because it is allowed to 
>> invoke direct reclaim:
>>
>> 	$ grep -r alloc_buffer_head\( fs/*
>> 	fs/buffer.c:		bh = alloc_buffer_head(GFP_NOFS);
>> 	fs/buffer.c:struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
>> 	fs/gfs2/log.c:	bh = alloc_buffer_head(GFP_NOFS | __GFP_NOFAIL);
>> 	fs/jbd/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
>> 	fs/jbd2/journal.c:	new_bh = alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
>
> Might be.
>
> Here, I see the following scenario.  With freshly booted server, 1.9Gb RAM,
> slabtop shows about 11K entries in buffer_head slab, and about 1.7Gb free RAM.
>
> When starting writing from another machine to this one over nfs, buffer_head
> slab grows quite rapidly up to about 450K entries (total size 48940K) and
> free memory drops to almost zero -- this happens in first 1..2 minutes
> (GigE network, writing from /dev/zero using dd).
>
> The cache does not grow further -- just because there's no free memory for
> growing.  On a 4Gb machine it grows up to about 920K objects.
>
> And from time to time during write the same warning occurs.  And slows
> down write from ~70Mb/sec (it is almost the actual speed of the target
> drive - it can do ~80Mb/sec) to almost zero for several seconds.
>
>>> As far as I can see, the warning itself, while harmless, indicates some
>>> deeper problem.  Namely, we shouldn't have an OOM condition - the system
>>> is doing nothing but NFS, there's only one NFS client which writes single
>>> large file, the system has 2GB (or 4Gb on another machine) RAM.  It should
>>> not OOM to start with.
>>
>> Thanks to the page allocation failure that Justin posted earlier, which 
>> shows the state of the available system memory, it shows that the 
>> machine truly is oom.  You seem to have isolated that to an enormous 
>> amount of buffer_head slab, which is a good start.
>
> It's not really slabs it seems.  In my case the total amount of buffer_heads
> is about 49Mb which is very small compared with the amount of memory on the
> system.  But as far as I can *guess* buffer_head is just that - head, a
> pointer to some other place...  Unwritten or cached data?
>
> Note that the only way to shrink that buffer_head cache back is to remove
> the file in question on the server.
>
>>> Well, there ARE side-effects actually.  When the issue happens, the I/O
>>> over NFS slows down to almost zero bytes/sec for some while, and resumes
>>> slowly after about half a minute - sometimes faster, sometimes slower.
>>> Again, the warning itself is harmless, but it shows a deeper issue.  I
>>> don't think it's wise to ignore the symptom -- the actual cause should
>>> be fixed instead.  I think.
>>
>> Since the GFP_ATOMIC allocation cannot trigger reclaim itself, it must  
>> rely on other allocations or background writeout to free the memory and 
>> this will be considerably slower than a blocking allocation.  The page  
>> allocation failure messages from Justin's post indicate there are 0 
>> pages under writeback at the time of oom yet ZONE_NORMAL has 
>> reclaimable memory; this is the result of the nonblocking allocation.
>
> So... what's the "consensus" so far?  Just shut up the warning as you
> initially proposed?

No, it's normal for clients to want to write data as fast as they can,
and we should throttle them so that we offer the disk bandwidth
consistently instead of accepting too much and then stalling.

Unfortunately I'm not very good at thinking about this kind of io/vm
behavior!  I guess what should be happening is the nfsd thread's writes
should start blocking earlier than they are.  I'm not sure where that's
decided.

There's always the dumb approach of going back in time to see
if there's an older kernel that handled this better, then bisecting to
figure out what changed.  Testing on a server with much less RAM might
help reproduce the problem faster.

(E.g. from a very quick test yesterday it looked to me like I could
reproduce something like this fairly quickly with a small virtual server
on my laptop.)

--b.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem?
  2009-06-18 17:56                     ` Michael Tokarev
  2009-06-18 18:12                       ` J. Bruce Fields
@ 2009-06-18 18:15                       ` David Rientjes
  1 sibling, 0 replies; 33+ messages in thread
From: David Rientjes @ 2009-06-18 18:15 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: J. Bruce Fields, Justin Piszcz, linux-kernel

On Thu, 18 Jun 2009, Michael Tokarev wrote:

> > If it's the exact same trace, then the page allocation failure is occurring
> > as the result of slab's growth of the skbuff_head_cache cache, not
> > buffer_head.
> 
> See http://lkml.org/lkml/2009/6/16/550 -- second message in this thread
> is mine, it shows exactly the same trace.
> 

This is skbuff_head_cache, although it's not exactly the same trace: 
Justin is using e1000, you're using RealTek.  The end result is indeed the 
same, however.

> > So it appears as though the issue you're raising is that buffer_head is
> > consuming far too much memory, which causes the system to be oom when
> > attempting a GFP_ATOMIC allocation for skbuff_head_cache and is otherwise
> > unseen with alloc_buffer_head() because it is allowed to invoke direct
> > reclaim:
> > 
> > 	$ grep -r alloc_buffer_head\( fs/*
> > 	fs/buffer.c:		bh = alloc_buffer_head(GFP_NOFS);
> > 	fs/buffer.c:struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
> > 	fs/gfs2/log.c:	bh = alloc_buffer_head(GFP_NOFS | __GFP_NOFAIL);
> > 	fs/jbd/journal.c:	new_bh =
> > alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
> > 	fs/jbd2/journal.c:	new_bh =
> > alloc_buffer_head(GFP_NOFS|__GFP_NOFAIL);
> 
> Might be.
> 
> Here, I see the following scenario.  With freshly booted server, 1.9Gb RAM,
> slabtop shows about 11K entries in buffer_head slab, and about 1.7Gb free RAM.
> 
> When starting writing from another machine to this one over nfs, buffer_head
> slab grows quite rapidly up to about 450K entries (total size 48940K) and
> free memory drops to almost zero -- this happens in first 1..2 minutes
> (GigE network, writing from /dev/zero using dd).
> 
> The cache does not grow further -- just because there's no free memory for
> growing.  On a 4Gb machine it grows up to about 920K objects.
> 
> And from time to time during write the same warning occurs.  And slows
> down write from ~70Mb/sec (it is almost the actual speed of the target
> drive - it can do ~80Mb/sec) to almost zero for several seconds.
> 

This is the memory information printed with your page allocation failure:

Jun 13 17:06:42 gnome vmunix: Mem-Info:
Jun 13 17:06:42 gnome vmunix: DMA per-cpu:
Jun 13 17:06:42 gnome vmunix: CPU    0: hi:    0, btch:   1 usd:   0
Jun 13 17:06:42 gnome vmunix: DMA32 per-cpu:
Jun 13 17:06:42 gnome vmunix: CPU    0: hi:  186, btch:  31 usd: 170
Jun 13 17:06:42 gnome vmunix: Active_anon:4641 active_file:35865 inactive_anon:16138
Jun 13 17:06:42 gnome vmunix:  inactive_file:417340 unevictable:451 dirty:1330 writeback:13820 unstable:0
Jun 13 17:06:42 gnome vmunix:  free:2460 slab:16669 mapped:3659 pagetables:304 bounce:0
Jun 13 17:06:42 gnome vmunix: DMA free:7760kB min:24kB low:28kB high:36kB active_anon:0kB inactive_anon:84kB active_file:760kB inactive_file
Jun 13 17:06:42 gnome vmunix: lowmem_reserve[]: 0 1938 1938 1938

ZONE_DMA is inaccessible, just like Justin's machine:
7760K free < 24K min + (1938 pages * 4K/page).

Jun 13 17:06:42 gnome vmunix: DMA32 free:2080kB min:5620kB low:7024kB high:8428kB active_anon:18564kB inactive_anon:64468kB active_file:1427
Jun 13 17:06:42 gnome vmunix: lowmem_reserve[]: 0 0 0 0

And ZONE_DMA32 is far below its minimum watermark.  I mentioned in 
response to David Miller earlier that GFP_ATOMIC allocations can access 
beyond its minimum watermark; this is a good example of that.  For 
__GFP_HIGH allocations, the minimum watermark is halved, so this zone is 
oom because 2080K free < (5620K min / 2).
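
As with the per-zone arithmetic earlier in the thread, this is easy to
re-check with the numbers from the dump (a tiny sketch; the kernel's
real test also accounts for lowmem_reserve and the allocation order):

#include <stdio.h>

int main(void)
{
	long free_kb = 2080;		/* DMA32 "free:" from the dump above   */
	long min_kb  = 5620;		/* DMA32 "min:" watermark              */
	long high_min_kb = min_kb / 2;	/* __GFP_HIGH halves the min watermark */

	printf("DMA32: %ldkB free vs %ldkB needed -> %s\n",
	       free_kb, high_min_kb,
	       free_kb < high_min_kb ? "GFP_ATOMIC allocation fails" : "ok");
	return 0;
}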

Notice that you have 13820 pages under writeback, however.  That's almost 
54M of memory being written back compared to 65M of slab total.

So this page allocation failure is only indicating that we're failing 
because it's GFP_ATOMIC and we can't do any direct reclaim.  All other 
memory allocations can block and can writeback pages to free memory, so 
while it is still stressing the VM, we don't get the same failure messages 
for such allocations.  The page allocator simply blocks and retries the 
allocation again; no oom killing occurs because reclaim makes progress 
each time, certainly because of the pages under writeback.

pdflush will do this in the background so all is not lost if subsequent 
__GFP_WAIT allocations do not trigger reclaim.  You may find it helpful to 
tune /proc/sys/vm/dirty_background_ratio to be lower to start background 
writeback sooner under such stress.  Details are in
Documentation/sysctl/vm.txt.

> > > As far as I can see, the warning itself, while harmless, indicates some
> > > deeper problem.  Namely, we shouldn't have an OOM condition - the system
> > > is doing nothing but NFS, there's only one NFS client which writes single
> > > large file, the system has 2GB (or 4Gb on another machine) RAM.  It should
> > > not OOM to start with.
> > 
> > Thanks to the page allocation failure that Justin posted earlier, which
> > shows the state of the available system memory, it shows that the machine
> > truly is oom.  You seem to have isolated that to an enormous amount of
> > buffer_head slab, which is a good start.
> 
> It's not really slabs it seems.  In my case the total amount of buffer_heads
> is about 49Mb which is very small compared with the amount of memory on the
> system.  But as far as I can *guess* buffer_head is just that - head, a
> pointer to some other place...  Unwritten or cached data?
> 

While 49M may seem rather small compared to your 2G system, it represents 
75% of slab allocations as shown in your page allocation failure.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-18 16:56                 ` David Rientjes
@ 2009-06-18 19:00                   ` David Miller
  2009-06-18 19:23                     ` David Rientjes
  0 siblings, 1 reply; 33+ messages in thread
From: David Miller @ 2009-06-18 19:00 UTC (permalink / raw)
  To: rientjes; +Cc: eric.dumazet, jpiszcz, linux-kernel

From: David Rientjes <rientjes@google.com>
Date: Thu, 18 Jun 2009 09:56:14 -0700 (PDT)

> I disagree, page allocation failure messages show vital information about 
> the state of the VM so that we can find bugs and GFP_ATOMIC allocations 
> are the most common trigger for these diagnostic messages since 
> __GFP_WAIT allocations can trigger direct reclaim (and __GFP_FS 
> allocations can trigger the oom killer) to free memory and will retry the 
> allocation if ~__GFP_NORETRY.

It's COMPLETELY and ABSOLUTELY normal for GFP_ATOMIC allocations to
fail in the networking.

If you warn it will just spam the logs, and on a router forwarding
millions of packets per second are you sure that can ever be sane?

Use statistics and tracing if necessary, but log spam no way...

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-18 19:00                   ` David Miller
@ 2009-06-18 19:23                     ` David Rientjes
  2009-06-18 19:37                       ` David Miller
  0 siblings, 1 reply; 33+ messages in thread
From: David Rientjes @ 2009-06-18 19:23 UTC (permalink / raw)
  To: David Miller; +Cc: eric.dumazet, jpiszcz, linux-kernel

On Thu, 18 Jun 2009, David Miller wrote:

> > I disagree, page allocation failure messages show vital information about 
> > the state of the VM so that we can find bugs and GFP_ATOMIC allocations 
> > are the most common trigger for these diagnostic messages since 
> > __GFP_WAIT allocations can trigger direct reclaim (and __GFP_FS 
> > allocations can trigger the oom killer) to free memory and will retry the 
> > allocation if ~__GFP_NORETRY.
> 
> It's COMPLETELY and ABSOLUTELY normal for GFP_ATOMIC allocations to
> fail in the networking.
> 

__GFP_NOWARN exists for that reason.

> If you warn it will just spam the logs, and on a router forwarding
> millions of packets per second are you sure that can ever be sane?
> 

The spamming is ratelimited, but GFP_ATOMIC is really the only time we get 
such diagnostic information since __GFP_WAIT allocations can reclaim, 
__GFP_FS allocations can utilize the oom killer, and other order-0 
allocations are implicitly ~__GFP_NORETRY.

As previously mentioned, GFP_ATOMIC allocations that are not __GFP_NOWARN 
have been emitting these diagnostics since 2.5.53.  This has been on your 
TODO list for 6 1/2 years and now you insist all GFP_ATOMIC allocations 
change their default behavior?

I understand what you're trying to avoid, but I disagree with the approach 
of altering the default behavior of GFP_ATOMIC.  I may suggest that 
emitting the page allocation failures become a compile time option; 
CONFIG_DEBUG_VM would be my suggestion.

> Use statistics and tracing if necessary, but log spam no way...
> 

You need the meminfo that is emitted at the time of failure for it to be 
useful.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-18 19:23                     ` David Rientjes
@ 2009-06-18 19:37                       ` David Miller
  2009-06-19 19:45                         ` David Rientjes
  2009-06-19 20:41                         ` Eric W. Biederman
  0 siblings, 2 replies; 33+ messages in thread
From: David Miller @ 2009-06-18 19:37 UTC (permalink / raw)
  To: rientjes; +Cc: eric.dumazet, jpiszcz, linux-kernel

From: David Rientjes <rientjes@google.com>
Date: Thu, 18 Jun 2009 12:23:28 -0700 (PDT)

> On Thu, 18 Jun 2009, David Miller wrote:
> 
>> > I disagree, page allocation failure messages show vital information about 
>> > the state of the VM so that we can find bugs and GFP_ATOMIC allocations 
>> > are the most common trigger for these diagnostic messages since 
>> > __GFP_WAIT allocations can trigger direct reclaim (and __GFP_FS 
>> > allocations can trigger the oom killer) to free memory and will retry the 
>> > allocation if ~__GFP_NORETRY.
>> 
>> It's COMPLETELY and ABSOLUTELY normal for GFP_ATOMIC allocations to
>> fail in the networking.
>> 
> 
> __GFP_NOWARN exists for that reason.

You're going to have to put that into every driver, every part of
the core networking, every protocol.

That's dumb.

> I understand what you're trying to avoid, but I disagree with the
> approach of altering the default behavior of GFP_ATOMIC.

The default got changed at some point because it never did
crap like this before.

> I may suggest that emitting the page allocation failures become a
> compile time option; CONFIG_DEBUG_VM would be my suggestion.

Use statistics gathering and tracing for this, not log spam.

It serves all of your needs without spewing junk into the log.  It
allows complete diagnosis and gathering of whatever information you
may need.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-18 19:37                       ` David Miller
@ 2009-06-19 19:45                         ` David Rientjes
  2009-06-19 20:41                         ` Eric W. Biederman
  1 sibling, 0 replies; 33+ messages in thread
From: David Rientjes @ 2009-06-19 19:45 UTC (permalink / raw)
  To: David Miller; +Cc: eric.dumazet, jpiszcz, linux-kernel

On Thu, 18 Jun 2009, David Miller wrote:

> > I understand what you're trying to avoid, but I disagree with the
> > approach of altering the default behavior of GFP_ATOMIC.
> 
> The default got changed at some point because it never did
> crap like this before.
> 

Wrong, the page allocator has warned about page allocation failures when 
__GFP_NOWARN was not specified since 2.5.53.  __GFP_NOWARN was never a 
default for GFP_ATOMIC.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-18 19:37                       ` David Miller
  2009-06-19 19:45                         ` David Rientjes
@ 2009-06-19 20:41                         ` Eric W. Biederman
  2009-06-19 22:37                           ` David Rientjes
  2009-06-19 23:03                           ` David Miller
  1 sibling, 2 replies; 33+ messages in thread
From: Eric W. Biederman @ 2009-06-19 20:41 UTC (permalink / raw)
  To: David Miller; +Cc: rientjes, eric.dumazet, jpiszcz, linux-kernel

David Miller <davem@davemloft.net> writes:

> From: David Rientjes <rientjes@google.com>
> Date: Thu, 18 Jun 2009 12:23:28 -0700 (PDT)
>
>> On Thu, 18 Jun 2009, David Miller wrote:
>> 
>>> > I disagree, page allocation failure messages show vital information about 
>>> > the state of the VM so that we can find bugs and GFP_ATOMIC allocations 
>>> > are the most common trigger for these diagnostic messages since 
>>> > __GFP_WAIT allocations can trigger direct reclaim (and __GFP_FS 
>>> > allocations can trigger the oom killer) to free memory and will retry the 
>>> > allocation if ~__GFP_NORETRY.
>>> 
>>> It's COMPLETELY and ABSOLUTELY normal for GFP_ATOMIC allocations to
>>> fail in the networking.
>>> 
>> 
>> __GFP_NOWARN exists for that reason.
>
> You're going to have to put that into every driver, every part of
> the core networking, every protocol.
>
> That's dumb.
>
>> I understand what you're trying to avoid, but I disagree with the
>> approach of altering the default behavior of GFP_ATOMIC.
>
> The default got changed at some point because it never did
> crap like this before.

I started seeing this about when I upgraded to 2.6.28.

>> I may suggest that emitting the page allocation failures become a
>> compile time option; CONFIG_DEBUG_VM would be my suggestion.
>
> Use statistics gathering and tracing for this, not log spam.
>
> It serves all of your needs without spewing junk into the log.  It
> allows complete diagnosis and gathering of whatever information you
> may need.

I know my logs are overloaded with this noise, even on my laptop!

But Mr. Rientjes, if you really want the traces I can set up
a script to email them to you every time it happens.  Say about
one a minute from my paltry little farm of machines.

Eric

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-19 20:41                         ` Eric W. Biederman
@ 2009-06-19 22:37                           ` David Rientjes
  2009-06-19 23:04                             ` David Miller
  2009-06-20  1:28                             ` Eric W. Biederman
  2009-06-19 23:03                           ` David Miller
  1 sibling, 2 replies; 33+ messages in thread
From: David Rientjes @ 2009-06-19 22:37 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: David Miller, eric.dumazet, jpiszcz, linux-kernel

On Fri, 19 Jun 2009, Eric W. Biederman wrote:

> But Mr. Rientjes, if you really want the traces I can set up 
> a script to email them to you every time it happens.  Say about
> one a minute from my paltry little farm of machines.
> 

Perhaps you missed my email where I suggested emitting the page allocation 
warnings only when CONFIG_DEBUG_VM is enabled.  It's at 
http://lkml.org/lkml/2009/6/18/355.

We can then keep the __GFP_NOWARN flag to indicate that the warnings 
should never be emitted for that allocation, regardless of the .config.
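
As a rough sketch of that shape (editorial illustration only, not the patch 
at the link above; the helper name is hypothetical, while the flag and the 
config symbol are real):

#include <linux/gfp.h>
#include <linux/kernel.h>

/* Warn only when CONFIG_DEBUG_VM is set, and always honor __GFP_NOWARN. */
static int should_warn_alloc_failure(gfp_t gfp_mask)
{
        if (gfp_mask & __GFP_NOWARN)
                return 0;               /* caller opted out explicitly */
#ifndef CONFIG_DEBUG_VM
        return 0;                       /* proposal: silent by default */
#else
        return printk_ratelimit();      /* keep the existing rate limiting */
#endif
}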

It's funny, though, that the problem that originally started this thread 
was quickly diagnosed because of these messages.  As far as I know, my 
suggestion to increase /proc/sys/vm/dirty_background_ratio to kick pdflush 
earlier has prevented the slab allocation failures and not required 
delayed acks for nfsd.

That was possible because of the page allocation failure messages that 
were noticed by the user; without this evidence, the only symptom would 
have been extremely slow I/O over nfs.  This could involve any number of 
subsystems and I doubt the first reaction would have been to enable 
CONFIG_DEBUG_VM.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-19 20:41                         ` Eric W. Biederman
  2009-06-19 22:37                           ` David Rientjes
@ 2009-06-19 23:03                           ` David Miller
  1 sibling, 0 replies; 33+ messages in thread
From: David Miller @ 2009-06-19 23:03 UTC (permalink / raw)
  To: ebiederm; +Cc: rientjes, eric.dumazet, jpiszcz, linux-kernel

From: ebiederm@xmission.com (Eric W. Biederman)
Date: Fri, 19 Jun 2009 13:41:37 -0700

> But Mr. Rientjes, if you really want the traces I can set up 
> a script to email them to you every time it happens.  Say about
> one a minute from my paltry little farm of machines.

Same here :-)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-19 22:37                           ` David Rientjes
@ 2009-06-19 23:04                             ` David Miller
  2009-06-20  1:28                             ` Eric W. Biederman
  1 sibling, 0 replies; 33+ messages in thread
From: David Miller @ 2009-06-19 23:04 UTC (permalink / raw)
  To: rientjes; +Cc: ebiederm, eric.dumazet, jpiszcz, linux-kernel

From: David Rientjes <rientjes@google.com>
Date: Fri, 19 Jun 2009 15:37:17 -0700 (PDT)

> That was possible because of the page allocation failure messages that 
> were noticed by the user; without this evidence, the only symptom would 
> have been extremely slow I/O over nfs.

That's garbage.

With tracing and statistics you could have noticed it too.

Just because the notification is more annoying and in your
face does not mean it's inherently better.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [patch] ipv4: don't warn about skb ack allocation failures
  2009-06-19 22:37                           ` David Rientjes
  2009-06-19 23:04                             ` David Miller
@ 2009-06-20  1:28                             ` Eric W. Biederman
  1 sibling, 0 replies; 33+ messages in thread
From: Eric W. Biederman @ 2009-06-20  1:28 UTC (permalink / raw)
  To: David Rientjes; +Cc: David Miller, eric.dumazet, jpiszcz, linux-kernel

David Rientjes <rientjes@google.com> writes:

> On Fri, 19 Jun 2009, Eric W. Biederman wrote:
>
>> But Mr. Rientjes, if you really want the traces I can set up 
>> a script to email them to you every time it happens.  Say about
>> one a minute from my paltry little farm of machines.
>> 
>
> Perhaps you missed my email where I suggested emitting the page allocation 
> warnings only when CONFIG_DEBUG_VM is enabled.  It's at 
> http://lkml.org/lkml/2009/6/18/355.
>
> We can then keep the __GFP_NOWARN flag to indicate that the warnings 
> should never be emitted for that allocation, regardless of the .config.
>
> It's funny, though, that the problem that originally started this thread 
> was quickly diagnosed because of these messages.  As far as I know, my 
> suggestion to increase /proc/sys/vm/dirty_background_ratio to kick pdflush 
> earlier has prevented the slab allocation failures and not required 
> delayed acks for nfsd.

increase?

Perhaps then the problem is simply dirty_background_ratio.  Is the vm
not properly autotuning?

With a 50 MB/s disk I wonder what the proper window size is.  Several
gigabytes, as implied by a 5% or a 10% dirty_background_ratio, seems
absurd.  TCP sockets seem to get along fine with even large latencies
and windows measured in megabytes, not gigabytes.
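
For rough scale (an illustrative userspace calculation, not from the thread; 
the kernel's dirtyable-memory accounting is more involved than plain RAM 
size), the byte threshold implied by a given ratio depends entirely on how 
much memory the machine has:

#include <stdio.h>

int main(void)
{
        const unsigned long long mem_bytes[] = { 8ULL << 30, 64ULL << 30 };
        const unsigned int ratios[] = { 5, 10 };

        /* threshold ~= total memory * dirty_background_ratio / 100 */
        for (int m = 0; m < 2; m++)
                for (int r = 0; r < 2; r++)
                        printf("%3llu GB RAM, %2u%% -> ~%llu MB dirty before "
                               "background writeback starts\n",
                               mem_bytes[m] >> 30, ratios[r],
                               mem_bytes[m] * ratios[r] / 100 >> 20);
        return 0;
}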

This does explain a few things.

Eric

^ permalink raw reply	[flat|nested] 33+ messages in thread

* 2.6.30: nfsd: page allocation failure - nfsd or kernel problem? (again with 2.6.30)
       [not found] <alpine.DEB.2.00.0906161203160.27742@p34.internal.lan>
  2009-06-16 16:06 ` 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem? Justin Piszcz
@ 2009-06-22 16:08 ` Justin Piszcz
  1 sibling, 0 replies; 33+ messages in thread
From: Justin Piszcz @ 2009-06-22 16:08 UTC (permalink / raw)
  To: linux-kernel

Package: nfs-kernel-server
Version: 1.1.6-1
Distribution: Debian Testing
Architecture: 64-bit

This time with 2.6.30 (and 16384 in min_free_kbytes).
echo 16384 > /proc/sys/vm/min_free_kbytes

[415964.022306] nfsd: page allocation failure. order:0, mode:0x20
[415964.022311] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.022313] Call Trace:
[415964.022315]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.022326]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.022328]  [<ffffffff802a6feb>] ? __kmalloc+0xdb/0xe0
[415964.022332]  [<ffffffff805a612d>] ? __alloc_skb+0x6d/0x160
[415964.022334]  [<ffffffff805a6ed7>] ? __netdev_alloc_skb+0x17/0x40
[415964.022338]  [<ffffffff80509d73>] ? e1000_alloc_rx_buffers+0x2b3/0x360
[415964.022340]  [<ffffffff8050a173>] ? e1000_clean_rx_irq+0x2d3/0x3a0
[415964.022342]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.022346]  [<ffffffff8026ff2b>] ? getnstimeofday+0x5b/0xe0
[415964.022349]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.022353]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.022356]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.022359]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.022361]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.022363]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.022364]  <EOI>  [<ffffffff8063e670>] ? _spin_lock+0x10/0x20
[415964.022370]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.022373]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.022375]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.022375]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.022375]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.022375]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.022375]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.022375]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.022375]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.022375]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.022375]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.022375]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.022375]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.022375]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.022375]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.022375]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.022375]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.022375]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.022375]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.022375]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.022375]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.022375]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.022375]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.022375]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.022375] Mem-Info:
[415964.022375] DMA per-cpu:
[415964.022375] CPU    0: hi:    0, btch:   1 usd:   0
[415964.022375] CPU    1: hi:    0, btch:   1 usd:   0
[415964.022375] CPU    2: hi:    0, btch:   1 usd:   0
[415964.022375] CPU    3: hi:    0, btch:   1 usd:   0
[415964.022375] DMA32 per-cpu:
[415964.022375] CPU    0: hi:  186, btch:  31 usd: 179
[415964.022375] CPU    1: hi:  186, btch:  31 usd: 156
[415964.022375] CPU    2: hi:  186, btch:  31 usd: 225
[415964.022375] CPU    3: hi:  186, btch:  31 usd: 198
[415964.022375] Normal per-cpu:
[415964.022375] CPU    0: hi:  186, btch:  31 usd: 183
[415964.022375] CPU    1: hi:  186, btch:  31 usd: 176
[415964.022375] CPU    2: hi:  186, btch:  31 usd: 178
[415964.022375] CPU    3: hi:  186, btch:  31 usd: 215
[415964.022375] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.022375]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.022375]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.022375] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.022375] lowmem_reserve[]: 0 3246 7980 7980
[415964.022375] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.022375] lowmem_reserve[]: 0 0 4734 4734
[415964.022375] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.022375] lowmem_reserve[]: 0 0 0 0
[415964.022375] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.022375] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.022375] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.022375] 827035 total pagecache pages
[415964.022375] 4728 pages in swap cache
[415964.022375] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.022375] Free swap  = 16756356kB
[415964.022375] Total swap = 16787768kB
[415964.022375] 2277376 pages RAM
[415964.022375] 252254 pages reserved
[415964.022375] 546309 pages shared
[415964.022375] 1520221 pages non-shared
[415964.060817] nfsd: page allocation failure. order:0, mode:0x20
[415964.060822] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.060823] Call Trace:
[415964.060825]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.060835]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.060837]  [<ffffffff802a6feb>] ? __kmalloc+0xdb/0xe0
[415964.060840]  [<ffffffff805a612d>] ? __alloc_skb+0x6d/0x160
[415964.060842]  [<ffffffff805a6ed7>] ? __netdev_alloc_skb+0x17/0x40
[415964.060846]  [<ffffffff80509d73>] ? e1000_alloc_rx_buffers+0x2b3/0x360
[415964.060848]  [<ffffffff8050a100>] ? e1000_clean_rx_irq+0x260/0x3a0
[415964.060851]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.060853]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.060857]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.060860]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.060863]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.060865]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.060867]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.060869]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.060874]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.060877]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.060880]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.060883]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.060887]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.060889]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.060893]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.060895]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.060898]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.060901]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.060904]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.060906]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.060908]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.060912]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.060914]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.060916]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.060919]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.060922]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.060924]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.060926]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.060929]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.060932]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.060934]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.060936]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.060937] Mem-Info:
[415964.060939] DMA per-cpu:
[415964.060941] CPU    0: hi:    0, btch:   1 usd:   0
[415964.060942] CPU    1: hi:    0, btch:   1 usd:   0
[415964.060944] CPU    2: hi:    0, btch:   1 usd:   0
[415964.060945] CPU    3: hi:    0, btch:   1 usd:   0
[415964.060947] DMA32 per-cpu:
[415964.060948] CPU    0: hi:  186, btch:  31 usd: 179
[415964.060950] CPU    1: hi:  186, btch:  31 usd: 156
[415964.060952] CPU    2: hi:  186, btch:  31 usd: 225
[415964.060953] CPU    3: hi:  186, btch:  31 usd: 198
[415964.060954] Normal per-cpu:
[415964.060956] CPU    0: hi:  186, btch:  31 usd: 183
[415964.060957] CPU    1: hi:  186, btch:  31 usd: 176
[415964.060959] CPU    2: hi:  186, btch:  31 usd: 178
[415964.060960] CPU    3: hi:  186, btch:  31 usd: 215
[415964.060964] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.060964]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.060965]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.060969] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.060971] lowmem_reserve[]: 0 3246 7980 7980
[415964.060976] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.060978] lowmem_reserve[]: 0 0 4734 4734
[415964.060983] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.060985] lowmem_reserve[]: 0 0 0 0
[415964.060988] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.060996] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.061003] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.061010] 827035 total pagecache pages
[415964.061012] 4728 pages in swap cache
[415964.061013] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.061015] Free swap  = 16756356kB
[415964.061016] Total swap = 16787768kB
[415964.061653] 2277376 pages RAM
[415964.061653] 252254 pages reserved
[415964.061653] 546309 pages shared
[415964.061653] 1520221 pages non-shared
[415964.097454] nfsd: page allocation failure. order:0, mode:0x20
[415964.097458] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.097460] Call Trace:
[415964.097462]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.097471]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.097473]  [<ffffffff802a6feb>] ? __kmalloc+0xdb/0xe0
[415964.097477]  [<ffffffff805a612d>] ? __alloc_skb+0x6d/0x160
[415964.097479]  [<ffffffff805a6ed7>] ? __netdev_alloc_skb+0x17/0x40
[415964.097482]  [<ffffffff80509d73>] ? e1000_alloc_rx_buffers+0x2b3/0x360
[415964.097485]  [<ffffffff8050a100>] ? e1000_clean_rx_irq+0x260/0x3a0
[415964.097487]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.097490]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.097494]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.097497]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.097499]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.097502]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.097504]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.097505]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.097511]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.097514]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.097517]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.097520]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.097523]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.097526]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.097530]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.097532]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.097534]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.097539]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.097541]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.097543]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.097546]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.097549]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.097551]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.097554]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.097556]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.097559]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.097561]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.097563]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.097566]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.097568]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.097571]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.097573]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.097574] Mem-Info:
[415964.097575] DMA per-cpu:
[415964.097577] CPU    0: hi:    0, btch:   1 usd:   0
[415964.097579] CPU    1: hi:    0, btch:   1 usd:   0
[415964.097581] CPU    2: hi:    0, btch:   1 usd:   0
[415964.097582] CPU    3: hi:    0, btch:   1 usd:   0
[415964.097583] DMA32 per-cpu:
[415964.097585] CPU    0: hi:  186, btch:  31 usd: 179
[415964.097586] CPU    1: hi:  186, btch:  31 usd: 156
[415964.097588] CPU    2: hi:  186, btch:  31 usd: 225
[415964.097590] CPU    3: hi:  186, btch:  31 usd: 198
[415964.097591] Normal per-cpu:
[415964.097592] CPU    0: hi:  186, btch:  31 usd: 183
[415964.097594] CPU    1: hi:  186, btch:  31 usd: 176
[415964.097595] CPU    2: hi:  186, btch:  31 usd: 178
[415964.097597] CPU    3: hi:  186, btch:  31 usd: 215
[415964.097600] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.097601]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.097602]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.097605] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.097607] lowmem_reserve[]: 0 3246 7980 7980
[415964.097612] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.097615] lowmem_reserve[]: 0 0 4734 4734
[415964.097619] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.097622] lowmem_reserve[]: 0 0 0 0
[415964.097624] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.097632] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.097639] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.097646] 827035 total pagecache pages
[415964.097647] 4728 pages in swap cache
[415964.097649] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.097651] Free swap  = 16756356kB
[415964.097652] Total swap = 16787768kB
[415964.098375] 2277376 pages RAM
[415964.098375] 252254 pages reserved
[415964.098375] 546309 pages shared
[415964.098375] 1520221 pages non-shared
[415964.134117] nfsd: page allocation failure. order:0, mode:0x20
[415964.134121] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.134123] Call Trace:
[415964.134125]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.134134]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.134136]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.134140]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.134144]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.134146]  [<ffffffff805e867d>] ? tcp_rcv_established+0x7bd/0x940
[415964.134149]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.134151]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.134154]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.134156]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.134158]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.134162]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.134164]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.134167]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.134170]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.134174]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.134176]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.134178]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.134180]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.134182]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.134188]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.134190]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.134194]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.134197]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.134200]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.134203]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.134206]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.134209]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.134211]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.134215]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.134217]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.134220]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.134222]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.134225]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.134228]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.134230]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.134232]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.134236]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.134238]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.134240]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.134243]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.134245]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.134247]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.134249]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.134251] Mem-Info:
[415964.134252] DMA per-cpu:
[415964.134254] CPU    0: hi:    0, btch:   1 usd:   0
[415964.134256] CPU    1: hi:    0, btch:   1 usd:   0
[415964.134258] CPU    2: hi:    0, btch:   1 usd:   0
[415964.134260] CPU    3: hi:    0, btch:   1 usd:   0
[415964.134261] DMA32 per-cpu:
[415964.134263] CPU    0: hi:  186, btch:  31 usd: 179
[415964.134265] CPU    1: hi:  186, btch:  31 usd: 156
[415964.134266] CPU    2: hi:  186, btch:  31 usd: 225
[415964.134268] CPU    3: hi:  186, btch:  31 usd: 198
[415964.134269] Normal per-cpu:
[415964.134271] CPU    0: hi:  186, btch:  31 usd: 183
[415964.134273] CPU    1: hi:  186, btch:  31 usd: 176
[415964.134275] CPU    2: hi:  186, btch:  31 usd: 178
[415964.134276] CPU    3: hi:  186, btch:  31 usd: 215
[415964.134280] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.134281]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.134282]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.134285] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.134288] lowmem_reserve[]: 0 3246 7980 7980
[415964.134293] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.134297] lowmem_reserve[]: 0 0 4734 4734
[415964.134301] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.134304] lowmem_reserve[]: 0 0 0 0
[415964.134307] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.134315] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.134323] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.134330] 827035 total pagecache pages
[415964.134332] 4728 pages in swap cache
[415964.134334] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.134335] Free swap  = 16756356kB
[415964.134336] Total swap = 16787768kB
[415964.154129] 2277376 pages RAM
[415964.154129] 252254 pages reserved
[415964.154129] 546309 pages shared
[415964.154129] 1520221 pages non-shared
[415964.164297] nfsd: page allocation failure. order:0, mode:0x20
[415964.164301] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.164303] Call Trace:
[415964.164305]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.164315]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.164317]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.164320]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.164323]  [<ffffffff805a6ed7>] ? __netdev_alloc_skb+0x17/0x40
[415964.164326]  [<ffffffff80509d73>] ? e1000_alloc_rx_buffers+0x2b3/0x360
[415964.164329]  [<ffffffff8050a100>] ? e1000_clean_rx_irq+0x260/0x3a0
[415964.164331]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.164334]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.164337]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.164340]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.164343]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.164345]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.164347]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.164348]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.164354]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.164356]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.164360]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.164363]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.164366]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.164368]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.164372]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.164374]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.164376]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.164380]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.164382]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.164384]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.164387]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.164390]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.164392]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.164394]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.164397]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.164400]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.164402]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.164404]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.164407]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.164409]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.164411]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.164413]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.164414] Mem-Info:
[415964.164416] DMA per-cpu:
[415964.164418] CPU    0: hi:    0, btch:   1 usd:   0
[415964.164419] CPU    1: hi:    0, btch:   1 usd:   0
[415964.164421] CPU    2: hi:    0, btch:   1 usd:   0
[415964.164422] CPU    3: hi:    0, btch:   1 usd:   0
[415964.164424] DMA32 per-cpu:
[415964.164425] CPU    0: hi:  186, btch:  31 usd: 179
[415964.164427] CPU    1: hi:  186, btch:  31 usd: 156
[415964.164429] CPU    2: hi:  186, btch:  31 usd: 225
[415964.164430] CPU    3: hi:  186, btch:  31 usd: 198
[415964.164431] Normal per-cpu:
[415964.164433] CPU    0: hi:  186, btch:  31 usd: 183
[415964.164434] CPU    1: hi:  186, btch:  31 usd: 176
[415964.164436] CPU    2: hi:  186, btch:  31 usd: 178
[415964.164437] CPU    3: hi:  186, btch:  31 usd: 215
[415964.164440] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.164441]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.164442]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.164445] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.164448] lowmem_reserve[]: 0 3246 7980 7980
[415964.164452] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.164455] lowmem_reserve[]: 0 0 4734 4734
[415964.164459] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.164462] lowmem_reserve[]: 0 0 0 0
[415964.164465] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.164472] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.164479] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.164486] 827035 total pagecache pages
[415964.164488] 4728 pages in swap cache
[415964.164489] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.164491] Free swap  = 16756356kB
[415964.164492] Total swap = 16787768kB
[415964.165269] 2277376 pages RAM
[415964.165269] 252254 pages reserved
[415964.165269] 546309 pages shared
[415964.165269] 1520221 pages non-shared
[415964.193652] nfsd: page allocation failure. order:0, mode:0x20
[415964.193656] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.193657] Call Trace:
[415964.193659]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.193667]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.193669]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.193672]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.193676]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.193678]  [<ffffffff805e867d>] ? tcp_rcv_established+0x7bd/0x940
[415964.193681]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.193683]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.193686]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.193688]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.193690]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.193693]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.193696]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.193698]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.193702]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.193705]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.193707]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.193709]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.193712]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.193713]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.193718]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.193721]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.193724]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.193726]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.193730]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.193732]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.193735]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.193738]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.193740]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.193743]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.193746]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.193748]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.193751]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.193754]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.193756]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.193758]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.193760]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.193764]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.193766]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.193768]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.193771]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.193773]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.193775]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.193777]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.193778] Mem-Info:
[415964.193780] DMA per-cpu:
[415964.193781] CPU    0: hi:    0, btch:   1 usd:   0
[415964.193783] CPU    1: hi:    0, btch:   1 usd:   0
[415964.193785] CPU    2: hi:    0, btch:   1 usd:   0
[415964.193787] CPU    3: hi:    0, btch:   1 usd:   0
[415964.193788] DMA32 per-cpu:
[415964.193790] CPU    0: hi:  186, btch:  31 usd: 179
[415964.193792] CPU    1: hi:  186, btch:  31 usd: 156
[415964.193794] CPU    2: hi:  186, btch:  31 usd: 225
[415964.193795] CPU    3: hi:  186, btch:  31 usd: 198
[415964.193797] Normal per-cpu:
[415964.193798] CPU    0: hi:  186, btch:  31 usd: 183
[415964.193800] CPU    1: hi:  186, btch:  31 usd: 176
[415964.193802] CPU    2: hi:  186, btch:  31 usd: 178
[415964.193803] CPU    3: hi:  186, btch:  31 usd: 215
[415964.193807] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.193808]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.193809]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.193812] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.193815] lowmem_reserve[]: 0 3246 7980 7980
[415964.193820] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.193823] lowmem_reserve[]: 0 0 4734 4734
[415964.193828] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.193831] lowmem_reserve[]: 0 0 0 0
[415964.193834] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.193842] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.193850] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.193857] 827035 total pagecache pages
[415964.193859] 4728 pages in swap cache
[415964.193860] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.193862] Free swap  = 16756356kB
[415964.193863] Total swap = 16787768kB
[415964.194629] 2277376 pages RAM
[415964.194629] 252254 pages reserved
[415964.194629] 546309 pages shared
[415964.194629] 1520221 pages non-shared
[415964.223033] nfsd: page allocation failure. order:0, mode:0x20
[415964.223037] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.223038] Call Trace:
[415964.223040]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.223048]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.223050]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.223053]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.223057]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.223059]  [<ffffffff805e82ad>] ? tcp_rcv_established+0x3ed/0x940
[415964.223062]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.223064]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.223067]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.223069]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.223071]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.223075]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.223077]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.223080]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.223083]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.223086]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.223088]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.223091]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.223093]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.223094]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.223099]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.223102]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.223105]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.223108]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.223111]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.223113]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.223117]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.223119]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.223121]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.223125]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.223127]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.223130]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.223132]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.223135]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.223137]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.223140]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.223142]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.223145]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.223147]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.223149]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.223152]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.223154]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.223156]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.223158]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.223160] Mem-Info:
[415964.223161] DMA per-cpu:
[415964.223163] CPU    0: hi:    0, btch:   1 usd:   0
[415964.223165] CPU    1: hi:    0, btch:   1 usd:   0
[415964.223166] CPU    2: hi:    0, btch:   1 usd:   0
[415964.223168] CPU    3: hi:    0, btch:   1 usd:   0
[415964.223170] DMA32 per-cpu:
[415964.223171] CPU    0: hi:  186, btch:  31 usd: 179
[415964.223173] CPU    1: hi:  186, btch:  31 usd: 156
[415964.223175] CPU    2: hi:  186, btch:  31 usd: 225
[415964.223177] CPU    3: hi:  186, btch:  31 usd: 198
[415964.223178] Normal per-cpu:
[415964.223179] CPU    0: hi:  186, btch:  31 usd: 183
[415964.223181] CPU    1: hi:  186, btch:  31 usd: 176
[415964.223183] CPU    2: hi:  186, btch:  31 usd: 178
[415964.223185] CPU    3: hi:  186, btch:  31 usd: 215
[415964.223188] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.223189]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.223190]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.223194] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.223197] lowmem_reserve[]: 0 3246 7980 7980
[415964.223202] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.223205] lowmem_reserve[]: 0 0 4734 4734
[415964.223209] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.223213] lowmem_reserve[]: 0 0 0 0
[415964.223216] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.223223] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.223231] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.223238] 827035 total pagecache pages
[415964.223240] 4728 pages in swap cache
[415964.223242] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.223243] Free swap  = 16756356kB
[415964.223244] Total swap = 16787768kB
[415964.224005] 2277376 pages RAM
[415964.224005] 252254 pages reserved
[415964.224005] 546309 pages shared
[415964.224005] 1520221 pages non-shared
[415964.252424] nfsd: page allocation failure. order:0, mode:0x20
[415964.252427] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.252429] Call Trace:
[415964.252431]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.252439]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.252441]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.252444]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.252448]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.252450]  [<ffffffff805e867d>] ? tcp_rcv_established+0x7bd/0x940
[415964.252453]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.252455]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.252458]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.252460]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.252462]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.252466]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.252468]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.252471]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.252474]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.252477]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.252480]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.252482]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.252484]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.252485]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.252491]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.252493]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.252497]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.252499]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.252502]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.252505]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.252508]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.252511]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.252513]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.252516]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.252519]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.252521]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.252523]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.252526]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.252529]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.252531]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.252533]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.252537]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.252539]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.252541]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.252544]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.252546]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.252548]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.252550]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.252551] Mem-Info:
[415964.252553] DMA per-cpu:
[415964.252555] CPU    0: hi:    0, btch:   1 usd:   0
[415964.252556] CPU    1: hi:    0, btch:   1 usd:   0
[415964.252558] CPU    2: hi:    0, btch:   1 usd:   0
[415964.252560] CPU    3: hi:    0, btch:   1 usd:   0
[415964.252562] DMA32 per-cpu:
[415964.252563] CPU    0: hi:  186, btch:  31 usd: 179
[415964.252565] CPU    1: hi:  186, btch:  31 usd: 156
[415964.252567] CPU    2: hi:  186, btch:  31 usd: 225
[415964.252569] CPU    3: hi:  186, btch:  31 usd: 198
[415964.252570] Normal per-cpu:
[415964.252571] CPU    0: hi:  186, btch:  31 usd: 183
[415964.252573] CPU    1: hi:  186, btch:  31 usd: 176
[415964.252575] CPU    2: hi:  186, btch:  31 usd: 178
[415964.252577] CPU    3: hi:  186, btch:  31 usd: 215
[415964.252580] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.252581]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.252582]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.252586] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.252588] lowmem_reserve[]: 0 3246 7980 7980
[415964.252593] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.252597] lowmem_reserve[]: 0 0 4734 4734
[415964.252601] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.252604] lowmem_reserve[]: 0 0 0 0
[415964.252607] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.252615] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.252622] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.252630] 827035 total pagecache pages
[415964.252631] 4728 pages in swap cache
[415964.252633] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.252635] Free swap  = 16756356kB
[415964.252636] Total swap = 16787768kB
[415964.253399] 2277376 pages RAM
[415964.253399] 252254 pages reserved
[415964.253399] 546309 pages shared
[415964.253399] 1520221 pages non-shared
[415964.281793] nfsd: page allocation failure. order:0, mode:0x20
[415964.281797] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.281798] Call Trace:
[415964.281800]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.281808]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.281810]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.281813]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.281817]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.281819]  [<ffffffff805e82ad>] ? tcp_rcv_established+0x3ed/0x940
[415964.281822]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.281824]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.281827]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.281829]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.281831]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.281835]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.281837]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.281840]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.281843]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.281846]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.281849]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.281851]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.281853]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.281854]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.281859]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.281862]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.281865]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.281868]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.281871]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.281874]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.281877]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.281879]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.281882]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.281885]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.281888]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.281890]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.281892]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.281895]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.281897]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.281900]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.281902]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.281906]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.281908]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.281910]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.281912]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.281915]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.281917]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.281919]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.281920] Mem-Info:
[415964.281922] DMA per-cpu:
[415964.281923] CPU    0: hi:    0, btch:   1 usd:   0
[415964.281925] CPU    1: hi:    0, btch:   1 usd:   0
[415964.281927] CPU    2: hi:    0, btch:   1 usd:   0
[415964.281929] CPU    3: hi:    0, btch:   1 usd:   0
[415964.281930] DMA32 per-cpu:
[415964.281932] CPU    0: hi:  186, btch:  31 usd: 179
[415964.281934] CPU    1: hi:  186, btch:  31 usd: 156
[415964.281936] CPU    2: hi:  186, btch:  31 usd: 225
[415964.281937] CPU    3: hi:  186, btch:  31 usd: 198
[415964.281939] Normal per-cpu:
[415964.281940] CPU    0: hi:  186, btch:  31 usd: 183
[415964.281942] CPU    1: hi:  186, btch:  31 usd: 176
[415964.281944] CPU    2: hi:  186, btch:  31 usd: 178
[415964.281945] CPU    3: hi:  186, btch:  31 usd: 215
[415964.281949] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.281950]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.281951]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.281955] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.281957] lowmem_reserve[]: 0 3246 7980 7980
[415964.281962] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.281966] lowmem_reserve[]: 0 0 4734 4734
[415964.281970] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.281973] lowmem_reserve[]: 0 0 0 0
[415964.281976] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.281984] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.281991] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.281999] 827035 total pagecache pages
[415964.282001] 4728 pages in swap cache
[415964.282002] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.282004] Free swap  = 16756356kB
[415964.282005] Total swap = 16787768kB
[415964.282766] 2277376 pages RAM
[415964.282766] 252254 pages reserved
[415964.282766] 546309 pages shared
[415964.282766] 1520221 pages non-shared
[415964.311165] nfsd: page allocation failure. order:0, mode:0x20
[415964.311168] Pid: 2680, comm: nfsd Not tainted 2.6.30 #2
[415964.311170] Call Trace:
[415964.311171]  <IRQ>  [<ffffffff802849ed>] ? __alloc_pages_internal+0x3dd/0x4e0
[415964.311179]  [<ffffffff802a6c77>] ? cache_alloc_refill+0x2d7/0x570
[415964.311182]  [<ffffffff802a707d>] ? kmem_cache_alloc+0x8d/0xa0
[415964.311185]  [<ffffffff805a6109>] ? __alloc_skb+0x49/0x160
[415964.311188]  [<ffffffff805ea846>] ? tcp_send_ack+0x26/0x120
[415964.311191]  [<ffffffff805e867d>] ? tcp_rcv_established+0x7bd/0x940
[415964.311193]  [<ffffffff805efb1d>] ? tcp_v4_do_rcv+0xdd/0x210
[415964.311195]  [<ffffffff805f02d6>] ? tcp_v4_rcv+0x686/0x750
[415964.311198]  [<ffffffff805d235c>] ? ip_local_deliver_finish+0x8c/0x170
[415964.311200]  [<ffffffff805d1e51>] ? ip_rcv_finish+0x191/0x330
[415964.311202]  [<ffffffff805d2237>] ? ip_rcv+0x247/0x2e0
[415964.311206]  [<ffffffff80509fb4>] ? e1000_clean_rx_irq+0x114/0x3a0
[415964.311208]  [<ffffffff8050bd9f>] ? e1000_clean+0x7f/0x2b0
[415964.311211]  [<ffffffff805aaf33>] ? net_rx_action+0x83/0x120
[415964.311214]  [<ffffffff8025897b>] ? __do_softirq+0x7b/0x110
[415964.311217]  [<ffffffff8022d6bc>] ? call_softirq+0x1c/0x30
[415964.311219]  [<ffffffff8022f3a5>] ? do_softirq+0x35/0x70
[415964.311222]  [<ffffffff8022eb65>] ? do_IRQ+0x85/0xf0
[415964.311224]  [<ffffffff8022cf93>] ? ret_from_intr+0x0/0xa
[415964.311225]  <EOI>  [<ffffffff8063e66c>] ? _spin_lock+0xc/0x20
[415964.311230]  [<ffffffff802bbf4c>] ? d_find_alias+0x1c/0x40
[415964.311233]  [<ffffffff802be59d>] ? d_obtain_alias+0x4d/0x140
[415964.311236]  [<ffffffff80341fa3>] ? exportfs_decode_fh+0x63/0x2a0
[415964.311239]  [<ffffffff803459c0>] ? nfsd_acceptable+0x0/0x110
[415964.311242]  [<ffffffff8062a4aa>] ? cache_check+0x4a/0x4d0
[415964.311244]  [<ffffffff8034b4e7>] ? exp_find_key+0x57/0xe0
[415964.311248]  [<ffffffff8059f740>] ? sock_common_recvmsg+0x30/0x50
[415964.311250]  [<ffffffff8034b602>] ? exp_find+0x92/0xa0
[415964.311252]  [<ffffffff80345ea9>] ? fh_verify+0x369/0x680
[415964.311256]  [<ffffffff8024876e>] ? wakeup_preempt_entity+0x9e/0xb0
[415964.311258]  [<ffffffff8024c8ff>] ? try_to_wake_up+0xaf/0x200
[415964.311261]  [<ffffffff803480fe>] ? nfsd_open+0x2e/0x180
[415964.311263]  [<ffffffff803485e4>] ? nfsd_write+0xc4/0x110
[415964.311266]  [<ffffffff8034fb36>] ? nfsd3_proc_write+0xb6/0x160
[415964.311268]  [<ffffffff8034246a>] ? nfsd_dispatch+0xba/0x270
[415964.311271]  [<ffffffff80621667>] ? svc_process+0x4a7/0x800
[415964.311273]  [<ffffffff8024ca50>] ? default_wake_function+0x0/0x10
[415964.311276]  [<ffffffff8063e4b7>] ? __down_read+0x17/0xae
[415964.311278]  [<ffffffff80342b85>] ? nfsd+0xd5/0x160
[415964.311280]  [<ffffffff80342ab0>] ? nfsd+0x0/0x160
[415964.311283]  [<ffffffff80268304>] ? kthread+0x54/0x90
[415964.311285]  [<ffffffff8022d5ba>] ? child_rip+0xa/0x20
[415964.311287]  [<ffffffff802682b0>] ? kthread+0x0/0x90
[415964.311289]  [<ffffffff8022d5b0>] ? child_rip+0x0/0x20
[415964.311291] Mem-Info:
[415964.311292] DMA per-cpu:
[415964.311294] CPU    0: hi:    0, btch:   1 usd:   0
[415964.311296] CPU    1: hi:    0, btch:   1 usd:   0
[415964.311298] CPU    2: hi:    0, btch:   1 usd:   0
[415964.311299] CPU    3: hi:    0, btch:   1 usd:   0
[415964.311301] DMA32 per-cpu:
[415964.311302] CPU    0: hi:  186, btch:  31 usd: 179
[415964.311304] CPU    1: hi:  186, btch:  31 usd: 156
[415964.311306] CPU    2: hi:  186, btch:  31 usd: 225
[415964.311308] CPU    3: hi:  186, btch:  31 usd: 198
[415964.311309] Normal per-cpu:
[415964.311311] CPU    0: hi:  186, btch:  31 usd: 183
[415964.311312] CPU    1: hi:  186, btch:  31 usd: 176
[415964.311314] CPU    2: hi:  186, btch:  31 usd: 178
[415964.311316] CPU    3: hi:  186, btch:  31 usd: 215
[415964.311319] Active_anon:154810 active_file:131162 inactive_anon:33447
[415964.311320]  inactive_file:690987 unevictable:0 dirty:112116 writeback:0 unstable:0
[415964.311321]  free:8662 slab:965366 mapped:9316 pagetables:4618 bounce:0
[415964.311325] DMA free:9692kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:8668kB pages_scanned:0 all_unreclaimable? yes
[415964.311328] lowmem_reserve[]: 0 3246 7980 7980
[415964.311333] DMA32 free:21312kB min:6656kB low:8320kB high:9984kB active_anon:118464kB inactive_anon:23908kB active_file:174708kB inactive_file:1206812kB unevictable:0kB present:3324312kB pages_scanned:0 all_unreclaimable? no
[415964.311336] lowmem_reserve[]: 0 0 4734 4734
[415964.311341] Normal free:3644kB min:9708kB low:12132kB high:14560kB active_anon:500776kB inactive_anon:109880kB active_file:349940kB inactive_file:1557136kB unevictable:0kB present:4848000kB pages_scanned:0 all_unreclaimable? no
[415964.311344] lowmem_reserve[]: 0 0 0 0
[415964.311347] DMA: 3*4kB 4*8kB 3*16kB 4*32kB 0*64kB 2*128kB 2*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9692kB
[415964.311354] DMA32: 3289*4kB 0*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 21332kB
[415964.311362] Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3816kB
[415964.311369] 827035 total pagecache pages
[415964.311371] 4728 pages in swap cache
[415964.311373] Swap cache stats: add 12746, delete 8018, find 16878/17480
[415964.311374] Free swap  = 16756356kB
[415964.311375] Total swap = 16787768kB
[415964.312141] 2277376 pages RAM
[415964.312141] 252254 pages reserved
[415964.312141] 546309 pages shared
[415964.312141] 1520221 pages non-shared

Thread overview: 33+ messages
     [not found] <alpine.DEB.2.00.0906161203160.27742@p34.internal.lan>
2009-06-16 16:06 ` 2.6.29.1: nfsd: page allocation failure - nfsd or kernel problem? Justin Piszcz
2009-06-16 20:19   ` Michael Tokarev
2009-06-17  8:43     ` Michael Tokarev
2009-06-17  9:43       ` Justin Piszcz
2009-06-17 10:39         ` Michael Tokarev
2009-06-17 18:51           ` J. Bruce Fields
2009-06-17 20:24             ` Michael Tokarev
2009-06-17 20:39               ` David Rientjes
2009-06-18  8:54                 ` Michael Tokarev
2009-06-18 17:07                   ` David Rientjes
2009-06-18 17:56                     ` Michael Tokarev
2009-06-18 18:12                       ` J. Bruce Fields
2009-06-18 18:15                       ` David Rientjes
2009-06-17 22:45               ` J. Bruce Fields
2009-06-18  0:14               ` Zdenek Kaspar
2009-06-17 19:44   ` [patch] ipv4: don't warn about skb ack allocation failures David Rientjes
2009-06-17 20:16     ` Eric Dumazet
2009-06-17 20:33       ` David Rientjes
2009-06-17 20:52         ` Eric Dumazet
2009-06-17 21:12           ` David Rientjes
2009-06-17 22:30             ` Eric Dumazet
2009-06-17 23:08               ` David Miller
2009-06-18 16:56                 ` David Rientjes
2009-06-18 19:00                   ` David Miller
2009-06-18 19:23                     ` David Rientjes
2009-06-18 19:37                       ` David Miller
2009-06-19 19:45                         ` David Rientjes
2009-06-19 20:41                         ` Eric W. Biederman
2009-06-19 22:37                           ` David Rientjes
2009-06-19 23:04                             ` David Miller
2009-06-20  1:28                             ` Eric W. Biederman
2009-06-19 23:03                           ` David Miller
2009-06-22 16:08 ` 2.6.30: nfsd: page allocation failure - nfsd or kernel problem? (again with 2.6.30) Justin Piszcz
