linux-kernel.vger.kernel.org archive mirror
* 2.6.12-rc6-mm1 & 2K lun testing
@ 2005-06-15 17:36 Badari Pulavarty
  2005-06-15 18:30 ` Nick Piggin
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-15 17:36 UTC (permalink / raw)
  To: Linux Kernel Mailing List, linux-mm

[-- Attachment #1: Type: text/plain, Size: 660 bytes --]

Hi,

I sniff-tested 2K LUN support with 2.6.12-rc6-mm1 on
my AMD64 box. I had to tweak the qlogic driver and
scsi_scan.c to see all the LUNs.

(2.6.12-rc6 doesn't see all the LUNs due to a max_lun
issue, which is fixed in the scsi-git tree.)
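
For reference, the tweak was along these lines - a sketch only, not
the actual patch (host->max_lun is the real Scsi_Host field, but the
constants and the exact spots I touched are from memory):

	/* qlogic host template setup: let the midlayer probe past
	 * the default LUN limit */
	host->max_lun = 2048;

	/* drivers/scsi/scsi_scan.c: raise the scan limit, normally
	 * tunable at boot via the scsi_mod.max_luns module parameter */
	static unsigned int max_scsi_luns = 2048;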

Test 1:
	Ran "dds" on all 2048 "raw" devices - worked
great, no issues.

Test 2:
	Ran "dds" on 2048 filesystems (one file
per filesystem). This kind of works, but I was expecting
better responsiveness & stability.


Overall - the good news is that it works.

The not-so-good news: with the filesystem tests the machine becomes
unresponsive and logs lots of page allocation failures (all order-0,
mode:0x20 allocations under scsi_get_command() - see the attached
failures.out), but it stays up, completes the tests, and recovers.

Thanks,
Badari


[-- Attachment #2: failures.out --]
[-- Type: text/plain, Size: 53193 bytes --]

elm3b29 login: dd: page allocation failure. order:0, mode:0x20

Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
       <ffffffff802f7b98>{elv_next_request+72} <ffffffff80341c1d>{scsi_request_fn+221}
       <ffffffff802fb61d>{blk_run_queue+61} <ffffffff803420ff>{scsi_end_request+223}
       <ffffffff80342355>{scsi_io_completion+565} <ffffffff8034ada1>{sd_rw_intr+609}
       <ffffffff8033c696>{scsi_finish_command+166} <ffffffff8033c79b>{scsi_softirq+235}
       <ffffffff8013b601>{__do_softirq+113} <ffffffff8013b6b5>{do_softirq+53}
       <ffffffff801107d7>{do_IRQ+71} <ffffffff8010e253>{ret_from_intr+0}
        <EOI> <ffffffff80295ca1>{__memset+65} <ffffffff80187fee>{bio_alloc_bioset+382}
       <ffffffff801a875b>{mpage_alloc+43} <ffffffff801a8b96>{__mpage_writepage+918}
       <ffffffff8024fb10>{jfs_get_block+0} <ffffffff803e9e75>{__wait_on_bit_lock+101}
       <ffffffff8024fb10>{jfs_get_block+0} <ffffffff802fa77f>{submit_bio+223}
       <ffffffff8014c6b0>{wake_bit_function+0} <ffffffff80163b34>{get_writeback_state+52}
       <ffffffff8024fb10>{jfs_get_block+0} <ffffffff801a8d77>{mpage_writepage+55}
       <ffffffff80186fc0>{nobh_writepage+192} <ffffffff8016aa60>{shrink_zone+2720}
       <ffffffff80293351>{__up_read+33} <ffffffff8025e127>{dbAlloc+1095}
       <ffffffff801a85a4>{__mark_inode_dirty+52} <ffffffff80265429>{extAlloc+1129}
       <ffffffff802931b1>{__up_write+49} <ffffffff8024fad9>{jfs_get_blocks+521}
       <ffffffff8015d38c>{find_get_page+92} <ffffffff80184ac5>{__find_get_block_slow+85}
       <ffffffff8016b90d>{try_to_free_pages+317} <ffffffff80163132>{__alloc_pages+610}
       <ffffffff801a85a4>{__mark_inode_dirty+52} <ffffffff8015de7c>{generic_file_buffered_write+412}
       <ffffffff8013a965>{current_fs_time+85} <ffffffff80162e63>{buffered_rmqueue+723}
       <ffffffff8019d9ae>{inode_update_time+62} <ffffffff8015e63a>{__generic_file_aio_write_nolock+938}
       <ffffffff8016d67c>{do_no_page+860} <ffffffff8015e81e>{__generic_file_write_nolock+158}
       <ffffffff80170ade>{zeromap_page_range+990} <ffffffff8014c680>{autoremove_wake_function+0}
       <ffffffff80293351>{__up_read+33} <ffffffff8015e985>{generic_file_write+101}
       <ffffffff801830a9>{vfs_write+233} <ffffffff80183253>{sys_write+83}
       <ffffffff8010dc8e>{system_call+126} 
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:80
cpu 1 cold: low 0, high 62, batch 31 used:58
cpu 2 hot: low 62, high 186, batch 31 used:94
cpu 2 cold: low 0, high 62, batch 31 used:48
cpu 3 hot: low 62, high 186, batch 31 used:126
cpu 3 cold: low 0, high 62, batch 31 used:53
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:174
cpu 1 cold: low 0, high 62, batch 31 used:35
cpu 2 hot: low 62, high 186, batch 31 used:98
cpu 2 cold: low 0, high 62, batch 31 used:61
cpu 3 hot: low 62, high 186, batch 31 used:107
cpu 3 cold: low 0, high 62, batch 31 used:52
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:93
cpu 1 cold: low 0, high 62, batch 31 used:60
cpu 2 hot: low 62, high 186, batch 31 used:109
cpu 2 cold: low 0, high 62, batch 31 used:55
cpu 3 hot: low 62, high 186, batch 31 used:93
cpu 3 cold: low 0, high 62, batch 31 used:57
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:52
cpu 2 hot: low 62, high 186, batch 31 used:89
cpu 2 cold: low 0, high 62, batch 31 used:50
cpu 3 hot: low 62, high 186, batch 31 used:74
cpu 3 cold: low 0, high 62, batch 31 used:59
Node 0 HighMem per-cpu: empty

Free pages:       18864kB (0kB HighMem)
Active:101677 inactive:1471000 dirty:1490116 writeback:13161 unstable:0 free:4716 slab:76480 mapped:98020 pagetables:17144
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:508kB min:1492kB low:1864kB high:2236kB active:92496kB inactive:242628kB present:524284kB pages_scanned:5984 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:512kB min:1492kB low:1864kB high:2236kB active:113072kB inactive:117892kB present:524284kB pages_scanned:13627 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:548kB min:1492kB low:1864kB high:2236kB active:44912kB inactive:209476kB present:524284kB pages_scanned:6452 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:56 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6668kB min:17896kB low:22368kB high:26844kB active:156228kB inactive:5313748kB present:6275068kB pages_scanned:38737 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 1*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 508kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 1*4kB 0*8kB 0*16kB 1*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 548kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 1*4kB 1*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 1*4096kB = 6668kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
716790 pages shared
1 pages swap cached
jfsCommit: page allocation failure. order:0, mode:0x20

Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
       <ffffffff802f7b98>{elv_next_request+72} <ffffffff80341c1d>{scsi_request_fn+221}
       <ffffffff802fb61d>{blk_run_queue+61} <ffffffff803420ff>{scsi_end_request+223}
       <ffffffff80342355>{scsi_io_completion+565} <ffffffff8034ada1>{sd_rw_intr+609}
       <ffffffff8033c696>{scsi_finish_command+166} <ffffffff8033c79b>{scsi_softirq+235}
       <ffffffff8013b601>{__do_softirq+113} <ffffffff8013b6b5>{do_softirq+53}
       <ffffffff801107d7>{do_IRQ+71} <ffffffff8010e253>{ret_from_intr+0}
        <EOI> <ffffffff8016a639>{shrink_zone+1657} <ffffffff8016a639>{shrink_zone+1657}
       <ffffffff8016b90d>{try_to_free_pages+317} <ffffffff80163132>{__alloc_pages+610}
       <ffffffff80265860>{metapage_readpage+0} <ffffffff8017c183>{alloc_page_interleave+67}
       <ffffffff8015ec62>{read_cache_page+82} <ffffffff8026654f>{__get_metapage+271}
       <ffffffff8015d413>{unlock_page+35} <ffffffff80266898>{__get_metapage+1112}
       <ffffffff80131893>{__wake_up+67} <ffffffff8025e4c8>{dbUpdatePMap+376}
       <ffffffff80258165>{diUpdatePMap+805} <ffffffff8026abff>{txUpdateMap+495}
       <ffffffff8026c66f>{jfs_lazycommit+271} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff80130e20>{default_wake_function+0} <ffffffff80133050>{schedule_tail+64}
       <ffffffff8010e967>{child_rip+8} <ffffffff8026c560>{jfs_lazycommit+0}
       <ffffffff8010e95f>{child_rip+0} 
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:58
cpu 2 hot: low 62, high 186, batch 31 used:116
cpu 2 cold: low 0, high 62, batch 31 used:49
cpu 3 hot: low 62, high 186, batch 31 used:92
cpu 3 cold: low 0, high 62, batch 31 used:53
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:174
cpu 1 cold: low 0, high 62, batch 31 used:35
cpu 2 hot: low 62, high 186, batch 31 used:100
cpu 2 cold: low 0, high 62, batch 31 used:61
cpu 3 hot: low 62, high 186, batch 31 used:108
cpu 3 cold: low 0, high 62, batch 31 used:52
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:93
cpu 1 cold: low 0, high 62, batch 31 used:60
cpu 2 hot: low 62, high 186, batch 31 used:134
cpu 2 cold: low 0, high 62, batch 31 used:55
cpu 3 hot: low 62, high 186, batch 31 used:93
cpu 3 cold: low 0, high 62, batch 31 used:57
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:53
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:46
cpu 3 hot: low 62, high 186, batch 31 used:74
cpu 3 cold: low 0, high 62, batch 31 used:59
Node 0 HighMem per-cpu: empty

Free pages:       18896kB (0kB HighMem)
Active:100907 inactive:1466089 dirty:1489475 writeback:13761 unstable:0 free:4724 slab:76736 mapped:97911 pagetables:17120
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:540kB min:1492kB low:1864kB high:2236kB active:92064kB inactive:237628kB present:524284kB pages_scanned:442 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:512kB min:1492kB low:1864kB high:2236kB active:113072kB inactive:111876kB present:524284kB pages_scanned:19243 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:548kB min:1492kB low:1864kB high:2236kB active:42372kB inactive:208688kB present:524284kB pages_scanned:10176 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:56 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6668kB min:17896kB low:22368kB high:26844kB active:156120kB inactive:5306204kB present:6275068kB pages_scanned:10672 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 1*4kB 3*8kB 0*16kB 2*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 540kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 1*4kB 0*8kB 0*16kB 1*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 548kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 1*4kB 1*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 1*4096kB = 6668kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
716887 pages shared
1 pages swap cached
jfsCommit: page allocation failure. order:0, mode:0x20

Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
       <ffffffff802f7b98>{elv_next_request+72} <ffffffff80341c1d>{scsi_request_fn+221}
       <ffffffff802fb61d>{blk_run_queue+61} <ffffffff803420ff>{scsi_end_request+223}
       <ffffffff80342355>{scsi_io_completion+565} <ffffffff8034ada1>{sd_rw_intr+609}
       <ffffffff8033c696>{scsi_finish_command+166} <ffffffff8033c79b>{scsi_softirq+235}
       <ffffffff8013b601>{__do_softirq+113} <ffffffff8013b6b5>{do_softirq+53}
       <ffffffff801107d7>{do_IRQ+71} <ffffffff8010e253>{ret_from_intr+0}
        <EOI> <ffffffff8016a639>{shrink_zone+1657} <ffffffff8016a639>{shrink_zone+1657}
       <ffffffff8016b90d>{try_to_free_pages+317} <ffffffff80163132>{__alloc_pages+610}
       <ffffffff80265860>{metapage_readpage+0} <ffffffff8017c183>{alloc_page_interleave+67}
       <ffffffff8015ec62>{read_cache_page+82} <ffffffff8026654f>{__get_metapage+271}
       <ffffffff8015d413>{unlock_page+35} <ffffffff80266898>{__get_metapage+1112}
       <ffffffff80131893>{__wake_up+67} <ffffffff8025e4c8>{dbUpdatePMap+376}
       <ffffffff80258165>{diUpdatePMap+805} <ffffffff8026abff>{txUpdateMap+495}
       <ffffffff8026c66f>{jfs_lazycommit+271} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff80130e20>{default_wake_function+0} <ffffffff80133050>{schedule_tail+64}
       <ffffffff8010e967>{child_rip+8} <ffffffff8026c560>{jfs_lazycommit+0}
       <ffffffff8010e95f>{child_rip+0} 
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:58
cpu 2 hot: low 62, high 186, batch 31 used:116
cpu 2 cold: low 0, high 62, batch 31 used:49
cpu 3 hot: low 62, high 186, batch 31 used:92
cpu 3 cold: low 0, high 62, batch 31 used:53
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:174
cpu 1 cold: low 0, high 62, batch 31 used:35
cpu 2 hot: low 62, high 186, batch 31 used:100
cpu 2 cold: low 0, high 62, batch 31 used:62
cpu 3 hot: low 62, high 186, batch 31 used:108
cpu 3 cold: low 0, high 62, batch 31 used:52
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:93
cpu 1 cold: low 0, high 62, batch 31 used:60
cpu 2 hot: low 62, high 186, batch 31 used:134
cpu 2 cold: low 0, high 62, batch 31 used:55
cpu 3 hot: low 62, high 186, batch 31 used:93
cpu 3 cold: low 0, high 62, batch 31 used:57
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:53
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:60
cpu 3 hot: low 62, high 186, batch 31 used:74
cpu 3 cold: low 0, high 62, batch 31 used:59
Node 0 HighMem per-cpu: empty

Free pages:       18804kB (0kB HighMem)
Active:100796 inactive:1465891 dirty:1489134 writeback:14099 unstable:0 free:4701 slab:76918 mapped:97830 pagetables:17102
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:448kB min:1492kB low:1864kB high:2236kB active:91740kB inactive:237500kB present:524284kB pages_scanned:279 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:512kB min:1492kB low:1864kB high:2236kB active:113072kB inactive:111872kB present:524284kB pages_scanned:19514 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:548kB min:1492kB low:1864kB high:2236kB active:42248kB inactive:208684kB present:524284kB pages_scanned:10411 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:56 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6668kB min:17896kB low:22368kB high:26844kB active:156124kB inactive:5305508kB present:6275068kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 448kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 1*4kB 0*8kB 0*16kB 1*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 548kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 1*4kB 1*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 1*4096kB = 6668kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
dd: page allocation failure. order:0, mode:0x20

Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
       <ffffffff802f7b98>{elv_next_request+72} <ffffffff80341c1d>{scsi_request_fn+221}
       <ffffffff802fb61d>{blk_run_queue+61} <ffffffff803420ff>{scsi_end_request+223}
       <ffffffff80342355>{scsi_io_completion+565} <ffffffff8034ada1>{sd_rw_intr+609}
       <ffffffff8033c696>{scsi_finish_command+166} <ffffffff8033c79b>{scsi_softirq+235}
       <ffffffff8013b601>{__do_softirq+113} <ffffffff8013b6b5>{do_softirq+53}
       <ffffffff8010e615>{apic_timer_interrupt+133}  <EOI> <ffffffff802fc28c>{__make_request+1324}
       <ffffffff802fc28c>{__make_request+1324} <ffffffff8024fb10>{jfs_get_block+0}
       <ffffffff802fa67b>{generic_make_request+555} <ffffffff8014c680>{autoremove_wake_function+0}
       <ffffffff8014c680>{autoremove_wake_function+0} <ffffffff803e9e75>{__wait_on_bit_lock+101}
       <ffffffff8024fb10>{jfs_get_block+0} <ffffffff802fa77f>{submit_bio+223}
       <ffffffff8012e6b3>{__wake_up_common+67} <ffffffff80163b34>{get_writeback_state+52}
       <ffffffff8024fb10>{jfs_get_block+0} <ffffffff801a87f2>{mpage_bio_submit+34}
       <ffffffff801a8d89>{mpage_writepage+73} <ffffffff80186fc0>{nobh_writepage+192}
       <ffffffff8016aa60>{shrink_zone+2720} <ffffffff801652aa>{kmem_freepages+298}
       <ffffffff80165f32>{free_block+338} <ffffffff80131893>{__wake_up+67}
       <ffffffff8016b90d>{try_to_free_pages+317} <ffffffff80163132>{__alloc_pages+610}
       <ffffffff801a85a4>{__mark_inode_dirty+52} <ffffffff8015de7c>{generic_file_buffered_write+412}
       <ffffffff8013a965>{current_fs_time+85} <ffffffff80162e63>{buffered_rmqueue+723}
       <ffffffff8019d9ae>{inode_update_time+62} <ffffffff8015e63a>{__generic_file_aio_write_nolock+938}
       <ffffffff8016d67c>{do_no_page+860} <ffffffff8015e81e>{__generic_file_write_nolock+158}
       <ffffffff80170ade>{zeromap_page_range+990} <ffffffff8014c680>{autoremove_wake_function+0}
       <ffffffff80293351>{__up_read+33} <ffffffff8015e985>{generic_file_write+101}
       <ffffffff801830a9>{vfs_write+233} <ffffffff80183253>{sys_write+83}
       <ffffffff8010dc8e>{system_call+126} 
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:58
cpu 2 hot: low 62, high 186, batch 31 used:116
cpu 2 cold: low 0, high 62, batch 31 used:49
cpu 3 hot: low 62, high 186, batch 31 used:92
cpu 3 cold: low 0, high 62, batch 31 used:53
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:174
cpu 1 cold: low 0, high 62, batch 31 used:35
cpu 2 hot: low 62, high 186, batch 31 used:100
cpu 2 cold: low 0, high 62, batch 31 used:62
cpu 3 hot: low 62, high 186, batch 31 used:108
cpu 3 cold: low 0, high 62, batch 31 used:52
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:93
cpu 1 cold: low 0, high 62, batch 31 used:60
cpu 2 hot: low 62, high 186, batch 31 used:134
cpu 2 cold: low 0, high 62, batch 31 used:55
cpu 3 hot: low 62, high 186, batch 31 used:93
cpu 3 cold: low 0, high 62, batch 31 used:57
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:53
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:60
cpu 3 hot: low 62, high 186, batch 31 used:74
cpu 3 cold: low 0, high 62, batch 31 used:59
Node 0 HighMem per-cpu: empty

Free pages:       18804kB (0kB HighMem)
Active:100796 inactive:1465891 dirty:1489134 writeback:14099 unstable:0 free:4701 slab:76918 mapped:97830 pagetables:17102
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:448kB min:1492kB low:1864kB high:2236kB active:91740kB inactive:237500kB present:524284kB pages_scanned:279 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:512kB min:1492kB low:1864kB high:2236kB active:113072kB inactive:111872kB present:524284kB pages_scanned:19514 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:548kB min:1492kB low:1864kB high:2236kB active:42248kB inactive:208684kB present:524284kB pages_scanned:10411 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:56 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6668kB min:17896kB low:22368kB high:26844kB active:156124kB inactive:5305508kB present:6275068kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 448kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 1*4kB 0*8kB 0*16kB 1*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 548kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 1*4kB 1*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 1*4096kB = 6668kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
716550 pages shared
1 pages swap cached
1966076 pages of RAM
163061 reserved pages
716484 pages shared
1 pages swap cached
ksoftirqd/2: page allocation failure. order:0, mode:0x20

Call Trace:<ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
       <ffffffff802f7b98>{elv_next_request+72} <ffffffff80341c1d>{scsi_request_fn+221}
       <ffffffff802fb61d>{blk_run_queue+61} <ffffffff803420ff>{scsi_end_request+223}
       <ffffffff80342355>{scsi_io_completion+565} <ffffffff8034ada1>{sd_rw_intr+609}
       <ffffffff8033c696>{scsi_finish_command+166} <ffffffff8033c79b>{scsi_softirq+235}
       <ffffffff8013b601>{__do_softirq+113} <ffffffff8013b710>{ksoftirqd+0}
       <ffffffff8013b6b5>{do_softirq+53} <ffffffff8013b77a>{ksoftirqd+106}
       <ffffffff8013b710>{ksoftirqd+0} <ffffffff8014c4ab>{kthread+219}
       <ffffffff8010e967>{child_rip+8} <ffffffff8014c3d0>{kthread+0}
       <ffffffff8010e95f>{child_rip+0} 
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:123
cpu 1 cold: low 0, high 62, batch 31 used:42
cpu 2 hot: low 62, high 186, batch 31 used:107
cpu 2 cold: low 0, high 62, batch 31 used:45
cpu 3 hot: low 62, high 186, batch 31 used:181
cpu 3 cold: low 0, high 62, batch 31 used:55
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:105
cpu 1 cold: low 0, high 62, batch 31 used:11
cpu 2 hot: low 62, high 186, batch 31 used:116
cpu 2 cold: low 0, high 62, batch 31 used:54
cpu 3 hot: low 62, high 186, batch 31 used:167
cpu 3 cold: low 0, high 62, batch 31 used:51
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:132
cpu 1 cold: low 0, high 62, batch 31 used:44
cpu 2 hot: low 62, high 186, batch 31 used:126
cpu 2 cold: low 0, high 62, batch 31 used:39
cpu 3 hot: low 62, high 186, batch 31 used:93
cpu 3 cold: low 0, high 62, batch 31 used:60
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:45
cpu 2 hot: low 62, high 186, batch 31 used:112
cpu 2 cold: low 0, high 62, batch 31 used:52
cpu 3 hot: low 62, high 186, batch 31 used:88
cpu 3 cold: low 0, high 62, batch 31 used:46
Node 0 HighMem per-cpu: empty

Free pages:       18776kB (0kB HighMem)
Active:99193 inactive:1460277 dirty:1385806 writeback:113713 unstable:0 free:4694 slab:83714 mapped:97380 pagetables:17003
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:560kB min:1492kB low:1864kB high:2236kB active:90144kB inactive:237272kB present:524284kB pages_scanned:761 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:492kB min:1492kB low:1864kB high:2236kB active:113136kB inactive:111312kB present:524284kB pages_scanned:9864 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:444kB min:1492kB low:1864kB high:2236kB active:36192kB inactive:212236kB present:524284kB pages_scanned:5746 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:62 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6652kB min:17896kB low:22368kB high:26844kB active:157300kB inactive:5280288kB present:6275068kB pages_scanned:2579 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 0*4kB 0*8kB 3*16kB 2*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 560kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 492kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 1*4kB 1*8kB 1*16kB 1*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 444kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 1*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 6652kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
711737 pages shared
1 pages swap cached
kblockd/3: page allocation failure. order:0, mode:0x20

Call Trace:<ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff802f84e0>{blk_unplug_work+0}
       <ffffffff80166e86>{kmem_cache_alloc+54} <ffffffff8033d021>{scsi_get_command+81}
       <ffffffff8034181d>{scsi_prep_fn+301} <ffffffff802f7b98>{elv_next_request+72}
       <ffffffff80341c1d>{scsi_request_fn+221} <ffffffff802fb680>{__generic_unplug_device+32}
       <ffffffff802fbba8>{generic_unplug_device+24} <ffffffff802f84ea>{blk_unplug_work+10}
       <ffffffff80147b1c>{worker_thread+476} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8012e6b3>{__wake_up_common+67} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff80147940>{worker_thread+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff8014c4ab>{kthread+219}
       <ffffffff8010e967>{child_rip+8} <ffffffff8014c350>{keventd_create_kthread+0}
       <ffffffff8014c3d0>{kthread+0} <ffffffff8010e95f>{child_rip+0}
       
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:75
cpu 1 cold: low 0, high 62, batch 31 used:34
cpu 2 hot: low 62, high 186, batch 31 used:120
cpu 2 cold: low 0, high 62, batch 31 used:35
cpu 3 hot: low 62, high 186, batch 31 used:143
cpu 3 cold: low 0, high 62, batch 31 used:51
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:45
cpu 2 hot: low 62, high 186, batch 31 used:123
cpu 2 cold: low 0, high 62, batch 31 used:55
cpu 3 hot: low 62, high 186, batch 31 used:60
cpu 3 cold: low 0, high 62, batch 31 used:0
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:46
cpu 2 hot: low 62, high 186, batch 31 used:105
cpu 2 cold: low 0, high 62, batch 31 used:39
cpu 3 hot: low 62, high 186, batch 31 used:64
cpu 3 cold: low 0, high 62, batch 31 used:51
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:60
cpu 2 hot: low 62, high 186, batch 31 used:145
cpu 2 cold: low 0, high 62, batch 31 used:36
cpu 3 hot: low 62, high 186, batch 31 used:63
cpu 3 cold: low 0, high 62, batch 31 used:19
Node 0 HighMem per-cpu: empty

Free pages:       18788kB (0kB HighMem)
Active:98467 inactive:1458392 dirty:1377983 writeback:120734 unstable:0 free:4697 slab:86492 mapped:97190 pagetables:16961
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:492kB min:1492kB low:1864kB high:2236kB active:88628kB inactive:236836kB present:524284kB pages_scanned:66 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:468kB min:1492kB low:1864kB high:2236kB active:112180kB inactive:117068kB present:524284kB pages_scanned:2273 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:504kB min:1492kB low:1864kB high:2236kB active:35632kB inactive:208708kB present:524284kB pages_scanned:103 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:66 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6696kB min:17896kB low:22368kB high:26844kB active:157428kB inactive:5270956kB present:6275068kB pages_scanned:108 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 1*4kB 1*8kB 0*16kB 3*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 492kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 1*4kB 0*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 468kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 0*4kB 1*8kB 5*16kB 1*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 6696kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
710953 pages shared
1 pages swap cached
kblockd/3: page allocation failure. order:0, mode:0x20

Call Trace:<ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff802f84e0>{blk_unplug_work+0}
       <ffffffff80166e86>{kmem_cache_alloc+54} <ffffffff8033d021>{scsi_get_command+81}
       <ffffffff8034181d>{scsi_prep_fn+301} <ffffffff802f7b98>{elv_next_request+72}
       <ffffffff80341c1d>{scsi_request_fn+221} <ffffffff802fb680>{__generic_unplug_device+32}
       <ffffffff802fbba8>{generic_unplug_device+24} <ffffffff802f84ea>{blk_unplug_work+10}
       <ffffffff80147b1c>{worker_thread+476} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8012e6b3>{__wake_up_common+67} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff80147940>{worker_thread+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff8014c4ab>{kthread+219}
       <ffffffff8010e967>{child_rip+8} <ffffffff8014c350>{keventd_create_kthread+0}
       <ffffffff8014c3d0>{kthread+0} <ffffffff8010e95f>{child_rip+0}
       
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:37
cpu 2 hot: low 62, high 186, batch 31 used:77
cpu 2 cold: low 0, high 62, batch 31 used:33
cpu 3 hot: low 62, high 186, batch 31 used:141
cpu 3 cold: low 0, high 62, batch 31 used:49
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:83
cpu 1 cold: low 0, high 62, batch 31 used:61
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:49
cpu 3 hot: low 62, high 186, batch 31 used:89
cpu 3 cold: low 0, high 62, batch 31 used:1
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:62
cpu 1 cold: low 0, high 62, batch 31 used:52
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:62
cpu 3 hot: low 62, high 186, batch 31 used:66
cpu 3 cold: low 0, high 62, batch 31 used:37
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:92
cpu 1 cold: low 0, high 62, batch 31 used:46
cpu 2 hot: low 62, high 186, batch 31 used:64
cpu 2 cold: low 0, high 62, batch 31 used:32
cpu 3 hot: low 62, high 186, batch 31 used:88
cpu 3 cold: low 0, high 62, batch 31 used:56
Node 0 HighMem per-cpu: empty

Free pages:       18776kB (0kB HighMem)
Active:98101 inactive:1451858 dirty:1348391 writeback:148762 unstable:0 free:4694 slab:93485 mapped:96813 pagetables:16877
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:480kB min:1492kB low:1864kB high:2236kB active:87124kB inactive:236748kB present:524284kB pages_scanned:70 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:468kB min:1492kB low:1864kB high:2236kB active:112120kB inactive:109040kB present:524284kB pages_scanned:486 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:504kB min:1492kB low:1864kB high:2236kB active:35648kB inactive:204080kB present:524284kB pages_scanned:69 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:68 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6696kB min:17896kB low:22368kB high:26844kB active:157512kB inactive:5257564kB present:6275068kB pages_scanned:198 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 0*4kB 0*8kB 2*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 480kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 1*4kB 0*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 468kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 0*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 0*4kB 1*8kB 0*16kB 1*32kB 2*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 6696kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
709603 pages shared
1 pages swap cached
kblockd/3: page allocation failure. order:0, mode:0x20

Call Trace:<ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff802f84e0>{blk_unplug_work+0}
       <ffffffff80166e86>{kmem_cache_alloc+54} <ffffffff8033d021>{scsi_get_command+81}
       <ffffffff8034181d>{scsi_prep_fn+301} <ffffffff802f7b98>{elv_next_request+72}
       <ffffffff80341c1d>{scsi_request_fn+221} <ffffffff802fb680>{__generic_unplug_device+32}
       <ffffffff802fbba8>{generic_unplug_device+24} <ffffffff802f84ea>{blk_unplug_work+10}
       <ffffffff80147b1c>{worker_thread+476} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8012e6b3>{__wake_up_common+67} <ffffffff80130e20>{default_wake_function+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff80147940>{worker_thread+0}
       <ffffffff8014c350>{keventd_create_kthread+0} <ffffffff8014c4ab>{kthread+219}
       <ffffffff8010e967>{child_rip+8} <ffffffff8014c350>{keventd_create_kthread+0}
       <ffffffff8014c3d0>{kthread+0} <ffffffff8010e95f>{child_rip+0}
       
Mem-info:
Node 3 DMA per-cpu: empty
Node 3 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:41
cpu 1 hot: low 62, high 186, batch 31 used:88
cpu 1 cold: low 0, high 62, batch 31 used:44
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:33
cpu 3 hot: low 62, high 186, batch 31 used:120
cpu 3 cold: low 0, high 62, batch 31 used:50
Node 3 HighMem per-cpu: empty
Node 2 DMA per-cpu: empty
Node 2 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:155
cpu 0 cold: low 0, high 62, batch 31 used:44
cpu 1 hot: low 62, high 186, batch 31 used:140
cpu 1 cold: low 0, high 62, batch 31 used:45
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:33
cpu 3 hot: low 62, high 186, batch 31 used:84
cpu 3 cold: low 0, high 62, batch 31 used:54
Node 2 HighMem per-cpu: empty
Node 1 DMA per-cpu: empty
Node 1 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:157
cpu 0 cold: low 0, high 62, batch 31 used:60
cpu 1 hot: low 62, high 186, batch 31 used:64
cpu 1 cold: low 0, high 62, batch 31 used:47
cpu 2 hot: low 62, high 186, batch 31 used:90
cpu 2 cold: low 0, high 62, batch 31 used:52
cpu 3 hot: low 62, high 186, batch 31 used:92
cpu 3 cold: low 0, high 62, batch 31 used:40
Node 1 HighMem per-cpu: empty
Node 0 DMA per-cpu:
cpu 0 hot: low 2, high 6, batch 1 used:6
cpu 0 cold: low 0, high 2, batch 1 used:0
cpu 1 hot: low 2, high 6, batch 1 used:0
cpu 1 cold: low 0, high 2, batch 1 used:0
cpu 2 hot: low 2, high 6, batch 1 used:0
cpu 2 cold: low 0, high 2, batch 1 used:0
cpu 3 hot: low 2, high 6, batch 1 used:0
cpu 3 cold: low 0, high 2, batch 1 used:0
Node 0 Normal per-cpu:
cpu 0 hot: low 62, high 186, batch 31 used:70
cpu 0 cold: low 0, high 62, batch 31 used:54
cpu 1 hot: low 62, high 186, batch 31 used:65
cpu 1 cold: low 0, high 62, batch 31 used:55
cpu 2 hot: low 62, high 186, batch 31 used:92
cpu 2 cold: low 0, high 62, batch 31 used:52
cpu 3 hot: low 62, high 186, batch 31 used:62
cpu 3 cold: low 0, high 62, batch 31 used:61
Node 0 HighMem per-cpu: empty

Free pages:       18824kB (0kB HighMem)
Active:97960 inactive:1449604 dirty:1341857 writeback:152751 unstable:0 free:4706 slab:95975 mapped:96651 pagetables:16841
Node 3 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 3 Normal free:528kB min:1492kB low:1864kB high:2236kB active:86472kB inactive:236064kB present:524284kB pages_scanned:33 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 2 Normal free:468kB min:1492kB low:1864kB high:2236kB active:112120kB inactive:111072kB present:524284kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 2 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 511 511
Node 1 Normal free:504kB min:1492kB low:1864kB high:2236kB active:35644kB inactive:203524kB present:524284kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 1 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 DMA free:10628kB min:44kB low:52kB high:64kB active:0kB inactive:0kB present:16384kB pages_scanned:70 all_unreclaimable? yes
lowmem_reserve[]: 0 6127 6127
Node 0 Normal free:6696kB min:17896kB low:22368kB high:26844kB active:157604kB inactive:5247756kB present:6275068kB pages_scanned:1144 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 0 HighMem free:0kB min:128kB low:160kB high:192kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
Node 3 DMA: empty
Node 3 Normal: 0*4kB 0*8kB 1*16kB 0*32kB 2*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 528kB
Node 3 HighMem: empty
Node 2 DMA: empty
Node 2 Normal: 1*4kB 0*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 468kB
Node 2 HighMem: empty
Node 1 DMA: empty
Node 1 Normal: 0*4kB 1*8kB 1*16kB 1*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 504kB
Node 1 HighMem: empty
Node 0 DMA: 5*4kB 4*8kB 5*16kB 4*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 2*4096kB = 10628kB
Node 0 Normal: 0*4kB 1*8kB 0*16kB 1*32kB 0*64kB 2*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 6696kB
Node 0 HighMem: empty
Swap cache: add 1, delete 0, find 0/0, race 0+0
Free swap  = 1048780kB
Total swap = 1048784kB
Free swap:       1048780kB
1966076 pages of RAM
163061 reserved pages
708003 pages shared
1 pages swap cached


* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 18:30 ` Nick Piggin
@ 2005-06-15 18:30   ` Badari Pulavarty
  2005-06-15 19:02     ` Nick Piggin
  2005-06-15 23:23   ` Dave Chinner
  1 sibling, 1 reply; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-15 18:30 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Kernel Mailing List, linux-mm

On Wed, 2005-06-15 at 11:30, Nick Piggin wrote:
> Badari Pulavarty wrote:
> 
> > ------------------------------------------------------------------------
> > 
> > elm3b29 login: dd: page allocation failure. order:0, mode:0x20
> > 
> > Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
> >        <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
> >        <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
> 
> They look like they're all in scsi_get_command.
> I would consider masking off __GFP_HIGH in the gfp_mask of that
> function, and setting __GFP_NOWARN. It looks like it has a mempoolish
> thingy in there, so perhaps it shouldn't delve so far into reserves.

Do you want me to take off GFP_HIGH, or just set GFP_NOWARN along
with GFP_HIGH?

- Badari



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 17:36 2.6.12-rc6-mm1 & 2K lun testing Badari Pulavarty
@ 2005-06-15 18:30 ` Nick Piggin
  2005-06-15 18:30   ` Badari Pulavarty
  2005-06-15 23:23   ` Dave Chinner
  2005-06-15 21:39 ` Chen, Kenneth W
  2005-06-16  7:24 ` Andrew Morton
  2 siblings, 2 replies; 25+ messages in thread
From: Nick Piggin @ 2005-06-15 18:30 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Linux Kernel Mailing List, linux-mm

Badari Pulavarty wrote:

> ------------------------------------------------------------------------
> 
> elm3b29 login: dd: page allocation failure. order:0, mode:0x20
> 
> Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
>        <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
>        <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}

They look like they're all in scsi_get_command.
I would consider masking off __GFP_HIGH in the gfp_mask of that
function, and setting __GFP_NOWARN. It looks like it has a mempoolish
thingy in there, so perhaps it shouldn't delve so far into reserves.
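
In code terms, something like the below in scsi_get_command() - a
sketch from memory, not the actual 2.6.12 source; "pool->slab" stands
in for whatever the command slab is called there. (If I read the
flags right, mode:0x20 in your traces is GFP_ATOMIC, i.e. __GFP_HIGH.)

	gfp_mask &= ~__GFP_HIGH;   /* don't dip into the emergency reserves */
	gfp_mask |= __GFP_NOWARN;  /* failures are handled; don't spam the log */
	cmd = kmem_cache_alloc(pool->slab, gfp_mask);   /* the existing slab alloc */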

-- 
SUSE Labs, Novell Inc.



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 18:30   ` Badari Pulavarty
@ 2005-06-15 19:02     ` Nick Piggin
  2005-06-15 20:56       ` Badari Pulavarty
  0 siblings, 1 reply; 25+ messages in thread
From: Nick Piggin @ 2005-06-15 19:02 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Linux Kernel Mailing List, linux-mm

Badari Pulavarty wrote:
> On Wed, 2005-06-15 at 11:30, Nick Piggin wrote:
> 
>>Badari Pulavarty wrote:
>>
>>
>>>------------------------------------------------------------------------
>>>
>>>elm3b29 login: dd: page allocation failure. order:0, mode:0x20
>>>
>>>Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
>>>       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
>>>       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
>>
>>They look like they're all in scsi_get_command.
>>I would consider masking off __GFP_HIGH in the gfp_mask of that
>>function, and setting __GFP_NOWARN. It looks like it has a mempoolish
>>thingy in there, so perhaps it shouldn't delve so far into reserves.
> 
> 
> You want me to take off GFP_HIGH? Or just set GFP_NOWARN with GFP_HIGH?
> 

Yeah, take off GFP_HIGH and set GFP_NOWARN (always). I would be
interested to see how that goes.

Obviously it won't eliminate your failures there (it will probably
produce more of them), however it might keep the scsi command
allocation from overwhelming the system.

Thanks,
Nick

-- 
SUSE Labs, Novell Inc.



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 19:02     ` Nick Piggin
@ 2005-06-15 20:56       ` Badari Pulavarty
  2005-06-16  1:48         ` Nick Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-15 20:56 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Linux Kernel Mailing List, linux-mm

On Wed, 2005-06-15 at 12:02, Nick Piggin wrote:
> Badari Pulavarty wrote:
> > On Wed, 2005-06-15 at 11:30, Nick Piggin wrote:
> > 
> >>Badari Pulavarty wrote:
> >>
> >>
> >>>------------------------------------------------------------------------
> >>>
> >>>elm3b29 login: dd: page allocation failure. order:0, mode:0x20
> >>>
> >>>Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
> >>>       <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
> >>>       <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
> >>
> >>They look like they're all in scsi_get_command.
> >>I would consider masking off __GFP_HIGH in the gfp_mask of that
> >>function, and setting __GFP_NOWARN. It looks like it has a mempoolish
> >>thingy in there, so perhaps it shouldn't delve so far into reserves.
> > 
> > 
> > You want me to take off GFP_HIGH? Or just set GFP_NOWARN with GFP_HIGH?
> > 
> 
> Yeah, take off GFP_HIGH and set GFP_NOWARN (always). I would be
> interested to see how that goes.
> 
> Obviously it won't eliminate your failures there (it will probably
> produce more of them), however it might keep the scsi command
> allocation from overwhelming the system.

Hmm.. it seems to help a little. The IO rate is not great (compared to
90MB/sec with "raw") - but the machine is making progress. Again,
though, it's pretty unresponsive.

Thanks,
Badari

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
131 254  34896  31328   2540 4982740    0    0    29 101877 1086 11220  0 100  0  0
149 268  34896  32824   2536 4983712   13    0    42 39505  439  4454  0 100  0  0
135 254  34896  31112   2536 4984768   11    0    20 36233  373  4078  0 100  0  0
130 242  34896  32600   2536 4987364    6    0   161 33626  377  3957  0 100  0  0
153 263  34896  32592   2532 4993560    0    0    14 37124  385  4468  0 100  0  0
144 236  34896  32668   2548 5013148    6    0   154 220366 2360 27530  0 100  0  0
112 243  34896  34636   2544 5011112    5    0    62 79160  850 10540  0 100  0  0
103 234  34896  31980   2544 5014744    0    0   135 33814  363  4511  0 100  0  0
112 230  34896  32204   2552 5012156    0    0   140 33200  378  4812  0 100  0  0
139 212  34896  32832   2528 5020928   31    0   542 142834 1536 18007  0 100  0  0
144 215  34896  32896   2528 5019872   17    0    74 41957  449  4781  0 100  0  0
184 252  34896  33252   2504 5024564    0    0    19 34506  374  4616  0 100  0  0
141 240  34896  31624   2516 5026616    0    0   153 31896  378  4904  0 100  0  0




* RE: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 17:36 2.6.12-rc6-mm1 & 2K lun testing Badari Pulavarty
  2005-06-15 18:30 ` Nick Piggin
@ 2005-06-15 21:39 ` Chen, Kenneth W
  2005-06-15 22:35   ` Badari Pulavarty
  2005-06-16  7:24 ` Andrew Morton
  2 siblings, 1 reply; 25+ messages in thread
From: Chen, Kenneth W @ 2005-06-15 21:39 UTC (permalink / raw)
  To: 'Badari Pulavarty', Linux Kernel Mailing List, linux-mm

Badari Pulavarty wrote on Wednesday, June 15, 2005 10:36 AM
> I sniff tested 2K lun support with 2.6.12-rc6-mm1 on
> my AMD64 box. I had to tweak qlogic driver and
> scsi_scan.c to see all the luns.
> 
> (2.6.12-rc6 doesn't see all the LUNS due to max_lun
> issue - which is fixed in scsi-git tree).
> 
> Test 1:
> 	run dds on all 2048 "raw" devices - worked
> great. No issues.

Just curious, how many physical disks do you have for this test?



* RE: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 21:39 ` Chen, Kenneth W
@ 2005-06-15 22:35   ` Badari Pulavarty
  0 siblings, 0 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-15 22:35 UTC (permalink / raw)
  To: Chen, Kenneth W; +Cc: Linux Kernel Mailing List, linux-mm

On Wed, 2005-06-15 at 14:39, Chen, Kenneth W wrote:
> Badari Pulavarty wrote on Wednesday, June 15, 2005 10:36 AM
> > I sniff tested 2K lun support with 2.6.12-rc6-mm1 on
> > my AMD64 box. I had to tweak qlogic driver and
> > scsi_scan.c to see all the luns.
> > 
> > (2.6.12-rc6 doesn't see all the LUNS due to max_lun
> > issue - which is fixed in scsi-git tree).
> > 
> > Test 1:
> > 	run dds on all 2048 "raw" devices - worked
> > great. No issues.
> 
> Just curious, how many physical disks do you have for this test?
> 

The 2048 luns are created using a NetApp FAS 270C, which has 28 drives.
I am accessing the luns over Fibre Channel.


Thanks,
Badari



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 18:30 ` Nick Piggin
  2005-06-15 18:30   ` Badari Pulavarty
@ 2005-06-15 23:23   ` Dave Chinner
  1 sibling, 0 replies; 25+ messages in thread
From: Dave Chinner @ 2005-06-15 23:23 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Badari Pulavarty, Linux Kernel Mailing List, linux-mm

On Thu, Jun 16, 2005 at 04:30:25AM +1000, Nick Piggin wrote:
> Badari Pulavarty wrote:
> 
> > ------------------------------------------------------------------------
> > 
> > elm3b29 login: dd: page allocation failure. order:0, mode:0x20
> > 
> > Call Trace: <IRQ> <ffffffff801632ae>{__alloc_pages+990} <ffffffff801668da>{cache_grow+314}
> >        <ffffffff80166d7f>{cache_alloc_refill+543} <ffffffff80166e86>{kmem_cache_alloc+54}
> >        <ffffffff8033d021>{scsi_get_command+81} <ffffffff8034181d>{scsi_prep_fn+301}
> 
> They look like they're all in scsi_get_command.

I've seen this before on a system with lots of luns, lots of memory
and lots of dd write I/O. Basically, all the memory being flushed
was being pushed into the elevator queues before block congestion
was triggered (58GiB of RAM in the elevator queues waiting for I/O
to be done on them!). This caused OOM problems when things like slab
allocations were necessary, and the above was a common location for
failures.

If you've got command tag queueing turned on, then you need a
scsi command structure for every I/O in flight. Assuming the default
depth for Linux (32 IIRC) - that's 2048 x 32 = 64k request structures.
Hence you're doing a few allocations here.
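
Back of the envelope (the ~500 bytes per command structure is just a
rough assumption):

	echo $((2048 * 32))                     # 65536 commands in flight
	echo $((2048 * 32 * 500 / 1024 / 1024)) # ~31MB of command slab alone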

However, when you are oversubscribing the system like this, you can run
out of memory by the time you get to the SCSI layer because there are
so many block devices that none of them get enough I/O queued to
trigger congestion and throttle the incoming writers.

You can work around (WAR) this by reducing /sys/block/*/queue/nr_requests
to a small number (say 4 or 8). This should cause the system to throttle
writers at /proc/sys/vm/dirty_ratio percent of memory dirtied and
prevent these failures. The system responsiveness should be far
better as well.
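
Something like this, assuming all the test luns show up as sd*
block devices:

	for q in /sys/block/sd*/queue/nr_requests; do
		echo 4 > $q
	done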

HTH.

Cheers,

Dave.
-- 
Dave Chinner
R&D Software Engineer
SGI Australian Software Group


* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 20:56       ` Badari Pulavarty
@ 2005-06-16  1:48         ` Nick Piggin
  0 siblings, 0 replies; 25+ messages in thread
From: Nick Piggin @ 2005-06-16  1:48 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Linux Kernel Mailing List, linux-mm

Badari Pulavarty wrote:
> On Wed, 2005-06-15 at 12:02, Nick Piggin wrote:
> 

>>Yeah, take off GFP_HIGH and set GFP_NOWARN (always). I would be
>>interested to see how that goes.
>>
>>Obviously it won't eliminate your failures there (it will probably
>>produce more of them), however it might help the scsi command
>>allocation from overwhelming the system.
> 
> 
> Hmm.. it seems to help a little. The IO rate is not great (compared to
> 90MB/sec with "raw") - but the machine is making progress. Again,
> though, it's pretty unresponsive.
> 

Is there anything measurable that we can use to maybe get the change
picked up and tested in -mm for a while?

> Thanks,
> Badari
> 
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
> 131 254  34896  31328   2540 4982740    0    0    29 101877 1086 11220  0 100  0  0
> 149 268  34896  32824   2536 4983712   13    0    42 39505  439  4454  0 100  0  0
> 135 254  34896  31112   2536 4984768   11    0    20 36233  373  4078  0 100  0  0
> 130 242  34896  32600   2536 4987364    6    0   161 33626  377  3957  0 100  0  0
> 153 263  34896  32592   2532 4993560    0    0    14 37124  385  4468  0 100  0  0
> 144 236  34896  32668   2548 5013148    6    0   154 220366 2360 27530  0 100  0  0

Though it can be difficult to judge performance based on vmstat
when you get these large spikes. vmstat is measuring requests
into the elevator so you see batching and throttling effects. I
would expect requests completing to be more even... your entire
vmstat listing looks like it is averaging about 60-70MB/s - does
this agree with your measurements?

Finally, do you see anything interesting on the profiles?

-- 
SUSE Labs, Novell Inc.



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-15 17:36 2.6.12-rc6-mm1 & 2K lun testing Badari Pulavarty
  2005-06-15 18:30 ` Nick Piggin
  2005-06-15 21:39 ` Chen, Kenneth W
@ 2005-06-16  7:24 ` Andrew Morton
  2005-06-16 19:50   ` Badari Pulavarty
  2 siblings, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2005-06-16  7:24 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: linux-kernel, linux-mm

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> I sniff tested 2K lun support with 2.6.12-rc6-mm1 on
>  my AMD64 box. I had to tweak qlogic driver and
>  scsi_scan.c to see all the luns.
> 
>  (2.6.12-rc6 doesn't see all the LUNS due to max_lun
>  issue - which is fixed in scsi-git tree).
> 
>  Test 1:
>  	run dds on all 2048 "raw" devices - worked
>  great. No issues.
> 
>  Tests 2: 
>  	run "dds" on 2048 filesystems (one file
>  per filesystem). Kind of works. I was expecting better
>  responsiveness & stability.
> 
> 
>  Overall - Good news is, it works. 
> 
>  Not so good news - with filesystem tests, machine becomes 
>  unresponsive, lots of page allocation failures but machine 
>  stays up and completes the tests and recovers.

Any chance of getting a peek at /proc/slabinfo?

Presumably increasing /proc/sys/vm/min_free_kbytes will help.
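
For example, to reserve 64MB (the value is in kilobytes; 64MB is only a
guess at a starting point for a 7GB box):

	echo 65536 > /proc/sys/vm/min_free_kbytes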

We seem to be always ooming when allocating scsi command structures. 
Perhaps the block-level request structures are being allocated with
__GFP_WAIT, but it's a bit odd.  Which I/O scheduler?  If cfq, does
reducing /sys/block/*/queue/nr_requests help?



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16  7:24 ` Andrew Morton
@ 2005-06-16 19:50   ` Badari Pulavarty
  2005-06-16 20:37     ` Andrew Morton
  2005-06-16 22:42     ` 2.6.12-rc6-mm1 & 2K lun testing William Lee Irwin III
  0 siblings, 2 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-16 19:50 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel Mailing List, linux-mm

[-- Attachment #1: Type: text/plain, Size: 1536 bytes --]

On Thu, 2005-06-16 at 00:24, Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> >
> > I sniff tested 2K lun support with 2.6.12-rc6-mm1 on
> >  my AMD64 box. I had to tweak qlogic driver and
> >  scsi_scan.c to see all the luns.
> > 
> >  (2.6.12-rc6 doesn't see all the LUNS due to max_lun
> >  issue - which is fixed in scsi-git tree).
> > 
> >  Test 1:
> >  	run dds on all 2048 "raw" devices - worked
> >  great. No issues.
> > 
> >  Tests 2: 
> >  	run "dds" on 2048 filesystems (one file
> >  per filesystem). Kind of works. I was expecting better
> >  responsiveness & stability.
> > 
> > 
> >  Overall - Good news is, it works. 
> > 
> >  Not so good news - with filesystem tests, machine becomes 
> >  unresponsive, lots of page allocation failures but machine 
> >  stays up and completes the tests and recovers.
> 
> Any chance of getting a peek at /proc/slabinfo?
> 
> Presumably increasing /proc/sys/vm/min_free_kbytes will help.
> 
> We seem to be always ooming when allocating scsi command structures. 
> Perhaps the block-level request structures are being allocated with
> __GFP_WAIT, but it's a bit odd.  Which I/O scheduler?  If cfq, does
> reducing /sys/block/*/queue/nr_requests help?

Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
my devices. I also changed "min_free_kbytes" to 64M.

Response time is still bad. Here is the vmstat, meminfo, slabinfo
and profile output. I am not sure why the profile output shows
default_idle(), when vmstat shows 100% CPU sys.

Thanks
Badari



[-- Attachment #2: info --]
[-- Type: text/plain, Size: 32790 bytes --]

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
153 1926      4  76340   2512 6021272    0    0     3 17117  622  1611  0 100  0  0
103 1951      4  75772   2512 6023852    0    0    12 16905  635  1596  0 99  0  1
96 1799      4  75880   2508 6027468    0    0    18 15918  635  1564  0 99  0  1
58 1835      4  75856   2520 6030552    0    0   134 16289  642  1576  0 99  0  1
64 1949      4  76076   2520 6029520    0    0     6 17625  652  1680  0 100  0  0
51 1913      4  76500   2520 6027972    0    0    10 17326  643  1530  0 98  0  2
72 1917      4  76384   2520 6027456    0    0     4 17583  639  1473  0 97  0  3
95 1806      4  76028   2580 6028428    0    0     6 16613  638  1408  0 99  0  1
130 1748      4  75528   2560 6028448    0    0     8 17307  641  1402  0 100  0  0
154 1574      4  76172   2560 6028448    0    0     8 16299  629  1541  0 99  0  1
56 1991      4  75100   2540 6026920    0    0    46 38906 1419  3793  0 81  0 19
 9 1855      4  75900   2540 6026404    0    0    10 20220  776  2214  0 85  0 15
127 1924      4  76096   2540 6026920    0    0     4 17924  640  1719  0 100  0  0
27 1859      4  75936   2540 6027436    0    0     3 18290  629  1783  0 98  0  2
72 1910      4  75856   2568 6022764    0    0    77 17395  628  1677  0 100  0  0
116 1729      4  75784   2540 6025372    0    0    28 19086  697  2001  0 98  0  2
35 1986      4  75856   2556 6025872    0    0     6 16844  612  1796  0 99  0  1
90 1844      4  75632   2580 6026364    0    0    11 17891  630  1585  0 100  0  0
59 1964      4  75928   2568 6024828    0    0     7 17234  626  1656  0 98  0  2
88 1959      4  76140   2568 6026892    0    0     4 16050  617  1657  0 97  0  3
55 1956      4  75996   2564 6028444    0    0    24 16584  636  1840  0 99  0  1

elm3b29:/home/netapp # echo 2 > /proc/profile; sleep 15; readprofile -m /usr/src/linux-2.6.12-rc6/System.map | sort -nr +2 | head -30
 32043 default_idle                             667.5625
 17480 __wake_up_bit                            364.1667
 17436 unlock_page                              272.4375
843789 shrink_zone                              214.3773
  5936 lru_add_drain                             74.2000
 21214 rotate_reclaimable_page                   73.6597
  4466 page_waitqueue                            46.5208
 14714 page_referenced                           43.7917
 15628 release_pages                             34.8839
  3480 cond_resched                              31.0714
  1408 __mod_page_state                          29.3333
  6132 scsi_end_request                          23.9531
 13384 check_poison_obj                          23.2361
  6289 copy_user_generic                         21.1040
 22178 scsi_request_fn                           19.2517
  8692 kmem_cache_free                           15.5214
  1929 kmem_cache_alloc                          15.0703
  2922 __do_softirq                              12.1750
   299 __pagevec_release                          9.3438
  2042 __pagevec_lru_add                          9.1161
   115 obj_dbghead                                7.1875
   316 bio_get_nr_vecs                            4.9375
    76 blk_unplug_work                            4.7500
  1190 test_set_page_writeback                    3.9145
   283 end_page_writeback                         3.5375
   654 memset                                     3.4062
   430 __read_page_state                          3.3594
   549 scsi_queue_insert                          3.1193
    48 obj_reallen                                3.0000
  3971 __make_request                             2.8859

MemTotal:      7209056 kB
MemFree:         75972 kB
Buffers:          2568 kB
Cached:        6001608 kB
SwapCached:          0 kB
Active:         412260 kB
Inactive:      5675056 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      7209056 kB
LowFree:         75972 kB
SwapTotal:     1048784 kB
SwapFree:      1048780 kB
Dirty:         5896240 kB
Writeback:       81312 kB
Mapped:         410788 kB
Slab:           364192 kB
CommitLimit:   4653312 kB
Committed_AS:  2577920 kB
PageTables:      67528 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      9920 kB
VmallocChunk: 34359728439 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB
slabinfo - version: 2.1 (statistics)
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail> : globalstat <listallocs> <maxobjs> <grown> <reaped> <error> <maxfreeable> <nodeallocs> <remotefrees> : cpustat <allochit> <allocmiss> <freehit> <freemiss>
fib6_nodes             5     54     72   54    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      32     32     2    1 				   0    0    0    5 : cpustat      8      2      0      0
ip6_dst_cache          4     12    320   12    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      24     24     2    1 				   0    0    0    3 : cpustat     10      2      5      0
ndisc_cache            1     15    256   15    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      30     30     2    1 				   0    0    0    1 : cpustat      2      2      2      0
RAWv6                  6      8    904    4    1 : tunables   32   16    8 : slabdata      2      2      0 : globalstat       8      8     2    0 				   0    0    0    0 : cpustat      4      2      0      0
UDPv6                  0      0    880    9    2 : tunables   32   16    8 : slabdata      0      0      0 : globalstat      90     29    10   10 				   0    0    0    0 : cpustat     14     10     24      0
request_sock_TCPv6      0      0    152   26    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
TCPv6                 10     20   1536    5    2 : tunables   24   12    8 : slabdata      4      4      0 : globalstat      38     13     4    0 				   0    0    0    0 : cpustat      8      9      7      0
scsi_cmd_cache      1649   1704    488    8    1 : tunables   32   16    8 : slabdata    209    213      0 : globalstat  271823   2040 19045  331 				   0    0    0 267119 : cpustat 7820052  27928 7578980    279
qla2xxx_srbs        1580   1700    160   25    1 : tunables   32   16    8 : slabdata     67     68      0 : globalstat  258249   1648  4839  353 				   0    0    0 254141 : cpustat 15170681  21793 14933929   2869
ip_fib_alias          11    207     56   69    1 : tunables   32   16    8 : slabdata      3      3      0 : globalstat      64     37     3    0 				   0    0    0    0 : cpustat      7      4      0      0
ip_fib_hash           11    183     64   61    1 : tunables   32   16    8 : slabdata      3      3      0 : globalstat      64     37     3    0 				   0    0    0    0 : cpustat      7      4      0      0
dm_tio                 0      0     48   81    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
dm_io                  0      0     56   69    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
rpc_buffers            8      9   2072    3    2 : tunables   24   12    8 : slabdata      3      3      0 : globalstat       9      9     3    0 				   0    0    0    0 : cpustat      5      3      0      0
rpc_tasks              8     10    384   10    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      10     10     1    0 				   0    0    0    0 : cpustat      7      1      0      0
rpc_inode_cache        0      0    824    4    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
UNIX                  93    120    664    6    1 : tunables   32   16    8 : slabdata     20     20      0 : globalstat    5650   1530   269   32 				   0    1    0 4492 : cpustat  57136    750  53175    126
ip_mrt_cache           0      0    120   33    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
tcp_tw_bucket          0      0    200   20    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat      48     32     3    3 				   0    0    0    1 : cpustat      0      3      2      0
tcp_bind_bucket       11    276     56   69    1 : tunables   32   16    8 : slabdata      4      4      0 : globalstat     128     48     4    0 				   0    0    0    2 : cpustat      7      8      2      0
inet_peer_cache        1     45     88   45    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      16     16     1    0 				   0    0    0    0 : cpustat      0      1      0      0
secpath_cache          0      0    160   25    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
xfrm_dst_cache         0      0    376   10    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
ip_dst_cache          34     55    352   11    1 : tunables   32   16    8 : slabdata      5      5      0 : globalstat    1372     45    20   15 				   0    0    0  238 : cpustat     99    171      1      0
arp_cache              2     16    248   16    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      31     16     1    0 				   0    0    0    2 : cpustat      2      2      0      0
RAW                    5      5    728    5    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat       5      5     1    0 				   0    0    0    0 : cpustat      4      1      0      0
UDP                    8     20    736    5    1 : tunables   32   16    8 : slabdata      4      4      0 : globalstat      96     22    12    8 				   0    0    0    4 : cpustat     49     21     58      0
request_sock_TCP       0      0    104   38    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat      96     31     6    6 				   0    0    0    1 : cpustat      1      6      6      0
TCP                   12     25   1392    5    2 : tunables   24   12    8 : slabdata      5      5      0 : globalstat      63     20    10    5 				   0    0    0    3 : cpustat     19     14     18      0
flow_cache             0      0    136   29    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
cfq_ioc_pool           0      0    120   33    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
cfq_pool               0      0    176   22    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
crq_pool               0      0    112   35    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
deadline_drq           0      0    120   33    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
as_arq             20788  24354    144   27    1 : tunables   32   16    8 : slabdata    902    902    128 : globalstat 7302594  20552  3143   42 				   0    0    0 7028907 : cpustat 7831852 464581 1225487  21516
mqueue_inode_cache      1      4    912    4    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat       4      4     1    0 				   0    0    0    0 : cpustat      0      1      0      0
jfs_mp              9898  22572    144   27    1 : tunables   32   16    8 : slabdata    836    836      0 : globalstat   36371  19456   836    0 				   0    0    0 12035 : cpustat  22235   2728   2862    188
jfs_ip             14336  14340   1240    3    1 : tunables   24   12    8 : slabdata   4780   4780      0 : globalstat   14469  14338  4783    3 				   0    0    0    0 : cpustat   9562   4879    105      0
nfs_direct_cache       0      0     96   41    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
nfs_write_data        36     40    808    5    1 : tunables   32   16    8 : slabdata      8      8      0 : globalstat      40     40     8    0 				   0    0    0    0 : cpustat     28      8      0      0
nfs_read_data         32     35    776    5    1 : tunables   32   16    8 : slabdata      7      7      0 : globalstat      35     35     7    0 				   0    0    0    0 : cpustat     25      7      0      0
nfs_inode_cache        0      0   1024    4    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
nfs_page               0      0    120   33    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
isofs_inode_cache      0      0    672    6    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
minix_inode_cache      0      0    688    5    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
hugetlbfs_inode_cache      1      6    640    6    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat       6      6     1    0 				   0    0    0    0 : cpustat      0      1      0      0
ext2_inode_cache       0      0    784    5    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat      10      6     2    2 				   0    0    0    2 : cpustat      1      2      1      0
ext2_xattr             0      0    112   35    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
journal_handle         0      0     48   81    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
journal_head           0      0    120   33    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
revoke_table           0      0     40   96    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
revoke_record          0      0     56   69    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
ext3_inode_cache       0      0    832    4    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
ext3_xattr             0      0    112   35    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
reiser_inode_cache   2479   4645    744    5    1 : tunables   32   16    8 : slabdata    929    929      0 : globalstat   47749  38713  7992   12 				   0    5    0 36777 : cpustat  46050   8794  15147    441
dnotify_cache          0      0     64   61    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
dquot                  0      0    240   16    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
eventpoll_pwq          0      0     96   41    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
eventpoll_epi          0      0    176   22    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
inotify_event_cache      0      0     64   61    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
inotify_watch_cache      0      0     88   45    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
kioctx                 0      0    344   11    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
kiocb                  0      0    240   16    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
fasync_cache           1     81     48   81    1 : tunables   32   16    8 : slabdata      1      1      0 : globalstat      16     16     1    0 				   0    0    0    0 : cpustat      0      1      0      0
shmem_inode_cache     13     27    832    9    2 : tunables   32   16    8 : slabdata      3      3      0 : globalstat     207     36     7    4 				   0    0    0   13 : cpustat     45     27     46      0
posix_timers_cache      0      0    192   20    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
uid_cache              5    180     88   45    1 : tunables   32   16    8 : slabdata      4      4      0 : globalstat     160     48     4    0 				   0    0    0    2 : cpustat      1     10      4      0
sgpool-128            40     41   4096    1    1 : tunables   24   12    8 : slabdata     40     41      0 : globalstat   10180   1897  6838 3329 				   0   25    0 7536 : cpustat  23608   7788  23715    107
sgpool-64             48     54   2072    3    2 : tunables   24   12    8 : slabdata     17     18      0 : globalstat    9936    142   946  912 				   0    0    0 7932 : cpustat  32703   2991  27687     17
sgpool-32             76     91   1048    7    2 : tunables   24   12    8 : slabdata     12     13      0 : globalstat   23205    863   552  413 				   0    0    0 21347 : cpustat  88941   3350  70773     98
sgpool-16             57     84    536    7    1 : tunables   32   16    8 : slabdata     11     12      0 : globalstat   15172   1105   898  675 				   0    1    0 10527 : cpustat 104403   2486  96225     95
sgpool-8            1638   1694    280   14    1 : tunables   32   16    8 : slabdata    120    121      0 : globalstat  225720   1678 10531  485 				   0    0    0 221585 : cpustat 7581493  16716 7374798    222
blkdev_ioc          2396   3724     80   49    1 : tunables   32   16    8 : slabdata     76     76      0 : globalstat    8786   2807   124    1 				   0    0    0 3417 : cpustat   6496    668   1363      3
blkdev_queue        2077   2090    736    5    1 : tunables   32   16    8 : slabdata    418    418      0 : globalstat    2175   2088   418    0 				   0    0    0    0 : cpustat   1621    456      0      0
blkdev_requests    20704  24063    288   13    1 : tunables   32   16    8 : slabdata   1851   1851    128 : globalstat 7302343  20139  9680   74 				   0    1    0 7029127 : cpustat 7828154 468475 1225524  21502
biovec-(256)         260    260   4096    1    1 : tunables   24   12    8 : slabdata    260    260      0 : globalstat     260    260   260    0 				   0    0    0    0 : cpustat      0    260      0      0
biovec-128         20130  21342   2072    3    2 : tunables   24   12    8 : slabdata   7108   7114     96 : globalstat 14305237  50839 426881  321 				   0    9    0 13561933 : cpustat 14336409 1735813 2211056 104468
biovec-64            279    301   1048    7    2 : tunables   24   12    8 : slabdata     43     43      1 : globalstat    6048    349   566  461 				   0    0    0 3557 : cpustat   5398    887   2389     49
biovec-16            284    294    280   14    1 : tunables   32   16    8 : slabdata     21     21      0 : globalstat    4492    336   156  135 				   0    0    0  717 : cpustat   1590    365    966      0
biovec-4             288    315     88   45    1 : tunables   32   16    8 : slabdata      7      7      0 : globalstat    4115    334   104   97 				   0    0    0  351 : cpustat   1286    268    931      0
biovec-1             295    480     40   96    1 : tunables   32   16    8 : slabdata      5      5      0 : globalstat 1167384 998515 10923  307 				   0    1    0 950046 : cpustat 1172129  75246 282589  14462
bio                20200  26158    136   29    1 : tunables   32   16    8 : slabdata    902    902    128 : globalstat 15436635 998436 44317  109 				   0    2    0 14525763 : cpustat 16155826 1007812 2529700  88177
file_lock_cache        3     63    184   21    1 : tunables   32   16    8 : slabdata      3      3      0 : globalstat   23157 18446744073709551615   107   18 				   0    1    0 18144 : cpustat 235351   6364 218466   5102
sock_inode_cache     145    185    704    5    1 : tunables   32   16    8 : slabdata     37     37      0 : globalstat    3733   1520   311   27 				   0    0    0 1832 : cpustat  56540    630  55116     77
skbuff_head_cache    223    377    296   13    1 : tunables   32   16    8 : slabdata     28     29      0 : globalstat   89310    754   355  126 				   0    1    0 82912 : cpustat 166581   6053  89470     40
proc_inode_cache      30     72    656    6    1 : tunables   32   16    8 : slabdata     12     12      0 : globalstat   52914  14318  5085   17 				   0    1    0 35491 : cpustat 187956   7259 158540   1166
sigqueue              32     60    192   20    1 : tunables   32   16    8 : slabdata      3      3      0 : globalstat   43177    178   506  485 				   0    0    0 34643 : cpustat 161274   2921 129550      0
radix_tree_node    98990 100149    560    7    1 : tunables   32   16    8 : slabdata  14307  14307      0 : globalstat  308422  98990 18756   36 				   0    2    0 197446 : cpustat 340075  34275  77395    607
bdev_cache          2053   2065    816    5    1 : tunables   32   16    8 : slabdata    413    413      0 : globalstat    7161   2069   871   72 				   0    3    0 4039 : cpustat  12144   1224   7264     12
sysfs_dir_cache   107730 107830     96   41    1 : tunables   32   16    8 : slabdata   2630   2630      0 : globalstat  108179 107788  2633    3 				   0    0    0  237 : cpustat 223728   7915 123676      0
mnt_cache           2067   2116    168   23    1 : tunables   32   16    8 : slabdata     92     92      0 : globalstat    2236   2078    93    1 				   0    0    0    1 : cpustat   1986    190    108      0
inode_cache        15454  27840    624    6    1 : tunables   32   16    8 : slabdata   4640   4640      0 : globalstat   96641  84757 14283    4 				   0    2    0 60059 : cpustat 245872  15846 184712   1508
dentry_cache       22193  73488    248   16    1 : tunables   32   16    8 : slabdata   4593   4593     16 : globalstat  476186 155539 16442    4 				   0    2    0 162244 : cpustat 977680  42042 806406  28921
filp               17126  19558    272   14    1 : tunables   32   16    8 : slabdata   1397   1397      0 : globalstat  293377  86331  8745   14 				   0    0    0 148612 : cpustat 1991096  26474 1838295  13557
names_cache            6      6   4096    1    1 : tunables   24   12    8 : slabdata      6      6      0 : globalstat   31097   3452  8779 1892 				   0   25    0 28367 : cpustat 5531755  11901 5514840    446
idr_layer_cache       99    105    552    7    1 : tunables   32   16    8 : slabdata     15     15      0 : globalstat     214    102    16    1 				   0    0    0    5 : cpustat     91     48     35      0
buffer_head          685    770    112   35    1 : tunables   32   16    8 : slabdata     22     22      0 : globalstat 1174381 18446744073709547780 32092   15 				   0    1    0 845030 : cpustat 1087765 100094 321492  20698
mm_struct           4166   4466   1136    7    2 : tunables   24   12    8 : slabdata    638    638      0 : globalstat   77172  14665  3164   33 				   0    1    0 60988 : cpustat 480310   8968 422435   1700
vm_area_struct     53415  61826    208   19    1 : tunables   32   16    8 : slabdata   3254   3254      0 : globalstat 1922533 267113 23347    3 				   0    2    0 1422609 : cpustat 8994889 279735 7629557 169064
fs_cache            4170   4320     88   45    1 : tunables   32   16    8 : slabdata     96     96      0 : globalstat   68318  14653   450    2 				   0    0    0 54135 : cpustat 308299   5014 254064    959
files_cache         4161   4221    840    9    2 : tunables   32   16    8 : slabdata    469    469      0 : globalstat   67848  14651  2384   10 				   0    0    0 54135 : cpustat 307057   6257 254079    944
signal_cache        4213   4230    656    6    1 : tunables   32   16    8 : slabdata    705    705      0 : globalstat   42774  14765  3526   21 				   0    3    0 27535 : cpustat 308378   5860 281293   1197
sighand_cache       4205   4212   2088    3    2 : tunables   24   12    8 : slabdata   1403   1404      0 : globalstat   43170  14754  7299   35 				   0    1    0 27474 : cpustat 303872  10260 280698   1757
task_struct         4231   4240   1776    4    2 : tunables   24   12    8 : slabdata   1060   1060      0 : globalstat   43644  14780  5375   17 				   0    0    0 28078 : cpustat 305889   8357 280225   1723
anon_vma           17095  23409     48   81    1 : tunables   32   16    8 : slabdata    289    289      0 : globalstat  323371 113848  1827    0 				   0    0    0 130563 : cpustat 1324543  30791 1187916  19777
shared_policy_node      0      0     80   49    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
numa_policy           52    576     40   96    1 : tunables   32   16    8 : slabdata      6      6      0 : globalstat   52186  14679   156    2 				   0    0    0 44918 : cpustat 291287   3858 249238    937
size-131072(DMA)       0      0 131072    1   32 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-131072            0      0 131072    1   32 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-65536(DMA)        0      0  65536    1   16 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-65536             1      1  65536    1   16 : tunables    8    4    0 : slabdata      1      1      0 : globalstat       1      1     1    0 				   0    0    0    0 : cpustat      0      1      0      0
size-32768(DMA)        0      0  32768    1    8 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-32768             3      4  32768    1    8 : tunables    8    4    0 : slabdata      3      4      0 : globalstat       6      5     6    2 				   0    0    0    2 : cpustat      1      6      2      0
size-16384(DMA)        0      0  16384    1    4 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-16384            16     24  16384    1    4 : tunables    8    4    0 : slabdata     16     24      0 : globalstat     107     42    74   14 				   0    0    0   82 : cpustat     27     93     22      0
size-8192(DMA)         0      0   8192    1    2 : tunables    8    4    0 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-8192           6205   6208   8192    1    2 : tunables    8    4    0 : slabdata   6205   6208      0 : globalstat    6435   6208  6238   30 				   0    0    0   54 : cpustat 133422   6374 133537      0
size-4096(DMA)         0      0   4096    1    1 : tunables   24   12    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-4096             78     92   4096    1    1 : tunables   24   12    8 : slabdata     78     92      0 : globalstat    2950     93   327  233 				   0    0    0  321 : cpustat 100236    723 100580      0
size-2048(DMA)         0      0   2072    3    2 : tunables   24   12    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-2048           4825   6261   2072    3    2 : tunables   24   12    8 : slabdata   2083   2087      0 : globalstat   49346  11316  4775    4 				   0    2  309 36653 : cpustat 151297   8238 117695    682
size-1024(DMA)         0      0   1048    7    2 : tunables   24   12    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-1024           8469   8519   1048    7    2 : tunables   24   12    8 : slabdata   1213   1217      0 : globalstat   17053   8533  1259   30 				   0    0   69 4532 : cpustat  42692   2096  31844     22
size-512(DMA)          0     21    536    7    1 : tunables   32   16    8 : slabdata      0      3      0 : globalstat     269     28     4    1 				   0    0    0  105 : cpustat   4056     40   3991      0
size-512            9372  12159    536    7    1 : tunables   32   16    8 : slabdata   1737   1737      0 : globalstat  122828  14020  2133    1 				   0    1  711 106723 : cpustat 817549   9562 711605    153
size-256(DMA)          0      0    280   14    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-256            7943   8036    280   14    1 : tunables   32   16    8 : slabdata    574    574      0 : globalstat   14951   8058   580    0 				   0    0  105 1031 : cpustat 278113   1176 270419      1
size-128(DMA)          0      0    152   26    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-64(DMA)           0      0     88   45    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-64            35404  37215     88   45    1 : tunables   32   16    8 : slabdata    827    827      0 : globalstat   56241  38196   940    0 				   0    0 12399 14124 : cpustat 188926   3815 155491    150
size-32(DMA)           0      0     56   69    1 : tunables   32   16    8 : slabdata      0      0      0 : globalstat       0      0     0    0 				   0    0    0    0 : cpustat      0      0      0      0
size-128           54824  56108    152   26    1 : tunables   32   16    8 : slabdata   2158   2158      0 : globalstat   63409  54921  2230    0 				   0    0 1698 3782 : cpustat 333072   5101 281240     54
size-32            49995  50163     56   69    1 : tunables   32   16    8 : slabdata    727    727      0 : globalstat   56325  50193   731    2 				   0    0   39 2001 : cpustat 281278   4059 233361     19
kmem_cache           138    138   1280    3    1 : tunables   24   12    8 : slabdata     46     46      0 : globalstat     138    138    46    0 				   0    0    0    0 : cpustat     69     64      0      0


* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 19:50   ` Badari Pulavarty
@ 2005-06-16 20:37     ` Andrew Morton
  2005-06-16 23:43       ` Badari Pulavarty
  2005-06-16 22:42     ` 2.6.12-rc6-mm1 & 2K lun testing William Lee Irwin III
  1 sibling, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2005-06-16 20:37 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: linux-kernel, linux-mm

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> > 
> > We seem to be always ooming when allocating scsi command structures. 
> > Perhaps the block-level request structures are being allocated with
> > __GFP_WAIT, but it's a bit odd.  Which I/O scheduler?  If cfq, does
> > reducing /sys/block/*/queue/nr_requests help?
> 
> Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
> my devices. I also changed "min_free_kbytes" to 64M.

Yeah, that monster cfq queue depth continues to hurt in corner cases.

> Response time is still bad. Here is the vmstat, meminfo, slabinfo
> and profile output. I am not sure why the profile output shows
> default_idle(), when vmstat shows 100% CPU sys.

(please inline text rather than using attachments)

> MemTotal:      7209056 kB
> ...
> Dirty:         5896240 kB

That's not going to help - we're way over 40% there, so the VM is getting
into some trouble.

Try reducing the dirty limits in /proc/sys/vm by a lot to confirm that it
helps.
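
Something along these lines - the exact values are just a starting
point for the experiment:

	echo 5 > /proc/sys/vm/dirty_ratio
	echo 2 > /proc/sys/vm/dirty_background_ratio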

There are various bits of slop and hysteresis and deliberate overshoot in
page-writeback.c which are there to enhance IO batching and to reduce CPU
consumption.  A few megs here and there adds up when you multiply it by
2000...

Try this:

diff -puN mm/page-writeback.c~a mm/page-writeback.c
--- 25/mm/page-writeback.c~a	Thu Jun 16 13:36:29 2005
+++ 25-akpm/mm/page-writeback.c	Thu Jun 16 13:36:54 2005
@@ -501,6 +501,8 @@ void laptop_sync_completion(void)
 
 static void set_ratelimit(void)
 {
+	ratelimit_pages = 32;
+	return;
 	ratelimit_pages = total_pages / (num_online_cpus() * 32);
 	if (ratelimit_pages < 16)
 		ratelimit_pages = 16;
_



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 22:42     ` 2.6.12-rc6-mm1 & 2K lun testing William Lee Irwin III
@ 2005-06-16 22:25       ` Badari Pulavarty
  2005-06-16 22:58         ` William Lee Irwin III
  0 siblings, 1 reply; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-16 22:25 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: Andrew Morton, Linux Kernel Mailing List, linux-mm

On Thu, 2005-06-16 at 15:42, William Lee Irwin III wrote:
> On Thu, Jun 16, 2005 at 12:50:59PM -0700, Badari Pulavarty wrote:
> > Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
> > my devices. I also changed "min_free_kbytes" to 64M.
> > Response time is still bad. Here is the vmstat, meminfo, slabinfo
> > and profile output. I am not sure why the profile output shows
> > default_idle(), when vmstat shows 100% CPU sys.
> 
> It's because you're sorting on the third field of readprofile(1),
> which is pure gibberish. Undoing this mistake will immediately
> enlighten you.

Hmm.. I was under the impression that it gives useful info..

Here is what the readprofile man page says:

       Print the 20 most loaded procedures:
          readprofile | sort -nr +2 | head -20



> Also, turn off slab poisoning when doing performance analyses.

It's already off. I am not trying to compare performance here.
I was trying to analyze VM behaviour with filesystem tests.
(With "raw" devices, the machine is perfectly happy - but with
the filesystem cache it crawls.)

Thanks,
Badari



* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 19:50   ` Badari Pulavarty
  2005-06-16 20:37     ` Andrew Morton
@ 2005-06-16 22:42     ` William Lee Irwin III
  2005-06-16 22:25       ` Badari Pulavarty
  1 sibling, 1 reply; 25+ messages in thread
From: William Lee Irwin III @ 2005-06-16 22:42 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Andrew Morton, Linux Kernel Mailing List, linux-mm

On Thu, Jun 16, 2005 at 12:50:59PM -0700, Badari Pulavarty wrote:
> Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
> my devices. I also changed "min_free_kbytes" to 64M.
> Response time is still bad. Here is the vmstat, meminfo, slabinfo
> and profile output. I am not sure why the profile output shows
> default_idle(), when vmstat shows 100% CPU sys.

It's because you're sorting on the third field of readprofile(1),
which is pure gibberish. Undoing this mistake will immediately
enlighten you.

Also, turn off slab poisoning when doing performance analyses.


-- wli


* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 22:25       ` Badari Pulavarty
@ 2005-06-16 22:58         ` William Lee Irwin III
  0 siblings, 0 replies; 25+ messages in thread
From: William Lee Irwin III @ 2005-06-16 22:58 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Andrew Morton, Linux Kernel Mailing List, linux-mm

On Thu, 2005-06-16 at 15:42, William Lee Irwin III wrote:
>> It's because you're sorting on the third field of readprofile(1),
>> which is pure gibberish. Undoing this mistake will immediately
>> enlighten you.

On Thu, Jun 16, 2005 at 03:25:42PM -0700, Badari Pulavarty wrote:
> Hmm.. I was under the impression that it gives useful info..
> Here is what the readprofile man page says:
>        Print the 20 most loaded procedures:
>           readprofile | sort -nr +2 | head -20

Unfortunately it's bunk. Sorting by hits gives a much better idea
of where the time is going because it corresponds to time. That's
done with readprofile | sort -nr +0 | head -20


On Thu, 2005-06-16 at 15:42, William Lee Irwin III wrote:
>> Also, turn off slab poisoning when doing performance analyses.
> It's already off. I am not trying to compare performance here.
> I was trying to analyze VM behaviour with filesystem tests.
> (With "raw" devices, the machine is perfectly happy - but with
> the filesystem cache it crawls.)

check_poison_obj(), which appears in your profile, exists only when
CONFIG_DEBUG_SLAB is set.


-- wli


* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 20:37     ` Andrew Morton
@ 2005-06-16 23:43       ` Badari Pulavarty
  2005-06-17  0:51         ` Andrew Morton
  0 siblings, 1 reply; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-16 23:43 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux Kernel Mailing List, linux-mm

On Thu, 2005-06-16 at 13:37, Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> >
> > > 
> > > We seem to be always ooming when allocating scsi command structures. 
> > > Perhaps the block-level request structures are being allocated with
> > > __GFP_WAIT, but it's a bit odd.  Which I/O scheduler?  If cfq, does
> > > reducing /sys/block/*/queue/nr_requests help?
> > 
> > Yes. I am using CFQ scheduler. I changed nr_requests to 4 for all
> > my devices. I also changed "min_free_kbytes" to 64M.
> 
> Yeah, that monster cfq queue depth continues to hurt in corner cases.
> 
> > Response time is still bad. Here is the vmstat, meminfo, slabinfo
> > and profle output. I am not sure why profile output shows 
> > default_idle(), when vmstat shows 100% CPU sys.
> 
> (please inline text rather than using attachments)
> 
> > MemTotal:      7209056 kB
> > ...
> > Dirty:         5896240 kB
> 
> That's not going to help - we're way over 40% there, so the VM is getting
> into some trouble.
> 
> Try reducing the dirty limits in /proc/sys/vm by a lot to confirm that it
> helps.
> 
> There are various bits of slop and hysteresis and deliberate overshoot in
> page-writeback.c which are there to enhance IO batching and to reduce CPU
> consumption.  A few megs here and there adds up when you multiply it by
> 2000...
> 
> Try this:
> 
> diff -puN mm/page-writeback.c~a mm/page-writeback.c
> --- 25/mm/page-writeback.c~a	Thu Jun 16 13:36:29 2005
> +++ 25-akpm/mm/page-writeback.c	Thu Jun 16 13:36:54 2005
> @@ -501,6 +501,8 @@ void laptop_sync_completion(void)
>  
>  static void set_ratelimit(void)
>  {
> +	ratelimit_pages = 32;
> +	return;
>  	ratelimit_pages = total_pages / (num_online_cpus() * 32);
>  	if (ratelimit_pages < 16)
>  		ratelimit_pages = 16;
> _
> 

Wow !! Reducing the dirty ratios and the above patch did the trick.
Instead of 100% sys CPU, now I have only 50% in sys.

Of course, my IO rate is not so great, but the machine responds really
really well. :)

Thanks,
Badari

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 4 3667      8  76068 285016 4777976    0    0    51 22883  419  1900  0 49  0 51
20 3667      8  76068 285744 4779312    0    0    50 23108  433  1908  0 53  0 47
10 3680      8  76080 286492 4772888    0    0    58 26266  419  1805  0 56  0 44
 6 3661      8  76024 287116 4768136    0    0    50 27894  426  1765  0 59  0 41
 7 3679      8  76156 288052 4764620    0    0   270 24391  442  1852  0 53  0 47
 3 3691      8  77604 288732 4759296    0    0    44 24312  425  1809  0 57  0 43
 3 3697      8  75896 288868 4747808    0    0    82 29504  868  3605  2 64  0 34



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-16 23:43       ` Badari Pulavarty
@ 2005-06-17  0:51         ` Andrew Morton
  2005-06-17 15:10           ` Badari Pulavarty
  0 siblings, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2005-06-17  0:51 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: linux-kernel, linux-mm

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> > Try this:
>  > 
>  > diff -puN mm/page-writeback.c~a mm/page-writeback.c
>  > --- 25/mm/page-writeback.c~a	Thu Jun 16 13:36:29 2005
>  > +++ 25-akpm/mm/page-writeback.c	Thu Jun 16 13:36:54 2005
>  > @@ -501,6 +501,8 @@ void laptop_sync_completion(void)
>  >  
>  >  static void set_ratelimit(void)
>  >  {
>  > +	ratelimit_pages = 32;
>  > +	return;
>  >  	ratelimit_pages = total_pages / (num_online_cpus() * 32);
>  >  	if (ratelimit_pages < 16)
>  >  		ratelimit_pages = 16;
>  > _
>  > 
> 
>  Wow!! Reducing the dirty ratios and the above patch did the trick.
>  Instead of 100% sys CPU, now I have only 50% in sys.

It shouldn't be necessary to do both.  Either the patch or the tuning
should fix it.  Please confirm.

Also please determine whether the deep CFQ queue depth is a problem when
the VFS tuning/patching is in place.

IOW: let's work out which of these three areas needs to be addressed.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-17  0:51         ` Andrew Morton
@ 2005-06-17 15:10           ` Badari Pulavarty
  2005-06-17 21:13             ` Andrew Morton
  0 siblings, 1 reply; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-17 15:10 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel, linux-mm

Andrew Morton wrote:
> Badari Pulavarty <pbadari@us.ibm.com> wrote:
> 
>>>Try this:
>>
...
>>
>> Wow!! Reducing the dirty ratios and the above patch did the trick.
>> Instead of 100% sys CPU, now I have only 50% in sys.
> 
> 
> It shouldn't be necessary to do both.  Either the patch or the tuning
> should fix it.  Please confirm.
> 
> Also please determine whether the deep CFQ queue depth is a problem when
> the VFS tuning/patching is in place.
> 
> IOW: let's work out which of these three areas needs to be addressed.
> 

Andrew,

Sorry for not getting back earlier. I am running into weird problems.
When running "dd" write tests to 2048 ext3 filesystems, just with your
patch (no dirty ratio or CFQ queue depth tuning), I see "buff"
increasing instead of "cache", and I see "bi" instead of "bo".
What's going on here?

But the files are getting written to and increasing in size. I am
really confused. Why are we reading stuff into buffers and not
writing to the cache or disk?

Thanks,
Badari


procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
..
  2  0      4 6339920  42712  24884    0    0     0    19  413  1237 46  6 48  0
  2  0      4 6233728  42748  25364    0    0     0    13  380   732 50  3 47  0
  0  0      4 6168192  42784  25328    0    0     0    13  336   485 44  2 54  0
  7 914      4 6156672  71520  24456    0    0  1559    15  336  3498  2 13 61 24
 18 1525     4 6081296 145752  24528    0    0  3789    13  813  7354 19 36  0 45
  7 1843     4 5995024 220824  24276    0    0  3807    13  883  6637 25 47  0 29
  6 2046     4 5898740 299228  23788    0    0  3955    13  876  6372 25 60  0 15
 13 2046     4 5790588 385156  24548    0    0  4301    13  860  7171  0 59  0 41
 13 2044     4 5676452 475736  24784    0    0  4533     0  848  7169  0 69  0 31
 18 2044     4 5557836 569840  24592    0    0  4710     0  841  7227  0 74  0 26
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
  9 2046     4 5438480 665492  24400    0    0  4785     0  833  7244  0 77  0 23
  5 2046     4 5318292 761148  24204    0    0  4787     0  835  7152  0 71  0 29
  5 2046     4 5197580 857284  24560    0    0  4806     8  838  7340  0 71  0 29
 18 2045     4 5078328 953056  24248    0    0  4789     0  833  7565  0 68  0 32
 11 2046     4 4958484 1049240 24556    0    0  4809     0  838  7558  0 68  0 32
 20 2044     4 4839300 1144704 24552    0    0  4777     0  840  7479  0 67  0 33
 10 2045     4 4720740 1239784 24416    0    0  4757     0  842  7446  0 68  0 32
  6 2048     4 4602264 1334736 24408    0    0  4755     0  844  7437  0 66  0 34
 10 2044     4 4483712 1429272 24300    0    0  4741     0  873  7433  0 68  0 31
 12 2044     4 4366552 1523220 24780    0    0  4712    17  933  7431  0 68  0 31
  5 2047     4 4248488 1617952 24476    0    0  4753     0  873  7396  0 69  0 31
 10 2047     4 4130764 1712128 24212    0    0  4717     0  840  7437  0 66  0 34
  9 2046     4 4011812 1807536 24264    0    0  4776     0  839  7402  0 68  0 32
  9 2046     4 3892444 1903080 24180    0    0  4781     0  839  7386  0 70  0 30
 15 2046     4 3772796 1998880 24872    0    0  4794     5  835  7358  0 72  0 28
  5 2047     4 3656624 2092104 24528    0    0  4701     0  838  7358  0 67  0 33
 12 2047     4 3547912 2178752 24568    0    0  4392     0  866  7246  0 57  0 43
 10 2046     4 3443524 2261348 25048    0    0  4188    16  880  7038  0 53  0 47
  7 2045     4 3340784 2342932 24992    0    0  4149     0  868  7132  0 51  0 49
  1 2048     4 3239292 2423576 24328    0    0  4097     0  874  7172  0 50  0 50
 12 2044     4 3134688 2507120 24892    0    0  4217     0  873  7294  0 54  0 46
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
  4 2047     4 3026604 2593700 24484    0    0  4349     0  869  7335  0 56  0 44
 16 2046     4 2916504 2681900 24520    0    0  4421     0  861  7317  0 58  0 42
  2 2047     4 2806032 2770784 24388    0    0  4451     0  865  7326  0 59  0 41
 12 2046     4 2694388 2860208 24748    0    0  4481     0  865  7304  0 61  0 39
  8 2044     4 2584656 2948112 24564    0    0  4423     0  857  7245  0 61  0 39



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-rc6-mm1 & 2K lun testing
  2005-06-17 15:10           ` Badari Pulavarty
@ 2005-06-17 21:13             ` Andrew Morton
  2005-06-22  0:34               ` 2.6.12-mm1 & 2K lun testing (JFS problem ?) Badari Pulavarty
  0 siblings, 1 reply; 25+ messages in thread
From: Andrew Morton @ 2005-06-17 21:13 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: linux-kernel, linux-mm

Badari Pulavarty <pbadari@us.ibm.com> wrote:
>
> > It shouldn't be necessary to do both.  Either the patch or the tuning
> > should fix it.  Please confirm.
> > 
> > Also please determine whether the deep CFQ queue depth is a problem when
> > the VFS tuning/patching is in place.
> > 
> > IOW: let's work out which of these three areas needs to be addressed.
> > 
> 
> Andrew,
> 
> Sorry for not getting back earlier. I am running into weird problems.
> When running "dd" write tests to 2048 ext3 filesystems, just with your
> patch (no dirty ratio or CFQ queue depth tuning), I see "buff"
> increasing instead of "cache", and I see "bi" instead of "bo".
> What's going on here?

Beats me.  Are you sure you're not running a broken vmstat?

`buff' would increase if you were accidentally writing to the block
device /dev/sda1 itself rather than to a file on the filesystem mounted
from it, but I don't know why vmstat would be getting confused over the
direction of the I/O.

> 
> procs -----------memory---------- ---swap-- -----io---- --system-- 
> ----cpu----
>   r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us 
> sy id wa
> ..
>   2  0      4 6339920  42712  24884    0    0     0    19  413  1237 46 
>   6 48  0

You're wordwrapping...

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-17 21:13             ` Andrew Morton
@ 2005-06-22  0:34               ` Badari Pulavarty
  2005-06-22  1:41                 ` William Lee Irwin III
  2005-06-22 13:50                 ` Dave Kleikamp
  0 siblings, 2 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-22  0:34 UTC (permalink / raw)
  To: Andrew Morton, shaggy; +Cc: linux-kernel, linux-mm

Hi Andrew & Shaggy,

Here is the summary of 2K lun testing on 2.6.12-mm1.

When I tune dirty ratios and CFQ queue depths, things
seem to be running fine.

	echo 20 > /proc/sys/vm/dirty_ratio
	echo 20 > /proc/sys/vm/overcommit_ratio
	echo 4 > /sys/block/<device>/queue/nr_requests
	
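
(Roughly how the dirty_ratio knob takes effect; a heavily simplified,
hedged sketch of get_dirty_limits() from 2.6-era mm/page-writeback.c,
with approximate names, not the verbatim source:

static void get_dirty_limits(long *pbackground, long *pdirty)
{
	/* the /proc/sys/vm knobs: dirty_background_ratio, dirty_ratio */
	*pbackground = (dirty_background_ratio * total_pages) / 100;
	*pdirty = (vm_dirty_ratio * total_pages) / 100;
	/* writeback starts at *pbackground; writers throttle at *pdirty */
}

With dirty_ratio at 20 and MemTotal around 7.2 GB, writers get
throttled at roughly 1.4 GB of dirty memory, instead of letting Dirty
climb toward the ~5.9 GB seen earlier in the thread.)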

But I am running into a JFS problem. I can't kill my
"dd" processes. They all get stuck in:

(I am going to try ext3).

dd            D 0000000000000000     0 12943      1               12939 (NOTLB)
ffff81010612d8f8 0000000000000086 ffff81019677a380 000000000003ffff
       00000000d5b95298 ffff81010612d918 0000000000000003 ffff810169f63880
       00000076d9f1ea00 0000000000000001
Call Trace:<ffffffff802fb31f>{submit_bio+223} <ffffffff8026a8e1>{txBegin+625}
       <ffffffff80130540>{default_wake_function+0} <ffffffff80130540>{default_wake_function+0}
       <ffffffff80250a8b>{jfs_commit_inode+155} <ffffffff80250daa>{jfs_write_inode+58}
       <ffffffff801a8857>{__writeback_single_inode+551} <ffffffff80250929>{jfs_get_blocks+521}
       <ffffffff8015dd4c>{find_get_page+92} <ffffffff80185555>{__find_get_block_slow+85}
       <ffffffff801a8e7c>{generic_sync_sb_inodes+524} <ffffffff801a91cd>{writeback_inodes+125}
       <ffffffff80164aa4>{balance_dirty_pages_ratelimited+228} <ffffffff8015eb65>{generic_file_buffered_write+1221}
       <ffffffff8013b3a5>{current_fs_time+85} <ffffffff801a9254>{__mark_inode_dirty+52}
       <ffffffff8019e4ac>{inode_update_time+188} <ffffffff8015effa>{__generic_file_aio_write_nolock+938}
       <ffffffff8016efa5>{unmap_vmas+965} <ffffffff8015f1de>{__generic_file_write_nolock+158}
       <ffffffff8017149e>{zeromap_page_range+990} <ffffffff8014d0c0>{autoremove_wake_function+0}
       <ffffffff802941b1>{__up_read+33} <ffffffff8015f345>{generic_file_write+101}
       <ffffffff80183b39>{vfs_write+233} <ffffffff80183ce3>{sys_write+83}
       <ffffffff8010dc8e>{system_call+126}

# ps -alx 

...
0     0 12923     1  18   0   2900   512 txBegi D    pts/1      0:01 dd if /dev/zero of /mnt2030/
0     0 12925     1  18   0   2896   512 txBegi D    pts/1      0:02 dd if /dev/zero of /mnt2029/
0     0 12927     1  18   0   2896   512 txBegi D    pts/1      0:01 dd if /dev/zero of /mnt2032/
0     0 12928     1  18   0   2900   512 txBegi D    pts/1      0:02 dd if /dev/zero of /mnt2034/
0     0 12930     1  18   0   2900   512 txBegi D    pts/1      0:02 dd if /dev/zero of /mnt2035/
0     0 12932     1  18   0   2896   508 txBegi D    pts/1      0:02 dd if /dev/zero of /mnt2037/
0     0 12933     1  18   0   2896   512 txBegi D    pts/1      0:02 dd if /dev/zero of /mnt2038/
0     0 12935     1  18   0   2900   512 txBegi D    pts/1      0:03 dd if /dev/zero of /mnt2040/


Thanks,
Badari


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-22  0:34               ` 2.6.12-mm1 & 2K lun testing (JFS problem ?) Badari Pulavarty
@ 2005-06-22  1:41                 ` William Lee Irwin III
  2005-06-22 16:23                   ` Badari Pulavarty
  2005-06-22 13:50                 ` Dave Kleikamp
  1 sibling, 1 reply; 25+ messages in thread
From: William Lee Irwin III @ 2005-06-22  1:41 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Andrew Morton, shaggy, linux-kernel, linux-mm

On Tue, Jun 21, 2005 at 05:34:54PM -0700, Badari Pulavarty wrote:
> Hi Andrew & Shaggy,
> Here is the summary of 2K lun testing on 2.6.12-mm1.
> When I tune dirty ratios and CFQ queue depths, things
> seem to be running fine.
> 	echo 20 > /proc/sys/vm/dirty_ratio
> 	echo 20 > /proc/sys/vm/overcommit_ratio
> 	echo 4 > /sys/block/<device>/queue/nr_requests
> But I am running into a JFS problem. I can't kill my
> "dd" processes. They all get stuck in:
> (I am going to try ext3).

If you could get unabridged profiling data for raw vs. fs (so it can
be properly sorted), I would be interested in that. Early indications,
obtained by re-sorting the truncated profile listings, were large
amounts of time spent in shrink_zone(): 26.3 times as much time as in
default_idle(). Typically copying to and from userspace is an enormous
overhead, but it isn't observable in the truncated/mis-sorted profiles,
which calls them into question, barring unreported usage of O_DIRECT.
There are also no totals reported, which would be helpful for
interpreting realtime behavior.


-- wli

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-22  0:34               ` 2.6.12-mm1 & 2K lun testing (JFS problem ?) Badari Pulavarty
  2005-06-22  1:41                 ` William Lee Irwin III
@ 2005-06-22 13:50                 ` Dave Kleikamp
  2005-06-22 16:56                   ` Badari Pulavarty
  2005-06-22 21:02                   ` Badari Pulavarty
  1 sibling, 2 replies; 25+ messages in thread
From: Dave Kleikamp @ 2005-06-22 13:50 UTC (permalink / raw)
  To: Badari Pulavarty; +Cc: Andrew Morton, linux-kernel, linux-mm

On Tue, 2005-06-21 at 17:34 -0700, Badari Pulavarty wrote:
> Hi Andrew & Shaggy,
> 
> Here is the summary of 2K lun testing on 2.6.12-mm1.
> 
> When I tune dirty ratios and CFQ queue depths, things
> seem to be running fine.
> 
> 	echo 20 > /proc/sys/vm/dirty_ratio
> 	echo 20 > /proc/sys/vm/overcommit_ratio
> 	echo 4 > /sys/block/<device>/queue/nr_requests
> 	
> 
> But I am running into a JFS problem. I can't kill my
> "dd" processes.

Assuming you built the kernel with CONFIG_JFS_STATISTICS, can you send
me the contents of /proc/fs/jfs/txstats?

> They all get stuck in:
> 
> (I am going to try ext3).
> 
> dd            D 0000000000000000     0 12943      1               12939 (NOTLB)
> ffff81010612d8f8 0000000000000086 ffff81019677a380 000000000003ffff
>        00000000d5b95298 ffff81010612d918 0000000000000003 ffff810169f63880
>        00000076d9f1ea00 0000000000000001
> Call Trace:<ffffffff802fb31f>{submit_bio+223} <ffffffff8026a8e1>{txBegin+625}

Looks like txBegin is the problem.  Probably ran out of txBlocks.  Maybe
a stack trace of jfsCommit, jfsIO, and jfsSync threads might be useful
too.
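
(The blocking path Dave points at looks roughly like the sketch below,
modeled loosely on fs/jfs/jfs_txnmgr.c; field and macro names are
approximations, not the verbatim source. A writer that cannot get a
transaction id sleeps uninterruptibly until txEnd() frees one, which
fits the unkillable D-state dd processes:

tid_t txBegin(struct super_block *sb, int flag)
{
	tid_t tid;

retry:
	/* sync-barrier and tlock low-water checks elided */
	tid = TxAnchor.freetid;
	if (tid == 0) {
		/* no free transaction id: sleep until txEnd() releases one */
		INCREMENT(TxStat.txBegin_freetid);
		TXN_SLEEP(&TxAnchor.freewait);	/* uninterruptible */
		goto retry;
	}
	TxAnchor.freetid = TxBlock[tid].next;
	/* ... initialize and return the transaction block ... */
	return tid;
}

The "txBegin blocked by no free tid" counter in the txstats reported
later in the thread is the statistic bumped on this path.)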


-- 
David Kleikamp
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-22  1:41                 ` William Lee Irwin III
@ 2005-06-22 16:23                   ` Badari Pulavarty
  0 siblings, 0 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-22 16:23 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: Andrew Morton, shaggy, linux-kernel, linux-mm


Hi Wli,

Okay. Let me go through all the tests one more time, collecting
data. I will compare raw vs filesystem writes with dirty ratio
tuning. I will provide vmstat, slabinfo, meminfo, and profile data
for each of those. Do you need anything else?

BTW, I will send the data offline. I don't want to pollute the
list with megabytes of data.

Thanks,
Badari

On Tue, 2005-06-21 at 18:41 -0700, William Lee Irwin III wrote:
> On Tue, Jun 21, 2005 at 05:34:54PM -0700, Badari Pulavarty wrote:
> > Hi Andrew & Shaggy,
> > Here is the summary of 2K lun testing on 2.6.12-mm1.
> > When I tune dirty ratios and CFQ queue depths, things
> > seem to be running fine.
> > 	echo 20 > /proc/sys/vm/dirty_ratio
> > 	echo 20 > /proc/sys/vm/overcommit_ratio
> > 	echo 4 > /sys/block/<device>/queue/nr_requests
> > But I am running into a JFS problem. I can't kill my
> > "dd" processes. They all get stuck in:
> > (I am going to try ext3).
> 
> If you could get unabridged profiling data for raw vs. fs (so it can
> be properly sorted), I would be interested in that. Early indications,
> obtained by re-sorting the truncated profile listings, were large
> amounts of time spent in shrink_zone(): 26.3 times as much time as in
> default_idle(). Typically copying to and from userspace is an enormous
> overhead, but it isn't observable in the truncated/mis-sorted profiles,
> which calls them into question, barring unreported usage of O_DIRECT.
> There are also no totals reported, which would be helpful for
> interpreting realtime behavior.
> 
> 
> -- wli


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-22 13:50                 ` Dave Kleikamp
@ 2005-06-22 16:56                   ` Badari Pulavarty
  2005-06-22 21:02                   ` Badari Pulavarty
  1 sibling, 0 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-22 16:56 UTC (permalink / raw)
  To: Dave Kleikamp; +Cc: Andrew Morton, linux-kernel, linux-mm

I need to re-create the problem to capture the stats.
I don't see any stacks for the jfsCommit, jfsSync, or jfsIO
threads in the sysrq-t output (in /var/log/messages).
Let me re-create the problem and capture them.

Thanks,
Badari

On Wed, 2005-06-22 at 08:50 -0500, Dave Kleikamp wrote:
> On Tue, 2005-06-21 at 17:34 -0700, Badari Pulavarty wrote:
> > Hi Andrew & Shaggy,
> > 
> > Here is the summary of 2K lun testing on 2.6.12-mm1.
> > 
> > When I tune dirty ratios and CFQ queue depths, things
> > seem to be running fine.
> > 
> > 	echo 20 > /proc/sys/vm/dirty_ratio
> > 	echo 20 > /proc/sys/vm/overcommit_ratio
> > 	echo 4 > /sys/block/<device>/queue/nr_requests
> > 	
> > 
> > But I am running into a JFS problem. I can't kill my
> > "dd" processes.
> 
> Assuming you built the kernel with CONFIG_JFS_STATISTICS, can you send
> me the contents of /proc/fs/jfs/txstats?
> 
> Looks like txBegin is the problem.  Probably ran out of txBlocks.  Maybe
> a stack trace of jfsCommit, jfsIO, and jfsSync threads might be useful
> too.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: 2.6.12-mm1 & 2K lun testing  (JFS problem ?)
  2005-06-22 13:50                 ` Dave Kleikamp
  2005-06-22 16:56                   ` Badari Pulavarty
@ 2005-06-22 21:02                   ` Badari Pulavarty
  1 sibling, 0 replies; 25+ messages in thread
From: Badari Pulavarty @ 2005-06-22 21:02 UTC (permalink / raw)
  To: Dave Kleikamp; +Cc: Andrew Morton, linux-kernel, linux-mm

On Wed, 2005-06-22 at 08:50 -0500, Dave Kleikamp wrote:

> > But I am running into a JFS problem. I can't kill my
> > "dd" processes.
> 
> Assuming you built the kernel with CONFIG_JFS_STATISTICS, can you send
> me the contents of /proc/fs/jfs/txstats?

Reproduced the problem. Here are the stats:

JFS TxStats
===========
calls to txBegin = 26783
txBegin blocked by sync barrier = 0
txBegin blocked by tlocks low = 0
txBegin blocked by no free tid = 930528
calls to txBeginAnon = 8700659
txBeginAnon blocked by sync barrier = 0
txBeginAnon blocked by tlocks low = 0
calls to txLockAlloc = 50601
tLockAlloc blocked by no free lock = 0


> Looks like txBegin is the problem.  Probably ran out of txBlocks.  Maybe
> a stack trace of jfsCommit, jfsIO, and jfsSync threads might be useful
> too.

I don't see the stacks for these jfs threads in the sysrq-t
output. I wonder why sysrq-t is skipping them. Any idea?

elm3b29:/proc/sys/fs # ps -aef | grep -i jfs
root       174     1  0 02:11 ?        00:00:00 [jfsIO]
root       175     1  0 02:11 ?        00:00:01 [jfsCommit]
root       176     1  0 02:11 ?        00:00:01 [jfsCommit]
root       177     1  0 02:11 ?        00:00:02 [jfsCommit]
root       178     1  0 02:11 ?        00:00:02 [jfsCommit]
root       179     1  0 02:11 ?        00:00:00 [jfsSync]
root      7200  7759  0 05:54 pts/1    00:00:00 grep -i jfs

Thanks,
Badari


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2005-06-22 21:07 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-06-15 17:36 2.6.12-rc6-mm1 & 2K lun testing Badari Pulavarty
2005-06-15 18:30 ` Nick Piggin
2005-06-15 18:30   ` Badari Pulavarty
2005-06-15 19:02     ` Nick Piggin
2005-06-15 20:56       ` Badari Pulavarty
2005-06-16  1:48         ` Nick Piggin
2005-06-15 23:23   ` Dave Chinner
2005-06-15 21:39 ` Chen, Kenneth W
2005-06-15 22:35   ` Badari Pulavarty
2005-06-16  7:24 ` Andrew Morton
2005-06-16 19:50   ` Badari Pulavarty
2005-06-16 20:37     ` Andrew Morton
2005-06-16 23:43       ` Badari Pulavarty
2005-06-17  0:51         ` Andrew Morton
2005-06-17 15:10           ` Badari Pulavarty
2005-06-17 21:13             ` Andrew Morton
2005-06-22  0:34               ` 2.6.12-mm1 & 2K lun testing (JFS problem ?) Badari Pulavarty
2005-06-22  1:41                 ` William Lee Irwin III
2005-06-22 16:23                   ` Badari Pulavarty
2005-06-22 13:50                 ` Dave Kleikamp
2005-06-22 16:56                   ` Badari Pulavarty
2005-06-22 21:02                   ` Badari Pulavarty
2005-06-16 22:42     ` 2.6.12-rc6-mm1 & 2K lun testing William Lee Irwin III
2005-06-16 22:25       ` Badari Pulavarty
2005-06-16 22:58         ` William Lee Irwin III
