* [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-12  3:31 UTC
  To: linux-mm; +Cc: 695182, linux-kernel

Dear Linux-MM,

Seems that any i386 PAE machine will go OOM just by running a few
processes. To reproduce:
  sh -c 'n=0; while [ $n -lt 19999 ]; do sleep 600 & ((n=n+1)); done'
My machine has 64GB RAM. With previous OOM episodes, it seemed that
running (booting) it with mem=32G might avoid OOM; but an OOM was
obtained just the same, and also with lower memory:
  Memory    sleeps to OOM       free shows total
  (mem=64G)  5300               64447796
  mem=32G   10200               31155512
  mem=16G   13400               14509364
  mem=8G    14200               6186296
  mem=6G    15200               4105532
  mem=4G    16400               2041364
The machine does not run out of highmem, nor does it use any swap.
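
To watch lowmem drain while the test runs, something like this in
another window is enough (a sketch; LowTotal/LowFree appear in
/proc/meminfo on highmem-enabled i386 kernels):
  while true; do grep -E '^Low(Total|Free)' /proc/meminfo; sleep 5; done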

Comparing with my desktop PC: it has 4GB RAM installed, and free shows
3978592 total. Running the "sleep test", it simply froze after about
16400 were running: no response to ping, and I had to press the RESET
button.

---

On my large machine, 'free' fails to show about 2GB memory, e.g. with
mem=16G it shows:

root@zeno:~# free -l
             total       used       free     shared    buffers     cached
Mem:      14509364     435440   14073924          0       4068     111328
Low:        769044     120232     648812
High:     13740320     315208   13425112
-/+ buffers/cache:     320044   14189320
Swap:    134217724          0  134217724

---

Please let me know of any ideas, or if you want me to run some other
test or want to see some other output.

Thanks, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


-----

Details for when my machine was running with 64GB RAM:

In another window I was running
  cat /proc/slabinfo; free -l
repeatedly, and output of that (just before OOM) was:

+ cat /proc/slabinfo
slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
fuse_request           0      0    376   43    4 : tunables    0    0    0 : slabdata      0      0      0
fuse_inode             0      0    448   36    4 : tunables    0    0    0 : slabdata      0      0      0
bsg_cmd                0      0    288   28    2 : tunables    0    0    0 : slabdata      0      0      0
ntfs_big_inode_cache      0      0    512   32    4 : tunables    0    0    0 : slabdata      0      0      0
ntfs_inode_cache       0      0    176   46    2 : tunables    0    0    0 : slabdata      0      0      0
nfs_direct_cache       0      0     80   51    1 : tunables    0    0    0 : slabdata      0      0      0
nfs_inode_cache       28     28    584   28    4 : tunables    0    0    0 : slabdata      1      1      0
isofs_inode_cache      0      0    360   45    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache        0      0    408   40    4 : tunables    0    0    0 : slabdata      0      0      0
fat_cache              0      0     24  170    1 : tunables    0    0    0 : slabdata      0      0      0
jbd2_revoke_record      0      0     32  128    1 : tunables    0    0    0 : slabdata      0      0      0
journal_handle      4080   4080     24  170    1 : tunables    0    0    0 : slabdata     24     24      0
journal_head        1024   1024     64   64    1 : tunables    0    0    0 : slabdata     16     16      0
revoke_record        768    768     16  256    1 : tunables    0    0    0 : slabdata      3      3      0
ext4_inode_cache       0      0    584   28    4 : tunables    0    0    0 : slabdata      0      0      0
ext4_free_data         0      0     40  102    1 : tunables    0    0    0 : slabdata      0      0      0
ext4_allocation_context      0      0    112   36    1 : tunables    0    0    0 : slabdata      0      0      0
ext4_prealloc_space      0      0     72   56    1 : tunables    0    0    0 : slabdata      0      0      0
ext4_io_end            0      0    576   28    4 : tunables    0    0    0 : slabdata      0      0      0
ext4_io_page           0      0      8  512    1 : tunables    0    0    0 : slabdata      0      0      0
ext2_inode_cache       0      0    480   34    4 : tunables    0    0    0 : slabdata      0      0      0
ext3_inode_cache    1467   2079    488   33    4 : tunables    0    0    0 : slabdata     63     63      0
ext3_xattr             0      0     48   85    1 : tunables    0    0    0 : slabdata      0      0      0
dquot                168    168    192   42    2 : tunables    0    0    0 : slabdata      4      4      0
rpc_inode_cache      108    108    448   36    4 : tunables    0    0    0 : slabdata      3      3      0
UDP-Lite               0      0    576   28    4 : tunables    0    0    0 : slabdata      0      0      0
xfrm_dst_cache         0      0    320   51    4 : tunables    0    0    0 : slabdata      0      0      0
UDP                  336    336    576   28    4 : tunables    0    0    0 : slabdata     12     12      0
tw_sock_TCP           32     32    128   32    1 : tunables    0    0    0 : slabdata      1      1      0
TCP                  504    504   1152   28    8 : tunables    0    0    0 : slabdata     18     18      0
eventpoll_pwq          0      0     40  102    1 : tunables    0    0    0 : slabdata      0      0      0
blkdev_queue         264    264    968   33    8 : tunables    0    0    0 : slabdata      8      8      0
blkdev_requests      925    925    216   37    2 : tunables    0    0    0 : slabdata     25     25      0
biovec-256            10     10   3072   10    8 : tunables    0    0    0 : slabdata      1      1      0
biovec-128           105    105   1536   21    8 : tunables    0    0    0 : slabdata      5      5      0
biovec-64            588    588    768   42    8 : tunables    0    0    0 : slabdata     14     14      0
sock_inode_cache    1512   1512    384   42    4 : tunables    0    0    0 : slabdata     36     36      0
skbuff_fclone_cache    966    966    384   42    4 : tunables    0    0    0 : slabdata     23     23      0
file_lock_cache      648    648    112   36    1 : tunables    0    0    0 : slabdata     18     18      0
shmem_inode_cache   1716   1716    368   44    4 : tunables    0    0    0 : slabdata     39     39      0
Acpi-State         75990  75990     48   85    1 : tunables    0    0    0 : slabdata    894    894      0
taskstats              0      0    328   49    4 : tunables    0    0    0 : slabdata      0      0      0
proc_inode_cache    5326   5588    368   44    4 : tunables    0    0    0 : slabdata    127    127      0
sigqueue             980    980    144   28    1 : tunables    0    0    0 : slabdata     35     35      0
bdev_cache           544    544    512   32    4 : tunables    0    0    0 : slabdata     17     17      0
sysfs_dir_cache    25245  25245     80   51    1 : tunables    0    0    0 : slabdata    495    495      0
inode_cache         2083   2592    336   48    4 : tunables    0    0    0 : slabdata     54     54      0
dentry              7956  10944    128   32    1 : tunables    0    0    0 : slabdata    342    342      0
buffer_head         2847   2847     56   73    1 : tunables    0    0    0 : slabdata     39     39      0
vm_area_struct    103684 103684     88   46    1 : tunables    0    0    0 : slabdata   2254   2254      0
mm_struct           6444   6444    448   36    4 : tunables    0    0    0 : slabdata    179    179      0
signal_cache        6692   6692    576   28    4 : tunables    0    0    0 : slabdata    239    239      0
sighand_cache       6312   6312   1344   24    8 : tunables    0    0    0 : slabdata    263    263      0
task_xstate         6357   6357    832   39    8 : tunables    0    0    0 : slabdata    163    163      0
task_struct         6720   6720   1008   32    8 : tunables    0    0    0 : slabdata    210    210      0
anon_vma_chain     91970  91970     24  170    1 : tunables    0    0    0 : slabdata    541    541      0
anon_vma           57018  57018     40  102    1 : tunables    0    0    0 : slabdata    559    559      0
radix_tree_node     2357   2862    304   53    4 : tunables    0    0    0 : slabdata     54     54      0
idr_layer_cache     1908   1908    152   53    2 : tunables    0    0    0 : slabdata     36     36      0
dma-kmalloc-8192       0      0   8192    4    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-4096       0      0   4096    8    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-2048       0      0   2048   16    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-1024       0      0   1024   32    8 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-512       64     64    512   32    4 : tunables    0    0    0 : slabdata      2      2      0
dma-kmalloc-256        0      0    256   32    2 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-128        0      0    128   32    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-64         0      0     64   64    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-32         0      0     32  128    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-16         0      0     16  256    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-8          0      0      8  512    1 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-192        0      0    192   42    2 : tunables    0    0    0 : slabdata      0      0      0
dma-kmalloc-96         0      0     96   42    1 : tunables    0    0    0 : slabdata      0      0      0
kmalloc-8192          88     88   8192    4    8 : tunables    0    0    0 : slabdata     22     22      0
kmalloc-4096         408    408   4096    8    8 : tunables    0    0    0 : slabdata     51     51      0
kmalloc-2048         512    512   2048   16    8 : tunables    0    0    0 : slabdata     32     32      0
kmalloc-1024        3264   3264   1024   32    8 : tunables    0    0    0 : slabdata    102    102      0
kmalloc-512         2048   2048    512   32    4 : tunables    0    0    0 : slabdata     64     64      0
kmalloc-256         6816   6816    256   32    2 : tunables    0    0    0 : slabdata    213    213      0
kmalloc-128        14432  14432    128   32    1 : tunables    0    0    0 : slabdata    451    451      0
kmalloc-64         17728  17728     64   64    1 : tunables    0    0    0 : slabdata    277    277      0
kmalloc-32         27008  27008     32  128    1 : tunables    0    0    0 : slabdata    211    211      0
kmalloc-16         11520  11520     16  256    1 : tunables    0    0    0 : slabdata     45     45      0
kmalloc-8          18432  18432      8  512    1 : tunables    0    0    0 : slabdata     36     36      0
kmalloc-192        33514  33810    192   42    2 : tunables    0    0    0 : slabdata    805    805      0
kmalloc-96          7014   7014     96   42    1 : tunables    0    0    0 : slabdata    167    167      0
kmem_cache            32     32    128   32    1 : tunables    0    0    0 : slabdata      1      1      0
kmem_cache_node      384    384     32  128    1 : tunables    0    0    0 : slabdata      3      3      0
+ free -l
             total       used       free     shared    buffers     cached
Mem:      64447796    1086840   63360956          0        664      16428
Low:        375828     367556       8272
High:     64071968     719284   63352684
-/+ buffers/cache:    1069748   63378048
Swap:    134217724          0  134217724
+ 


Lines in syslog from just before OOM (my patched kernel with
drop_caches):

Jan 12 11:04:25 zeno kernel: drop_caches with zone=1 nr_slab=0 reclaimed_slab=0 RECLAIMABLE=1852 FREE=911
Jan 12 11:04:25 zeno kernel: after drop_caches reclaimed_slab=0 RECLAIMABLE=1852 FREE=911
Jan 12 11:04:25 zeno kernel: sh invoked oom-killer: gfp_mask=0xd0, order=1, oom_adj=0, oom_score_adj=0
Jan 12 11:04:25 zeno kernel: Pid: 6344, comm: sh Not tainted 3.2.32-pk06.11-i386 #1
Jan 12 11:04:25 zeno kernel: Call Trace:
Jan 12 11:04:25 zeno kernel:  [<c1607653>] ? printk+0x18/0x1a
Jan 12 11:04:25 zeno kernel:  [<c10776b8>] dump_header.isra.10+0x68/0x180
Jan 12 11:04:25 zeno kernel:  [<c1069807>] ? delayacct_end+0x97/0xb0
Jan 12 11:04:25 zeno kernel:  [<c11d676e>] ? ___ratelimit+0x7e/0xf0
Jan 12 11:04:25 zeno kernel:  [<c1077929>] oom_kill_process.constprop.15+0x49/0x230
Jan 12 11:04:25 zeno kernel:  [<c107a188>] ? get_page_from_freelist+0x2f8/0x4c0
Jan 12 11:04:25 zeno kernel:  [<c1077e03>] out_of_memory+0x1d3/0x2c0
Jan 12 11:04:25 zeno kernel:  [<c107a8a8>] __alloc_pages_nodemask+0x558/0x570
Jan 12 11:04:25 zeno kernel:  [<c102f4bb>] copy_process.part.39+0x5b/0xfa0
Jan 12 11:04:25 zeno kernel:  [<c103054c>] do_fork+0x12c/0x260
Jan 12 11:04:25 zeno kernel:  [<c103e9de>] ? set_current_blocked+0x2e/0x50
Jan 12 11:04:25 zeno kernel:  [<c1009a7f>] sys_clone+0x2f/0x40
Jan 12 11:04:25 zeno kernel:  [<c160ff15>] ptregs_clone+0x15/0x40
Jan 12 11:04:25 zeno kernel:  [<c160fe14>] ? sysenter_do_call+0x12/0x26
Jan 12 11:04:25 zeno kernel: Mem-Info:
Jan 12 11:04:25 zeno kernel: DMA per-cpu:
Jan 12 11:04:25 zeno kernel: CPU    0: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    1: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    2: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    3: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    4: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    5: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    6: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    7: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    8: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    9: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   10: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   11: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   12: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   13: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   14: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   15: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   16: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   17: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   18: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   19: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   20: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   21: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   22: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   23: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   24: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   25: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   26: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   27: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   28: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   29: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   30: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   31: hi:    0, btch:   1 usd:   0
Jan 12 11:04:25 zeno kernel: Normal per-cpu:
Jan 12 11:04:25 zeno kernel: CPU    0: hi:  186, btch:  31 usd:  30
Jan 12 11:04:25 zeno kernel: CPU    1: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    2: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    3: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    4: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    5: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    6: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    7: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    8: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    9: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   10: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   11: hi:  186, btch:  31 usd:   6
Jan 12 11:04:25 zeno kernel: CPU   12: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   13: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   14: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   15: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   16: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   17: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   18: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   19: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   20: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   21: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   22: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   23: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   24: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   25: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   26: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   27: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   28: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   29: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   30: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   31: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: HighMem per-cpu:
Jan 12 11:04:25 zeno kernel: CPU    0: hi:  186, btch:  31 usd:  30
Jan 12 11:04:25 zeno kernel: CPU    1: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    2: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    3: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    4: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    5: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    6: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    7: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU    8: hi:  186, btch:  31 usd:  29
Jan 12 11:04:25 zeno kernel: CPU    9: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   10: hi:  186, btch:  31 usd:  29
Jan 12 11:04:25 zeno kernel: CPU   11: hi:  186, btch:  31 usd:  20
Jan 12 11:04:25 zeno kernel: CPU   12: hi:  186, btch:  31 usd:  29
Jan 12 11:04:25 zeno kernel: CPU   13: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   14: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   15: hi:  186, btch:  31 usd:  29
Jan 12 11:04:25 zeno kernel: CPU   16: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   17: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   18: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   19: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   20: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   21: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   22: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   23: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   24: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   25: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   26: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   27: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   28: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   29: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   30: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: CPU   31: hi:  186, btch:  31 usd:   0
Jan 12 11:04:25 zeno kernel: active_anon:160474 inactive_anon:6747 isolated_anon:0
Jan 12 11:04:25 zeno kernel:  active_file:1283 inactive_file:2866 isolated_file:0
Jan 12 11:04:25 zeno kernel:  unevictable:0 dirty:6 writeback:0 unstable:0
Jan 12 11:04:25 zeno kernel:  free:15839916 slab_reclaimable:1852 slab_unreclaimable:17519
Jan 12 11:04:25 zeno kernel:  mapped:4059 shmem:144 pagetables:24767 bounce:0
Jan 12 11:04:25 zeno kernel: DMA free:3516kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15780kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:1908kB kernel_stack:2024kB pagetables:4440kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Jan 12 11:04:25 zeno kernel: lowmem_reserve[]: 0 867 62932 62932
Jan 12 11:04:25 zeno kernel: Normal free:3644kB min:3732kB low:4664kB high:5596kB active_anon:0kB inactive_anon:0kB active_file:248kB inactive_file:280kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:887976kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:7408kB slab_unreclaimable:68168kB kernel_stack:43256kB pagetables:94628kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1197 all_unreclaimable? yes
Jan 12 11:04:25 zeno kernel: lowmem_reserve[]: 0 0 496521 496521
Jan 12 11:04:25 zeno kernel: HighMem free:63352504kB min:512kB low:67316kB high:134124kB active_anon:641896kB inactive_anon:26988kB active_file:4884kB inactive_file:11184kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:63554796kB mlocked:0kB dirty:24kB writeback:0kB mapped:16232kB shmem:576kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 12 11:04:25 zeno kernel: lowmem_reserve[]: 0 0 0 0
Jan 12 11:04:25 zeno kernel: DMA: 11*4kB 2*8kB 0*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3516kB
Jan 12 11:04:25 zeno kernel: Normal: 217*4kB 10*8kB 0*16kB 0*32kB 1*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3700kB
Jan 12 11:04:25 zeno kernel: HighMem: 151*4kB 136*8kB 232*16kB 198*32kB 43*64kB 9*128kB 4*256kB 1*512kB 2*1024kB 2*2048kB 15461*4096kB = 63351580kB
Jan 12 11:04:25 zeno kernel: 4156 total pagecache pages
Jan 12 11:04:25 zeno kernel: 0 pages in swap cache
Jan 12 11:04:25 zeno kernel: Swap cache stats: add 0, delete 0, find 0/0
Jan 12 11:04:25 zeno kernel: Free swap  = 134217724kB
Jan 12 11:04:25 zeno kernel: Total swap = 134217724kB
Jan 12 11:04:25 zeno kernel: 16777200 pages RAM
Jan 12 11:04:25 zeno kernel: 16549378 pages HighMem
Jan 12 11:04:25 zeno kernel: 665251 pages reserved
Jan 12 11:04:25 zeno kernel: 635488 pages shared
Jan 12 11:04:25 zeno kernel: 261163 pages non-shared
Jan 12 11:04:25 zeno kernel: Out of memory (oom_kill_allocating_task): Kill process 6344 (sh) score 0 or sacrifice child
Jan 12 11:04:25 zeno kernel: Killed process 6345 (sleep) total-vm:1736kB, anon-rss:44kB, file-rss:200kB



* Re: [RFC] Reproducible OOM with just a few sleeps
From: Dave Hansen @ 2013-01-14 15:00 UTC
  To: paul.szabo; +Cc: linux-mm, 695182, linux-kernel

On 01/11/2013 07:31 PM, paul.szabo@sydney.edu.au wrote:
> Seems that any i386 PAE machine will go OOM just by running a few
> processes. To reproduce:
>   sh -c 'n=0; while [ $n -lt 19999 ]; do sleep 600 & ((n=n+1)); done'
> My machine has 64GB RAM. With previous OOM episodes, it seemed that
> running (booting) it with mem=32G might avoid OOM; but an OOM was
> obtained just the same, and also with lower memory:
>   Memory    sleeps to OOM       free shows total
>   (mem=64G)  5300               64447796
>   mem=32G   10200               31155512
>   mem=16G   13400               14509364
>   mem=8G    14200               6186296
>   mem=6G    15200               4105532
>   mem=4G    16400               2041364
> The machine does not run out of highmem, nor does it use any swap.

I think what you're seeing here is that, as the amount of total memory
increases, the amount of lowmem available _decreases_ due to inflation
of mem_map[] (and a few other more minor things).  The number of sleeps
you can do is bound by the number of processes, as you noticed from
ulimit.  Creating processes that don't use much memory eats a relatively
large amount of low memory.

This is a sad (and counterintuitive) fact: more RAM actually *CREATES*
RAM bottlenecks on 32-bit systems.
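
As a rough illustration (assuming ~32 bytes per struct page on i386;
the exact size is config-dependent):
  # 64GB of RAM is 16M 4kB pages, so mem_map alone costs about:
  echo $(( (64 * 1024 * 1024 / 4) * 32 / 1024 ))   # 524288 kB, ~512MB
  # ...out of only ~896MB of lowmem, which is roughly consistent with
  # the ~367MB LowTotal in the free -l output above.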

> On my large machine, 'free' fails to show about 2GB memory, e.g. with
> mem=16G it shows:
> 
> root@zeno:~# free -l
>              total       used       free     shared    buffers     cached
> Mem:      14509364     435440   14073924          0       4068     111328
> Low:        769044     120232     648812
> High:     13740320     315208   13425112
> -/+ buffers/cache:     320044   14189320
> Swap:    134217724          0  134217724

You probably have a memory hole.  mem=16G means "give me all the memory
below the physical address at 16GB".  It does *NOT* mean, "give me
enough memory such that 'free' will show ~16G available."  If you have a
1.5GB hole below 16GB, and you do mem=16G, you'll end up with ~14.5GB
available.

The e820 map (during early boot in dmesg) or /proc/iomem will let you
locate your memory holes.
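
A sketch for summing the usable ranges (assumes GNU awk for strtonum;
everything in /proc/iomem that is not "System RAM" is a hole, device
window or reservation):
  awk -F'[- :]+' '/System RAM/ { t += strtonum("0x" $2) - strtonum("0x" $1) + 1 }
    END { printf "%.1f GB usable\n", t / 2^30 }' /proc/iomem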



* Re: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-14 20:36 UTC
  To: dave; +Cc: 695182, linux-kernel, linux-mm

Dear Dave,

>> Seems that any i386 PAE machine will go OOM just by running a few
>> processes. To reproduce:
>>   sh -c 'n=0; while [ $n -lt 19999 ]; do sleep 600 & ((n=n+1)); done'
>> ...
> I think what you're seeing here is that, as the amount of total memory
> increases, the amount of lowmem available _decreases_ due to inflation
> of mem_map[] (and a few other more minor things).  The number of sleeps
> you can do is bound by the number of processes, as you noticed from
> ulimit.  Creating processes that don't use much memory eats a relatively
> large amount of low memory.
> This is a sad (and counterintuitive) fact: more RAM actually *CREATES*
> RAM bottlenecks on 32-bit systems.

I understand that more RAM leaves less lowmem. What is unacceptable is
that PAE crashes or freezes with OOM: it should gracefully handle the
issue. Noting that (for a machine with 4GB or under) PAE fails where the
HIGHMEM4G kernel succeeds and survives.

>> On my large machine, 'free' fails to show about 2GB memory ...
> You probably have a memory hole. ...
> The e820 map (during early boot in dmesg) or /proc/iomem will let you
> locate your memory holes.

Thanks, that might explain it. Output of /proc/iomem below: sorry I do
not know how to interpret it.

Cheers, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


---
root@zeno:~# cat /proc/iomem
00000000-0000ffff : reserved
00010000-00099bff : System RAM
00099c00-0009ffff : reserved
000a0000-000bffff : PCI Bus 0000:00
  000a0000-000bffff : Video RAM area
000c0000-000dffff : PCI Bus 0000:00
  000c0000-000c7fff : Video ROM
  000c8000-000cf5ff : Adapter ROM
  000cf800-000d07ff : Adapter ROM
  000d0800-000d0bff : Adapter ROM
000e0000-000fffff : reserved
  000f0000-000fffff : System ROM
00100000-7e445fff : System RAM
  01000000-01610e15 : Kernel code
  01610e16-01802dff : Kernel data
  01880000-018b2fff : Kernel bss
7e446000-7e565fff : ACPI Non-volatile Storage
7e566000-7f1e2fff : reserved
7f1e3000-7f25efff : ACPI Tables
7f25f000-7f31cfff : reserved
7f31d000-7f323fff : ACPI Non-volatile Storage
7f324000-7f333fff : reserved
7f334000-7f33bfff : ACPI Non-volatile Storage
7f33c000-7f365fff : reserved
7f366000-7f7fffff : ACPI Non-volatile Storage
7f800000-7fffffff : RAM buffer
80000000-dfffffff : PCI Bus 0000:00
  80000000-8fffffff : PCI MMCONFIG 0000 [bus 00-ff]
    80000000-8fffffff : reserved
  90000000-9000000f : 0000:00:16.0
  90000010-9000001f : 0000:00:16.1
  dd000000-ddffffff : PCI Bus 0000:08
    dd000000-ddffffff : 0000:08:03.0
  de000000-de4fffff : PCI Bus 0000:07
    de000000-de3fffff : 0000:07:00.0
    de47c000-de47ffff : 0000:07:00.0
  de600000-de6fffff : PCI Bus 0000:02
  df000000-df8fffff : PCI Bus 0000:08
    df000000-df7fffff : 0000:08:03.0
    df800000-df803fff : 0000:08:03.0
  df900000-df9fffff : PCI Bus 0000:07
  dfa00000-dfafffff : PCI Bus 0000:02
    dfa00000-dfa1ffff : 0000:02:00.1
      dfa00000-dfa1ffff : igb
    dfa20000-dfa3ffff : 0000:02:00.0
      dfa20000-dfa3ffff : igb
    dfa40000-dfa43fff : 0000:02:00.1
      dfa40000-dfa43fff : igb
    dfa44000-dfa47fff : 0000:02:00.0
      dfa44000-dfa47fff : igb
  dfb00000-dfb03fff : 0000:00:04.7
  dfb04000-dfb07fff : 0000:00:04.6
  dfb08000-dfb0bfff : 0000:00:04.5
  dfb0c000-dfb0ffff : 0000:00:04.4
  dfb10000-dfb13fff : 0000:00:04.3
  dfb14000-dfb17fff : 0000:00:04.2
  dfb18000-dfb1bfff : 0000:00:04.1
  dfb1c000-dfb1ffff : 0000:00:04.0
  dfb20000-dfb200ff : 0000:00:1f.3
  dfb21000-dfb217ff : 0000:00:1f.2
    dfb21000-dfb217ff : ahci
  dfb22000-dfb223ff : 0000:00:1d.0
    dfb22000-dfb223ff : ehci_hcd
  dfb23000-dfb233ff : 0000:00:1a.0
    dfb23000-dfb233ff : ehci_hcd
  dfb25000-dfb25fff : 0000:00:05.4
  dfffc000-dfffdfff : pnp 00:02
e0000000-fbffffff : PCI Bus 0000:80
  fbe00000-fbefffff : PCI Bus 0000:84
    fbe00000-fbe3ffff : 0000:84:00.0
    fbe40000-fbe5ffff : 0000:84:00.0
    fbe60000-fbe63fff : 0000:84:00.0
  fbf00000-fbf03fff : 0000:80:04.7
  fbf04000-fbf07fff : 0000:80:04.6
  fbf08000-fbf0bfff : 0000:80:04.5
  fbf0c000-fbf0ffff : 0000:80:04.4
  fbf10000-fbf13fff : 0000:80:04.3
  fbf14000-fbf17fff : 0000:80:04.2
  fbf18000-fbf1bfff : 0000:80:04.1
  fbf1c000-fbf1ffff : 0000:80:04.0
  fbf20000-fbf20fff : 0000:80:05.4
  fbffe000-fbffffff : pnp 00:12
fc000000-fcffffff : pnp 00:01
fd000000-fdffffff : pnp 00:01
fe000000-feafffff : pnp 00:01
feb00000-febfffff : pnp 00:01
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec40000-fec403ff : IOAPIC 2
fed00000-fed003ff : HPET 0
fed08000-fed08fff : pnp 00:0c
fed1c000-fed3ffff : reserved
  fed1c000-fed1ffff : pnp 00:0c
fed45000-fedfffff : pnp 00:01
fee00000-fee00fff : Local APIC
ff000000-ffffffff : reserved
  ff000000-ffffffff : pnp 00:0c
100000000-107fffffff : System RAM
root@zeno:~# 


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Ben Hutchings @ 2013-01-15  0:34 UTC
  To: paul.szabo, 695182; +Cc: dave, linux-kernel, linux-mm


On Tue, 2013-01-15 at 07:36 +1100, paul.szabo@sydney.edu.au wrote:
> Dear Dave,
> 
> >> Seems that any i386 PAE machine will go OOM just by running a few
> >> processes. To reproduce:
> >>   sh -c 'n=0; while [ $n -lt 19999 ]; do sleep 600 & ((n=n+1)); done'
> >> ...
> > I think what you're seeing here is that, as the amount of total memory
> > increases, the amount of lowmem available _decreases_ due to inflation
> > of mem_map[] (and a few other more minor things).  The number of sleeps
> > you can do is bound by the number of processes, as you noticed from
> > ulimit.  Creating processes that don't use much memory eats a relatively
> > large amount of low memory.
> > This is a sad (and counterintuitive) fact: more RAM actually *CREATES*
> > RAM bottlenecks on 32-bit systems.
> 
> I understand that more RAM leaves less lowmem. What is unacceptable is
> that PAE crashes or freezes with OOM: it should gracefully handle the
> issue.
[...]

Sorry, let me know where to send your refund.

Ben.

-- 
Ben Hutchings
Quantity is no substitute for quality, but it's the only one we've got.


* Re: [RFC] Reproducible OOM with just a few sleeps
From: Dave Hansen @ 2013-01-15  0:56 UTC
  To: paul.szabo; +Cc: 695182, linux-kernel, linux-mm

On 01/14/2013 12:36 PM, paul.szabo@sydney.edu.au wrote:
> I understand that more RAM leaves less lowmem. What is unacceptable is
> that PAE crashes or freezes with OOM: it should gracefully handle the
> issue. Noting that (for a machine with 4GB or under) PAE fails where the
> HIGHMEM4G kernel succeeds and survives.

You have found a delta, but you're not really making apples-to-apples
comparisons.  The page tables (a huge consumer of lowmem in your bug
reports) have much more overhead on a PAE kernel.  A process with a
single page faulted in with PAE will take at least 4 pagetable pages
(it's 7 in practice for me with sleeps).  It's 2 pages minimum (and in
practice with sleeps) on HIGHMEM4G.
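
Back-of-the-envelope with those figures (7 pagetable pages per sleep
under PAE vs 2 on HIGHMEM4G, 4kB each; rough, order-of-magnitude
numbers only):
  echo $(( 5300 * 7 * 4 ))   # ~148400 kB of lowmem for 5300 PAE sleeps
  echo $(( 5300 * 2 * 4 ))   # ~42400 kB for the same load on HIGHMEM4G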

There's probably a bug here.  But, it's incredibly unlikely to be seen
in practice on anything resembling a modern system.  The 'sleep' issue
is easily worked around by upgrading to a 64-bit kernel, or using sane
ulimit values.  Raising the vm.min_free_kbytes sysctl (to perhaps 10x of
its current value on your system) is likely to help the hangs too,
although it will further "consume" lowmem.
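
Sketches of those workarounds (the values are illustrative only):
  ulimit -u 4096                        # bash: cap processes per user
  sysctl -w vm.min_free_kbytes=131072   # raise from its current value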

I appreciate your persistence here, but for a bug with such a specific
use case, and with so many reasonable workarounds, it's not something I
want to dig in to much deeper.  I'll be happy to answer any questions if
you want to go digging deeper, or want some pointers on where to go
looking to fix this properly.



* Re: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-15  2:16 UTC
  To: dave; +Cc: 695182, linux-kernel, linux-mm

Dear Dave,

>> ... What is unacceptable is that PAE crashes or freezes with OOM:
>> it should gracefully handle the issue. Noting that (for a machine
>> with 4GB or under) PAE fails where the HIGHMEM4G kernel succeeds ...
>
> You have found a delta, but you're not really making apples-to-apples
> comparisons.  The page tables ...

I understand that the exact sizes of page tables are very important to
developers. To the rest of us, all that matters is that the kernel moves
them to highmem or swap or whatever, that it maybe emits some error
message but that it does not crash or freeze.

> There's probably a bug here.  But, it's incredibly unlikely to be seen
> in practice on anything resembling a modern system. ...

Probably there is. I found the bug on a very modern, brand-new system,
just trying to copy a few ISO image files and to log in a hundred
students. My machine crashed under those very practical and normal
circumstances. The demos with dd and sleep were just that: easily
reproducible demos.

> ... easily worked around by upgrading to a 64-bit kernel ...

Do you mean that PAE should never be used, and that amd64 should be
used instead?

> ... Raising the vm.min_free_kbytes sysctl (to perhaps 10x of
> its current value on your system) is likely to help the hangs too,
> although it will further "consume" lowmem.

I have tried that; it did not work. As you say, it is backward.

> ... for a bug with ... so many reasonable workarounds ...

Only one workaround was proposed: use amd64.

PAE is buggy and useless, should be deprecated and removed.

Cheers, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: [RFC] Reproducible OOM with just a few sleeps
From: Pavel Machek @ 2013-01-30 12:51 UTC
  To: Dave Hansen; +Cc: paul.szabo, 695182, linux-kernel, linux-mm

Hi!

> > I understand that more RAM leaves less lowmem. What is unacceptable is
> > that PAE crashes or freezes with OOM: it should gracefully handle the
> > issue. Noting that (for a machine with 4GB or under) PAE fails where the
> > HIGHMEM4G kernel succeeds and survives.
> 
> You have found a delta, but you're not really making apples-to-apples
> comparisons.  The page tables (a huge consumer of lowmem in your bug
> reports) have much more overhead on a PAE kernel.  A process with a
> single page faulted in with PAE will take at least 4 pagetable pages
> (it's 7 in practice for me with sleeps).  It's 2 pages minimum (and in
> practice with sleeps) on HIGHMEM4G.
> 
> There's probably a bug here.  But, it's incredibly unlikely to be seen
> in practice on anything resembling a modern system.  The 'sleep' issue
> is easily worked around by upgrading to a 64-bit kernel, or using

Are you saying that HIGHMEM configuration with 4GB ram is not expected
to work?
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: [RFC] Reproducible OOM with just a few sleeps
From: Dave Hansen @ 2013-01-30 15:32 UTC
  To: Pavel Machek; +Cc: paul.szabo, 695182, linux-kernel, linux-mm

On 01/30/2013 04:51 AM, Pavel Machek wrote:
> Are you saying that HIGHMEM configuration with 4GB ram is not expected
> to work?

Not really.

The assertion was that 4GB with no PAE passed a forkbomb test (ooming)
while 4GB of RAM with PAE hung, thus _PAE_ is broken.



* Re: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-30 19:40 UTC
  To: dave, pavel; +Cc: 695182, linux-kernel, linux-mm

Dear Pavel and Dave,

> The assertion was that 4GB with no PAE passed a forkbomb test (ooming)
> while 4GB of RAM with PAE hung, thus _PAE_ is broken.

Yes, PAE is broken. Still, maybe the above needs slight correction:
non-PAE HIGHMEM4G passed the "sleep test": no OOM, nothing unexpected;
whereas PAE OOMed then hung (tested with various RAM from 3GB to 64GB).

The feeling I get is that amd64 is proposed as a drop-in replacement for
PAE, that support and development of PAE is gone, that PAE is dead.

Cheers, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Ben Hutchings @ 2013-01-31  5:15 UTC
  To: paul.szabo, 695182; +Cc: dave, pavel, linux-kernel, linux-mm


On Thu, 2013-01-31 at 06:40 +1100, paul.szabo@sydney.edu.au wrote:
> Dear Pavel and Dave,
> 
> > The assertion was that 4GB with no PAE passed a forkbomb test (ooming)
> > while 4GB of RAM with PAE hung, thus _PAE_ is broken.
> 
> Yes, PAE is broken. Still, maybe the above needs slight correction:
> non-PAE HIGHMEM4G passed the "sleep test": no OOM, nothing unexpected;
> whereas PAE OOMed then hung (tested with various RAM from 3GB to 64GB).
> 
> The feeling I get is that amd64 is proposed as a drop-in replacement for
> PAE, that support and development of PAE is gone, that PAE is dead.

PAE was a stop-gap that kept x86-32 alive on servers until x86-64 came
along (though it was supposed to be ia64...).  That's why I was
surprised you were still trying to run a 32-bit kernel.

The fundamental problem with Linux on 32-bit systems for the past ~10
years has been that RAM sizes have approached and exceeded the 32-bit
virtual address space, so the kernel can't keep it all mapped.

Whenever a task makes a system call the kernel will continue to use the
same virtual memory mappings to access that task's memory, as well as
its own memory.  Which means both of those have to fit within the
virtual address space.  (The alternative of using separate address
spaces is pretty bad for performance - see OS X as an example.  And it
only helps you as far as 4GB RAM.)

The usual split on 32-bit machines is 3GB virtual address space for each
task and 1GB for the kernel.  Part of that 1GB is reserved for memory-
mapped I/O and temporary mappings, and the rest is mapped to the
beginning of RAM (lowmem).  All the remainder of RAM is highmem,
available for allocation by tasks but not for the kernel's private data
(in general).

Switching to PAE does not change the amount of lowmem, but it does make
hardware page table entries (which of course live in lowmem) twice as
big.  This increases the pressure on lowmem a little, which probably
explains the negative result of your 'sleep test'.  However if you then
try to take full advantage of the 64GB range of PAE, as you saw earlier,
the shortage of lowmem relative to highmem becomes completely untenable.
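
In round numbers (a sketch; the exact reserves vary by config):
  echo $(( 1024 - 128 ))     # ~896 MB lowmem after the vmalloc reserve
  # PAE PTEs are 8 bytes instead of 4, so one 4kB pagetable page
  # maps 2MB of address space instead of 4MB:
  echo $(( 4096 / 8 * 4 ))   # 2048 kB mapped per pagetable page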

Ben.

-- 
Ben Hutchings
If more than one person is responsible for a bug, no one is at fault.


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-31  9:07 UTC
  To: 695182, ben; +Cc: dave, linux-kernel, linux-mm, pavel

Dear Ben,

Thanks for the repeated explanations.

> PAE was a stop-gap ...
> ... [PAE] completely untenable.

Is this a good time to withdraw PAE, to tell the world that it does not
work? Maybe you should have had such comments in the code.

Seems that amd64 now works "somewhat": on Debian the linux-image package
is tricky to install, and linux-headers is even harder. Is there work
being done to make this smoother?

---

I am still not convinced by the "lowmem starvation" explanation: because
then PAE should have worked fine on my 3GB machine; maybe I should also
try PAE on my 512MB laptop. - Though, what do I know, have not yet found
the buggy line of code I believe is lurking there...

Thanks, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Ben Hutchings @ 2013-01-31 13:38 UTC
  To: paul.szabo, 695182; +Cc: dave, linux-kernel, linux-mm, pavel


On Thu, 2013-01-31 at 20:07 +1100, paul.szabo@sydney.edu.au wrote:
> Dear Ben,
> 
> Thanks for the repeated explanations.
> 
> > PAE was a stop-gap ...
> > ... [PAE] completely untenable.
> 
> Is this a good time to withdraw PAE, to tell the world that it does not
> work? Maybe you should have had such comments in the code.
> 
> Seems that amd64 now works "somewhat": on Debian the linux-image package
> is tricky to install,

If you do an i386 (userland) installation then you must either select
expert mode to get a choice of kernel packages, or else install the
'amd64' kernel package afterward.
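
Something like this (assuming a wheezy-era system, where the 64-bit
kernel image is shipped in the i386 archive):
  apt-get install linux-image-amd64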

> and linux-headers is even harder.

In what way?

> Is there work being done to make this smoother?
[...]

Debian users are now generally installing a full amd64 (userland and
kernel) installation.  The default installation image linked from
www.debian.org is the 32/64-bit net-installer, which will install amd64
if the system is capable of it.

Based on your experience I might propose to change the automatic kernel
selection for i386 so that we use 'amd64' on a system with >16GB RAM and
a capable processor.

Ben.

-- 
Ben Hutchings
If more than one person is responsible for a bug, no one is at fault.


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-01-31 23:06 UTC
  To: 695182, ben; +Cc: dave, linux-kernel, linux-mm, pavel

Dear Ben,

> Based on your experience I might propose to change the automatic kernel
> selection for i386 so that we use 'amd64' on a system with >16GB RAM and
> a capable processor.

Don't you mean change to amd64 for >4GB (or any RAM), never using PAE?
PAE is broken for any amount of RAM. More precisely, PAE with any RAM
fails the "sleep test":
  n=0; while [ $n -lt 33000 ]; do sleep 600 & ((n=n+1)); done
and with >32GB fails the "write test":
  n=0; while [ $n -lt 99 ]; do dd bs=1M count=1024 if=/dev/zero of=x$n; ((n=n+1)); done
Why do you think 16GB is significant?

Thanks, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Ben Hutchings @ 2013-02-01  1:07 UTC
  To: paul.szabo; +Cc: 695182, dave, linux-kernel, linux-mm, pavel


On Fri, 2013-02-01 at 10:06 +1100, paul.szabo@sydney.edu.au wrote:
> Dear Ben,
> 
> > Based on your experience I might propose to change the automatic kernel
> > selection for i386 so that we use 'amd64' on a system with >16GB RAM and
> > a capable processor.
> 
> Don't you mean change to amd64 for >4GB (or any RAM), never using PAE?
> PAE is broken for any amount of RAM.
[...]

No it isn't.

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-02-01  2:12 UTC
  To: ben; +Cc: 695182, dave, linux-kernel, linux-mm, pavel

Dear Ben,

>> PAE is broken for any amount of RAM.
>
> No it isn't.

Could I please ask you to expand on that?

Thanks, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Ben Hutchings @ 2013-02-01  2:57 UTC
  To: paul.szabo; +Cc: 695182, dave, linux-kernel, linux-mm, pavel


On Fri, 2013-02-01 at 13:12 +1100, paul.szabo@sydney.edu.au wrote:
> Dear Ben,
> 
> >> PAE is broken for any amount of RAM.
> >
> > No it isn't.
> 
> Could I please ask you to expand on that?

I already did, a few messages back.

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: paul.szabo @ 2013-02-01  3:13 UTC
  To: ben; +Cc: 695182, dave, linux-kernel, linux-mm, pavel

Dear Ben,

>>>> PAE is broken for any amount of RAM.
>>> No it isn't.
>> Could I please ask you to expand on that?
>
> I already did, a few messages back.

OK, thanks. Note however that, fewer messages back than that, I said:
  ... PAE with any RAM fails the "sleep test":
  n=0; while [ $n -lt 33000 ]; do sleep 600 & ((n=n+1)); done
and somewhere also said that non-PAE passes. Does not that prove
that PAE is broken?

Cheers, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Phil Turmel @ 2013-02-01  4:38 UTC
  To: paul.szabo; +Cc: ben, 695182, dave, linux-kernel, linux-mm, pavel

On 01/31/2013 10:13 PM, paul.szabo@sydney.edu.au wrote:
> [trim /] Does not that prove that PAE is broken?

Please, Paul, take *yes* for an answer.  It is broken.  You've received
multiple dissertations on why it is going to stay that way.  Unless you
fix it yourself, and everyone seems to be politely wishing you the best
of luck with that.

> Cheers, Paul

Regards,

Phil


* Re: Bug#695182: [RFC] Reproducible OOM with just a few sleeps
From: Pavel Machek @ 2013-02-01 10:20 UTC
  To: Phil Turmel; +Cc: paul.szabo, ben, 695182, dave, linux-kernel, linux-mm

On Thu 2013-01-31 23:38:27, Phil Turmel wrote:
> On 01/31/2013 10:13 PM, paul.szabo@sydney.edu.au wrote:
> > [trim /] Does not that prove that PAE is broken?
> 
> Please, Paul, take *yes* for an answer.  It is broken.  You've received
> multiple dissertations on why it is going to stay that way.  Unless you
> fix it yourself, and everyone seems to be politely wishing you the best
> of luck with that.

It is not Paul's job to fix PAE. It is the job of whoever broke it to
do so.

If it is broken with 2GB of RAM, it is clearly not the known "lowmem
starvation" issue; it is something else... and probably worth
debugging.

So, Paul, if you have time and interest... Try to find some old kernel
version where the sleep test works with PAE. Hopefully there is one.
Then do bisection... the author of the patch should then fix it. (And
if not, at least you have a patch you can revert.)
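
A sketch of that workflow (the version tags are illustrative; verify
the "good" one actually passes the sleep test before trusting it):
  git bisect start
  git bisect bad v3.2       # known to OOM under the sleep test
  git bisect good v2.6.32   # assumed good; test it first
  # build and boot each kernel bisect checks out, run the sleep test,
  # then mark it:  git bisect good   (or: git bisect bad)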

rjw is worth cc-ing at that point.
									Pavel 
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* PAE problems was [RFC] Reproducible OOM with just a few sleeps
From: Pavel Machek @ 2013-02-01 10:25 UTC
  To: Phil Turmel, H. Peter Anvin
  Cc: paul.szabo, ben, dave, linux-kernel, linux-mm, H. Peter Anvin


On Fri 2013-02-01 11:20:44, Pavel Machek wrote:
> On Thu 2013-01-31 23:38:27, Phil Turmel wrote:
> > On 01/31/2013 10:13 PM, paul.szabo@sydney.edu.au wrote:
> > > [trim /] Does not that prove that PAE is broken?
> > 
> > Please, Paul, take *yes* for an answer.  It is broken.  You've received
> > multiple dissertations on why it is going to stay that way.  Unless you
> > fix it yourself, and everyone seems to be politely wishing you the best
> > of luck with that.
> 
> It is not Paul's job to fix PAE. It is the job of whoever broke it to
> do so.
> 
> If it is broken with 2GB of RAM, it is clearly not the known "lowmem
> starvation" issue; it is something else... and probably worth
> debugging.
> 
> So, Paul, if you have time and interest... Try to find some old kernel
> version where the sleep test works with PAE. Hopefully there is one.
> Then do bisection... the author of the patch should then fix it. (And
> if not, at least you have a patch you can revert.)
> 
> rjw is worth cc-ing at that point.

Ouch, and... IIRC (hpa should know for sure), PAE is necessary for
R^X support on x86, thus getting more common, not less. If it does not
work, that's bad news.

Actually, if PAE is known broken, it should probably get marked as
such in Kconfig. That's sure to get some discussion started...
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: PAE problems was [RFC] Reproducible OOM with just a few sleeps
From: H. Peter Anvin @ 2013-02-01 16:57 UTC
  To: Pavel Machek; +Cc: Phil Turmel, paul.szabo, ben, dave, linux-kernel, linux-mm

On 02/01/2013 02:25 AM, Pavel Machek wrote:
>
> On Fri 2013-02-01 11:20:44, Pavel Machek wrote:
>> On Thu 2013-01-31 23:38:27, Phil Turmel wrote:
>>> On 01/31/2013 10:13 PM, paul.szabo@sydney.edu.au wrote:
>>>> [trim /] Does not that prove that PAE is broken?
>>>
>>> Please, Paul, take *yes* for an answer.  It is broken.  You've received
>>> multiple dissertations on why it is going to stay that way, unless you
>>> fix it yourself; and everyone seems to be politely wishing you the best
>>> of luck with that.
>>
>> It is not Paul's job to fix PAE. It is the job of whoever broke it to
>> do so.
>>
>> If it is broken with 2GB of RAM, it is clearly not the known "lowmem
>> starvation" issue, it is something else... and probably worth
>> debugging.
>>
>> So, Paul, if you have time and interest... try to find some old kernel
>> version where the sleep test works with PAE. Hopefully there is one.
>> Then do a bisection... the author of the patch should then fix it.
>> (And if not, at least you have a patch you can revert.)
>>
>> rjw is worth cc-ing at that point.
>
> Ouch, and... IIRC (hpa should know for sure), PAE is necessary for
> R^X support on x86, thus getting more common, not less. If it does not
> work, that's bad news.
>
> Actually, if PAE is known to be broken, it should probably be marked as
> such in Kconfig. That's sure to get some discussion started...
> 									Pavel
>

OK, so by the time this thread gets to me there is of course no 
information in it.

The vast majority of all 32-bit kernels compiled these days are PAE, so 
it would seem rather odd if PAE was totally broken.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: PAE problems was [RFC] Reproducible OOM with just a few sleeps
  2013-02-01 16:57                                   ` H. Peter Anvin
@ 2013-02-01 17:45                                     ` Ben Hutchings
  0 siblings, 0 replies; 26+ messages in thread
From: Ben Hutchings @ 2013-02-01 17:45 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Pavel Machek, Phil Turmel, paul.szabo, dave, linux-kernel, linux-mm

[-- Attachment #1: Type: text/plain, Size: 549 bytes --]

On Fri, 2013-02-01 at 08:57 -0800, H. Peter Anvin wrote:
[...]
> OK, so by the time this thread gets to me there is of course no 
> information in it.

Here's the history: http://thread.gmane.org/gmane.linux.kernel.mm/93278

> The vast majority of all 32-bit kernels compiled these days are PAE, so 
> it would seem rather odd if PAE was totally broken.

Indeed.

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: PAE problems was [RFC] Reproducible OOM with just a few sleeps
  2013-02-01 10:25                                 ` PAE problems was " Pavel Machek
  2013-02-01 16:57                                   ` H. Peter Anvin
@ 2013-02-07  0:28                                   ` Dave Hansen
  2013-02-10 19:09                                     ` Pavel Machek
  1 sibling, 1 reply; 26+ messages in thread
From: Dave Hansen @ 2013-02-07  0:28 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Phil Turmel, H. Peter Anvin, paul.szabo, ben, linux-kernel, linux-mm

On 02/01/2013 02:25 AM, Pavel Machek wrote:
> Ouch, and... IIRC (hpa should know for sure), PAE is necessary for
> R^X support on x86, thus getting more common, not less. If it does not
> work, that's bad news.

Dare I ask what "R^X" is?


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: PAE problems was [RFC] Reproducible OOM with just a few sleeps
  2013-02-07  0:28                                   ` Dave Hansen
@ 2013-02-10 19:09                                     ` Pavel Machek
  0 siblings, 0 replies; 26+ messages in thread
From: Pavel Machek @ 2013-02-10 19:09 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Phil Turmel, H. Peter Anvin, paul.szabo, ben, linux-kernel, linux-mm

On Wed 2013-02-06 16:28:08, Dave Hansen wrote:
> On 02/01/2013 02:25 AM, Pavel Machek wrote:
> > Ouch, and... IIRC (hpa should know for sure), PAE is necessary for
> > R^X support on x86, thus getting more common, not less. If it does not
> > work, that's bad news.
> 
> Dare I ask what "R^X" is?

Read xor Execute, aka NX... support for pages that are readable but not
executable. Useful for making exploits harder, IIRC.
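
A quick way to check whether a given kernel/CPU actually has NX active
(a sketch; the dmesg wording is what x86 kernels of this era print):

  grep -w nx /proc/cpuinfo          # CPU advertises the NX bit
  dmesg | grep -i 'nx.*protection'  # "NX (Execute Disable) protection: active"
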
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] Reproducible OOM with just a few sleeps
  2013-01-14 15:00 ` Dave Hansen
  2013-01-14 20:36   ` paul.szabo
@ 2013-02-17  9:10   ` Simon Jeons
  2013-02-24 22:10     ` paul.szabo
  1 sibling, 1 reply; 26+ messages in thread
From: Simon Jeons @ 2013-02-17  9:10 UTC (permalink / raw)
  To: Dave Hansen; +Cc: paul.szabo, linux-mm, 695182, linux-kernel

On 01/14/2013 11:00 PM, Dave Hansen wrote:
> On 01/11/2013 07:31 PM, paul.szabo@sydney.edu.au wrote:
>> Seems that any i386 PAE machine will go OOM just by running a few
>> processes. To reproduce:
>>    sh -c 'n=0; while [ $n -lt 19999 ]; do sleep 600 & ((n=n+1)); done'
>> My machine has 64GB RAM. With previous OOM episodes, it seemed that
>> running (booting) it with mem=32G might avoid OOM; but an OOM was
>> obtained just the same, and also with lower memory:
>>    Memory    sleeps to OOM       free shows total
>>    (mem=64G)  5300               64447796
>>    mem=32G   10200               31155512
>>    mem=16G   13400               14509364
>>    mem=8G    14200               6186296
>>    mem=6G    15200               4105532
>>    mem=4G    16400               2041364
>> The machine does not run out of highmem, nor does it use any swap.
> I think what you're seeing here is that, as the amount of total memory
> increases, the amount of lowmem available _decreases_ due to inflation
> of mem_map[] (and a few other more minor things).  The number of sleeps

So if he configures sparse memory, the issue can be solved, I think.

> you can do is bound by the number of processes, as you noticed from
> ulimit.  Creating processes that don't use much memory eats a relatively
> large amount of low memory.
>
> This is a sad (and counterintuitive) fact: more RAM actually *CREATES*
> RAM bottlenecks on 32-bit systems.
>
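
For scale, a back-of-the-envelope sketch of that mem_map[] inflation
(assuming a 32-byte struct page and 4 KiB page frames, roughly right
for 32-bit kernels of this era):

  for gb in 4 16 64; do
    pages=$(( gb * 1024 * 1024 / 4 ))    # number of 4 KiB frames in ${gb} GiB
    mb=$(( pages * 32 / 1024 / 1024 ))   # mem_map[] size in MiB
    echo "mem=${gb}G: mem_map ~${mb} MiB of ~896 MiB lowmem"
  done

That gives roughly 32, 128 and 512 MiB respectively, so at mem=64G over
half of lowmem is gone before a single process starts; each sleeping
process then pins several more KiB of lowmem (kernel stack, task_struct
and, depending on CONFIG_HIGHPTE, page tables). This is consistent with
the Low total in the free output quoted below.
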
>> On my large machine, 'free' fails to show about 2GB memory, e.g. with
>> mem=16G it shows:
>>
>> root@zeno:~# free -l
>>               total       used       free     shared    buffers     cached
>> Mem:      14509364     435440   14073924          0       4068     111328
>> Low:        769044     120232     648812
>> High:     13740320     315208   13425112
>> -/+ buffers/cache:     320044   14189320
>> Swap:    134217724          0  134217724
> You probably have a memory hole.  mem=16G means "give me all the memory
> below the physical address at 16GB".  It does *NOT* mean, "give me
> enough memory such that 'free' will show ~16G available."  If you have a
> 1.5GB hole below 16GB, and you do mem=16G, you'll end up with ~14.5GB
> available.
>
> The e820 map (during early boot in dmesg) or /proc/iomem will let you
> locate your memory holes.

Dear Dave, two questions here:

1) The e820 map is read from the BIOS, correct? So are all of the
ranges dumped from /proc/iomem set up by the BIOS?
2) Only the "System RAM" ranges dumped from /proc/iomem can be treated
as real memory, and all other ranges can be treated as holes, correct?
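
For anyone wanting to check this concretely, a sketch along the lines
Dave suggests (standard dmesg/procfs locations):

  dmesg | grep -i e820            # BIOS-provided memory map, as logged at boot
  grep 'System RAM' /proc/iomem   # usable RAM ranges; everything between
                                  # them (reserved, ACPI, PCI...) is a hole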



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] Reproducible OOM with just a few sleeps
  2013-02-17  9:10   ` Simon Jeons
@ 2013-02-24 22:10     ` paul.szabo
  0 siblings, 0 replies; 26+ messages in thread
From: paul.szabo @ 2013-02-24 22:10 UTC (permalink / raw)
  To: dave, simon.jeons; +Cc: 695182, linux-kernel, linux-mm

Dear Simon,

> So if he config sparse memory, the issue can be solved I think.

In my config file I have:

CONFIG_HAVE_SPARSE_IRQ=y
CONFIG_SPARSE_IRQ=y
CONFIG_ARCH_SPARSEMEM_ENABLE=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_SPARSEMEM_STATIC=y
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_SPARSE_RCU_POINTER is not set

Is that sufficient for sparse memory, or should I try something else?
Or maybe you meant that some kernel source patches might be possible in
the sparse memory code?
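
One way to check which memory model the build actually uses (a sketch;
symbol and menu names as on x86 kernels of this era):

  grep -E 'CONFIG_(FLATMEM|SPARSEMEM)=y' .config   # the model compiled in
  # To change it: make menuconfig ->
  #   Processor type and features -> Memory model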

Thanks, Paul

Paul Szabo   psz@maths.usyd.edu.au   http://www.maths.usyd.edu.au/u/psz/
School of Mathematics and Statistics   University of Sydney    Australia

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2013-02-24 22:10 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-12  3:31 [RFC] Reproducible OOM with just a few sleeps paul.szabo
2013-01-14 15:00 ` Dave Hansen
2013-01-14 20:36   ` paul.szabo
2013-01-15  0:34     ` Bug#695182: " Ben Hutchings
2013-01-15  0:56     ` Dave Hansen
2013-01-15  2:16       ` paul.szabo
2013-01-30 12:51       ` Pavel Machek
2013-01-30 15:32         ` Dave Hansen
2013-01-30 19:40           ` paul.szabo
2013-01-31  5:15             ` Bug#695182: " Ben Hutchings
2013-01-31  9:07               ` paul.szabo
2013-01-31 13:38                 ` Ben Hutchings
2013-01-31 23:06                   ` paul.szabo
2013-02-01  1:07                     ` Ben Hutchings
2013-02-01  2:12                       ` paul.szabo
2013-02-01  2:57                         ` Ben Hutchings
2013-02-01  3:13                           ` paul.szabo
2013-02-01  4:38                             ` Phil Turmel
2013-02-01 10:20                               ` Pavel Machek
2013-02-01 10:25                                 ` PAE problems was " Pavel Machek
2013-02-01 16:57                                   ` H. Peter Anvin
2013-02-01 17:45                                     ` Ben Hutchings
2013-02-07  0:28                                   ` Dave Hansen
2013-02-10 19:09                                     ` Pavel Machek
2013-02-17  9:10   ` Simon Jeons
2013-02-24 22:10     ` paul.szabo
