linux-kernel.vger.kernel.org archive mirror
* Re: oom-killer
       [not found] <CACDBo54Jbueeq1XbtbrFOeOEyF-Q4ipZJab8mB7+0cyK1Foqyw@mail.gmail.com>
@ 2019-08-05 11:24 ` Michal Hocko
  2019-08-05 11:56   ` oom-killer Vlastimil Babka
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2019-08-05 11:24 UTC (permalink / raw)
  To: Pankaj Suryawanshi; +Cc: linux-kernel, linux-mm, pankaj.suryawanshi

On Sat 03-08-19 18:53:50, Pankaj Suryawanshi wrote:
> Hello,
> 
> Below are the logs from the oom-killer. I am not able to interpret/decode
> the logs, nor to find the root cause of the oom-killer invocation.
> 
> Note: CPU Arch: ARM 32-bit, Kernel: 4.14.65

Fixed up line wrapping and trimmed to the bare minimum

> [  727.941258] kworker/u8:2 invoked oom-killer: gfp_mask=0x15080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null),  order=1, oom_score_adj=0

This tells us that this is an order-1 (two physically contiguous pages)
request restricted to GFP_KERNEL (GFP_KERNEL_ACCOUNT is GFP_KERNEL |
__GFP_ACCOUNT), which means that the request can be satisfied only
from the low memory zone. This is important because you are running a
32-bit system, so only the low 1G is directly addressable by the
kernel. The rest is in highmem.
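For reference, the mask can be decoded by hand. A quick sketch, with bit
values taken from include/linux/gfp.h as of 4.14 (these change between
kernel releases, so always check the matching tree):

```python
# Decode the oom-killer's gfp_mask by hand. Bit values are from
# include/linux/gfp.h in kernel 4.14; other versions differ.
GFP_BITS = {
    0x40:      "__GFP_IO",
    0x80:      "__GFP_FS",
    0x8000:    "__GFP_ZERO",
    0x100000:  "__GFP_ACCOUNT",
    0x400000:  "__GFP_DIRECT_RECLAIM",
    0x1000000: "__GFP_KSWAPD_RECLAIM",
}

def decode_gfp(mask):
    """Return the names of the known flag bits set in mask."""
    return sorted(name for bit, name in GFP_BITS.items() if mask & bit)

# GFP_KERNEL = __GFP_RECLAIM | __GFP_IO | __GFP_FS, where __GFP_RECLAIM
# is __GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM; adding __GFP_ACCOUNT
# gives GFP_KERNEL_ACCOUNT, and the report also has __GFP_ZERO.
flags = decode_gfp(0x15080C0)
print(flags)
```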

> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
[...]
> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c
> [  728.045392]  r4:d5737080
> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> [  728.079587]  r4:d1063c00
> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> [  728.097857]  r4:00808111

The call trace tells us that this is a fork (of a usermode helper, but
that is not all that important).
[...]
> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> [  728.287402] lowmem_reserve[]: 0 0 579 579

So this is the only usable zone and you are close to the min watermark,
which means that your system is under serious memory pressure but not
yet under OOM for an order-0 request. The situation is not great though,
because there is close to no reclaimable memory (look at the *_anon and
*_file counters) and it is quite likely that compaction will stumble
over unmovable pages very often as well.

> [  728.326634] DMA: 71*4kB (EH) 113*8kB (UH) 207*16kB (UMH) 103*32kB (UMH) 70*64kB (UMH) 27*128kB (UMH) 5*256kB (UMH) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 17524kB

This is more interesting because there seem to be order-1+ blocks
available for this allocation. H stands for the high atomic reserve, U
for unmovable blocks (GFP_KERNEL allocations belong to that migration
type) and M for movable pageblocks (see show_migration_types for all
migration types). From the above, the allocation should go through, but
note that this information is dumped after the last watermark check, so
the situation might have changed.
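The per-order accounting can be checked by hand. A quick sketch that
parses the DMA line above (the migratetype letters are my reading of
show_migration_types(): U=unmovable, M=movable, E=reclaimable,
H=highatomic, C=CMA, I=isolate):

```python
import re

# The per-order free list line from the oom report; each entry is
# "count*sizekB (migratetypes)".
line = ("71*4kB (EH) 113*8kB (UH) 207*16kB (UMH) 103*32kB (UMH) 70*64kB (UMH) "
        "27*128kB (UMH) 5*256kB (UMH) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB "
        "0*8192kB 0*16384kB")

entries = [(int(c), int(s)) for c, s in re.findall(r"(\d+)\*(\d+)kB", line)]
total_kb = sum(c * s for c, s in entries)
# order-1 on a 4kB-page system means 8kB blocks; anything >= 8kB could in
# principle satisfy the order-1 request (larger blocks after splitting).
order1_plus_kb = sum(c * s for c, s in entries if s >= 8)
print(total_kb, order1_plus_kb)
```

The totals confirm the report's 17524kB sum and that most of the free
memory sits in order-1+ blocks.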

In any case your system seems to be tight on lowmem and I would expect
it could get to OOM during a peak memory demand on top of the current
state.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 11:24 ` oom-killer Michal Hocko
@ 2019-08-05 11:56   ` Vlastimil Babka
  2019-08-05 12:05     ` oom-killer Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Vlastimil Babka @ 2019-08-05 11:56 UTC (permalink / raw)
  To: Michal Hocko, Pankaj Suryawanshi
  Cc: linux-kernel, linux-mm, pankaj.suryawanshi

On 8/5/19 1:24 PM, Michal Hocko wrote:
>> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> [...]
>> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
>> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
>> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
>> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
>> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
>> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
>> [  728.079587]  r4:d1063c00
>> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
>> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
>> [  728.097857]  r4:00808111
> 
> The call trace tells that this is a fork (of a usermodhlper but that is
> not all that important.
> [...]
>> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
>> [  728.287402] lowmem_reserve[]: 0 0 579 579
> 
> So this is the only usable zone and you are close to the min watermark
> which means that your system is under a serious memory pressure but not
> yet under OOM for order-0 request. The situation is not great though

Looking at the lowmem_reserve above, I wonder if the 579 applies here?
What does /proc/zoneinfo say?

> because there is close to no reclaimable memory (look at *_anon, *_file)
> counters and it is quite likely that compaction will stubmle over
> unmovable pages very often as well.
> 
>> [  728.326634] DMA: 71*4kB (EH) 113*8kB (UH) 207*16kB (UMH) 103*32kB (UMH) 70*64kB (UMH) 27*128kB (UMH) 5*256kB (UMH) 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 17524kB
> 
> This is more interesting because there seem to be order-1+ blocks to
> be used for this allocation. H stands for High atomic reserve, U for
> unmovable blocks and GFP_KERNEL belong to such an allocation and M is
> for movable pageblock (see show_migration_types for all migration
> types). From the above it would mean that the allocation should pass
> through but note that the information is dumped after the last watermark
> check so the situation might have changed.
> 
> In any case your system seems to be tight on the lowmem and I would
> expect it could get to OOM in peak memory demand on top of the current
> state.
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 11:56   ` oom-killer Vlastimil Babka
@ 2019-08-05 12:05     ` Michal Hocko
  2019-08-05 15:34       ` oom-killer Pankaj Suryawanshi
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2019-08-05 12:05 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Pankaj Suryawanshi, linux-kernel, linux-mm, pankaj.suryawanshi

On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> On 8/5/19 1:24 PM, Michal Hocko wrote:
> >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > [...]
> >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
> >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> >> [  728.079587]  r4:d1063c00
> >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> >> [  728.097857]  r4:00808111
> > 
> > The call trace tells that this is a fork (of a usermodhlper but that is
> > not all that important.
> > [...]
> >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > 
> > So this is the only usable zone and you are close to the min watermark
> > which means that your system is under a serious memory pressure but not
> > yet under OOM for order-0 request. The situation is not great though
> 
> Looking at lowmem_reserve above, wonder if 579 applies here? What does
> /proc/zoneinfo say?

This is essentially a GFP_KERNEL request, so there shouldn't be any
lowmem reserve applied here, no?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 12:05     ` oom-killer Michal Hocko
@ 2019-08-05 15:34       ` Pankaj Suryawanshi
  2019-08-05 20:16         ` oom-killer Michal Hocko
  2019-08-06 10:04         ` oom-killer Vlastimil Babka
  0 siblings, 2 replies; 11+ messages in thread
From: Pankaj Suryawanshi @ 2019-08-05 15:34 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > [...]
> > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
> > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > >> [  728.079587]  r4:d1063c00
> > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > >> [  728.097857]  r4:00808111
> > >
> > > The call trace tells that this is a fork (of a usermodhlper but that is
> > > not all that important.
> > > [...]
> > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > >
> > > So this is the only usable zone and you are close to the min watermark
> > > which means that your system is under a serious memory pressure but not
> > > yet under OOM for order-0 request. The situation is not great though
> >
> > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > /proc/zoneinfo say?


What is  lowmem_reserve[]: 0 0 579 579 ?

$cat /proc/sys/vm/lowmem_reserve_ratio
256     32      32

$cat /proc/sys/vm/min_free_kbytes
16384

Here is cat /proc/zoneinfo (in a normal situation, not during OOM):

$cat /proc/zoneinfo
Node 0, zone      DMA
  per-node stats
      nr_inactive_anon 120
      nr_active_anon 94870
      nr_inactive_file 101188
      nr_active_file 74656
      nr_unevictable 614
      nr_slab_reclaimable 12489
      nr_slab_unreclaimable 8519
      nr_isolated_anon 0
      nr_isolated_file 0
      workingset_refault 7163
      workingset_activate 7163
      workingset_nodereclaim 0
      nr_anon_pages 94953
      nr_mapped    109148
      nr_file_pages 176502
      nr_dirty     0
      nr_writeback 0
      nr_writeback_temp 0
      nr_shmem     166
      nr_shmem_hugepages 0
      nr_shmem_pmdmapped 0
      nr_anon_transparent_hugepages 0
      nr_unstable  0
      nr_vmscan_write 0
      nr_vmscan_immediate_reclaim 0
      nr_dirtied   7701
      nr_written   6978
  pages free     49492
        min      4096
        low      6416
        high     7440
        spanned  131072
        present  114688
        managed  105724
        protection: (0, 0, 1491, 1491)
      nr_free_pages 49492
      nr_zone_inactive_anon 0
      nr_zone_active_anon 0
      nr_zone_inactive_file 65
      nr_zone_active_file 4859
      nr_zone_unevictable 0
      nr_zone_write_pending 0
      nr_mlock     0
      nr_page_table_pages 4352
      nr_kernel_stack 9056
      nr_bounce    0
      nr_zspages   0
      nr_free_cma  0
  pagesets
    cpu: 0
              count: 16
              high:  186
              batch: 31
  vm stats threshold: 18
    cpu: 1
              count: 138
              high:  186
              batch: 31
  vm stats threshold: 18
    cpu: 2
              count: 156
              high:  186
              batch: 31
  vm stats threshold: 18
    cpu: 3
              count: 170
              high:  186
              batch: 31
  vm stats threshold: 18
  node_unreclaimable:  0
  start_pfn:           131072
  node_inactive_ratio: 0
Node 0, zone   Normal
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 11928, 11928)
Node 0, zone  HighMem
  pages free     63096
        min      128
        low      8506
        high     12202
        spanned  393216
        present  381696
        managed  381696
        protection: (0, 0, 0, 0)
      nr_free_pages 63096
      nr_zone_inactive_anon 120
      nr_zone_active_anon 94863
      nr_zone_inactive_file 101123
      nr_zone_active_file 69797
      nr_zone_unevictable 614
      nr_zone_write_pending 0
      nr_mlock     614
      nr_page_table_pages 1478
      nr_kernel_stack 0
      nr_bounce    0
      nr_zspages   0
      nr_free_cma  62429
  pagesets
    cpu: 0
              count: 30
              high:  186
              batch: 31
  vm stats threshold: 30
    cpu: 1
              count: 13
              high:  186
              batch: 31
  vm stats threshold: 30
    cpu: 2
              count: 9
              high:  186
              batch: 31
  vm stats threshold: 30
    cpu: 3
              count: 46
              high:  186
              batch: 31
  vm stats threshold: 30
  node_unreclaimable:  0
  start_pfn:           262144
  node_inactive_ratio: 0
Node 0, zone  Movable
  pages free     0
        min      32
        low      32
        high     32
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0, 0)
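
As an aside, the "protection:" arrays in the zoneinfo above can be
reproduced from the lowmem_reserve_ratio sysctl posted earlier. A
simplified sketch of setup_per_zone_lowmem_reserve() (mm/page_alloc.c),
plugged with the managed page counts from this dump:

```python
# Zone i's reserve against allocations that could also use zone j is the
# managed pages of zones i+1..j summed, divided by lowmem_reserve_ratio[i].
managed = {"DMA": 105724, "Normal": 0, "HighMem": 381696, "Movable": 0}
ratio = {"DMA": 256, "Normal": 32, "HighMem": 32}  # the sysctl shown above

zones = list(managed)  # index order: DMA, Normal, HighMem, Movable

def protection(zone):
    i = zones.index(zone)
    return [sum(managed[zones[k]] for k in range(i + 1, j + 1)) // ratio[zone]
            for j in range(len(zones))]

print(protection("DMA"))     # matches "protection: (0, 0, 1491, 1491)"
print(protection("Normal"))  # matches "protection: (0, 0, 11928, 11928)"
```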
>
>
> This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> reserve here, no?


Why is only the low 1G accessible by the kernel on a 32-bit system?


My system configuration is:
3G/1G vmsplit
vmalloc = 480M (I think the vmalloc size will set your highmem?)

here is my memory layout :-
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
[    0.000000]     vmalloc : 0xe0800000 - 0xff800000   ( 496 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xe0000000   ( 512 MB)
[    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
[    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
[    0.000000]       .text : 0xc0008000 - 0xc0c00000   (12256 kB)
[    0.000000]       .init : 0xc1000000 - 0xc1200000   (2048 kB)
[    0.000000]       .data : 0xc1200000 - 0xc143c760   (2290 kB)
[    0.000000]        .bss : 0xc1447840 - 0xc14c3ad4   ( 497 kB)
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 15:34       ` oom-killer Pankaj Suryawanshi
@ 2019-08-05 20:16         ` Michal Hocko
  2019-08-06 14:54           ` oom-killer Pankaj Suryawanshi
  2019-08-06 10:04         ` oom-killer Vlastimil Babka
  1 sibling, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2019-08-05 20:16 UTC (permalink / raw)
  To: Pankaj Suryawanshi
  Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > > [...]
> > > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
> > > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > >> [  728.079587]  r4:d1063c00
> > > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > >> [  728.097857]  r4:00808111
> > > >
> > > > The call trace tells that this is a fork (of a usermodhlper but that is
> > > > not all that important.
> > > > [...]
> > > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > > >
> > > > So this is the only usable zone and you are close to the min watermark
> > > > which means that your system is under a serious memory pressure but not
> > > > yet under OOM for order-0 request. The situation is not great though
> > >
> > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > /proc/zoneinfo say?
> 
> 
> What is  lowmem_reserve[]: 0 0 579 579 ?

This controls how much of memory from a lower zone you might an
allocation request for a higher zone consume. E.g. __GFP_HIGHMEM is
allowed to use both lowmem and highmem zones. It is preferable to use
highmem zone because other requests are not allowed to use it.

Please see __zone_watermark_ok for more details.
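
Roughly, the core of that check looks like the following sketch
(simplified from __zone_watermark_ok() in mm/page_alloc.c as of 4.14;
the real function also subtracts reserved/CMA pages and, for order > 0,
scans the free areas for a block of a usable migratetype), plugged with
the DMA zone numbers from the report:

```python
PAGE_KB = 4  # 4kB pages on this ARM system

def zone_watermark_ok(free_kb, mark_kb, lowmem_reserve_pages, order):
    """Sketch of __zone_watermark_ok(): the zone passes if, after
    discounting the pages the request itself would consume, the free
    pages stay above mark + lowmem_reserve[classzone_idx]."""
    free_pages = free_kb // PAGE_KB - ((1 << order) - 1)
    mark_pages = mark_kb // PAGE_KB
    return free_pages > mark_pages + lowmem_reserve_pages

# DMA zone in the report: free 17960kB, min 16384kB. For a GFP_KERNEL
# request the classzone is the DMA zone itself, so the reserve is 0
# (the first entry of "lowmem_reserve[]: 0 0 579 579").
print(zone_watermark_ok(17960, 16384, 0, 1))    # True - passes, barely
# A request whose classzone were HighMem would add the 579-page reserve:
print(zone_watermark_ok(17960, 16384, 579, 1))  # False
```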

> > This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> > reserve here, no?
> 
> 
> Why only low 1G is accessible by kernel in 32-bit system ?

https://www.kernel.org/doc/gorman/, https://lwn.net/Articles/75174/
and many more articles. In short, the 32-bit virtual address space is
quite small and it has to cover both the user space and the kernel.
That is why we split it into 3G reserved for userspace and 1G for the
kernel. The kernel can only access its 1G portion directly; everything
else has to be mapped explicitly (e.g. while data is copied).

> My system configuration is :-
> 3G/1G - vmsplit
> vmalloc = 480M (I think vmalloc size will set your highmem ?)

No, vmalloc is part of the 1GB kernel address space.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 15:34       ` oom-killer Pankaj Suryawanshi
  2019-08-05 20:16         ` oom-killer Michal Hocko
@ 2019-08-06 10:04         ` Vlastimil Babka
       [not found]           ` <CACDBo57Yjuc69GX+V7w_efSHPkpeU3D9RUr0TEd64oUTi4o8Ag@mail.gmail.com>
  1 sibling, 1 reply; 11+ messages in thread
From: Vlastimil Babka @ 2019-08-06 10:04 UTC (permalink / raw)
  To: Pankaj Suryawanshi, Michal Hocko
  Cc: linux-kernel, linux-mm, pankaj.suryawanshi

On 8/5/19 5:34 PM, Pankaj Suryawanshi wrote:
> On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
>>
>> On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
>> > On 8/5/19 1:24 PM, Michal Hocko wrote:
>> > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
>> > > [...]
>> > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
>> > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
>> > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
>> > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
>> > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
>> > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
>> > >> [  728.079587]  r4:d1063c00
>> > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
>> > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
>> > >> [  728.097857]  r4:00808111
>> > >
>> > > The call trace tells that this is a fork (of a usermodhlper but that is
>> > > not all that important.
>> > > [...]
>> > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
>> > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
>> > >
>> > > So this is the only usable zone and you are close to the min watermark
>> > > which means that your system is under a serious memory pressure but not
>> > > yet under OOM for order-0 request. The situation is not great though
>> >
>> > Looking at lowmem_reserve above, wonder if 579 applies here? What does
>> > /proc/zoneinfo say?
> 
> 
> What is  lowmem_reserve[]: 0 0 579 579 ?
> 
> $cat /proc/sys/vm/lowmem_reserve_ratio
> 256     32      32
> 
> $cat /proc/sys/vm/min_free_kbytes
> 16384
> 
> here is cat /proc/zoneinfo (in normal situation not when oom)

Thanks, that shows the lowmem reserve was indeed 0 for the GFP_KERNEL allocation
checking watermarks in the DMA zone. The zone was probably genuinely below min
watermark when the check happened, and things changed while the allocation
failure was printing memory info.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-05 20:16         ` oom-killer Michal Hocko
@ 2019-08-06 14:54           ` Pankaj Suryawanshi
  2019-08-06 15:07             ` oom-killer Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Pankaj Suryawanshi @ 2019-08-06 14:54 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Tue, 6 Aug, 2019, 1:46 AM Michal Hocko, <mhocko@kernel.org> wrote:
>
> On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> > On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > > > [...]
> > > > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
> > > > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > > > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > > >> [  728.079587]  r4:d1063c00
> > > > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > > >> [  728.097857]  r4:00808111
> > > > >
> > > > > The call trace tells that this is a fork (of a usermodhlper but that is
> > > > > not all that important.
> > > > > [...]
> > > > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > > > >
> > > > > So this is the only usable zone and you are close to the min watermark
> > > > > which means that your system is under a serious memory pressure but not
> > > > > yet under OOM for order-0 request. The situation is not great though
> > > >
> > > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > > /proc/zoneinfo say?
> >
> >
> > What is  lowmem_reserve[]: 0 0 579 579 ?
>
> This controls how much of memory from a lower zone you might an
> allocation request for a higher zone consume. E.g. __GFP_HIGHMEM is
> allowed to use both lowmem and highmem zones. It is preferable to use
> highmem zone because other requests are not allowed to use it.
>
> Please see __zone_watermark_ok for more details.
>
>
> > > This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> > > reserve here, no?
> >
> >
> > Why only low 1G is accessible by kernel in 32-bit system ?


Is that 1G virtual or physical memory (I have 2 GB of RAM)?
>
>
> https://www.kernel.org/doc/gorman/, https://lwn.net/Articles/75174/
> and many more articles. In very short, the 32b virtual address space
> is quite small and it has to cover both the users space and the
> kernel. That is why we do split it into 3G reserved for userspace and 1G
> for kernel. Kernel can only access its 1G portion directly everything
> else has to be mapped explicitly (e.g. while data is copied).
Thanks Michal.


>
> > My system configuration is :-
> > 3G/1G - vmsplit
> > vmalloc = 480M (I think vmalloc size will set your highmem ?)
>
> No, vmalloc is part of the 1GB kernel adress space.

I read in one article that the vmalloc end is fixed; if you increase
the vmalloc size, does it decrease highmem?
Total = lowmem + (vmalloc + highmem)
>
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-06 14:54           ` oom-killer Pankaj Suryawanshi
@ 2019-08-06 15:07             ` Michal Hocko
  2019-08-06 15:09               ` oom-killer Pankaj Suryawanshi
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2019-08-06 15:07 UTC (permalink / raw)
  To: Pankaj Suryawanshi
  Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Tue 06-08-19 20:24:03, Pankaj Suryawanshi wrote:
> On Tue, 6 Aug, 2019, 1:46 AM Michal Hocko, <mhocko@kernel.org> wrote:
> >
> > On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> > > On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > >
> > > > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > > > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > > > > [...]
> > > > > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > > > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c > [  728.045392]  r4:d5737080
> > > > > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > > > > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > > > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > > > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > > > >> [  728.079587]  r4:d1063c00
> > > > > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > > > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > > > >> [  728.097857]  r4:00808111
> > > > > >
> > > > > > The call trace tells that this is a fork (of a usermodhlper but that is
> > > > > > not all that important.
> > > > > > [...]
> > > > > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > > > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > > > > >
> > > > > > So this is the only usable zone and you are close to the min watermark
> > > > > > which means that your system is under a serious memory pressure but not
> > > > > > yet under OOM for order-0 request. The situation is not great though
> > > > >
> > > > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > > > /proc/zoneinfo say?
> > >
> > >
> > > What is  lowmem_reserve[]: 0 0 579 579 ?
> >
> > This controls how much of memory from a lower zone you might an
> > allocation request for a higher zone consume. E.g. __GFP_HIGHMEM is
> > allowed to use both lowmem and highmem zones. It is preferable to use
> > highmem zone because other requests are not allowed to use it.
> >
> > Please see __zone_watermark_ok for more details.
> >
> >
> > > > This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> > > > reserve here, no?
> > >
> > >
> > > Why only low 1G is accessible by kernel in 32-bit system ?
> 
> 
> 1G ivirtual or physical memory (I have 2GB of RAM) ?

virtual

> > https://www.kernel.org/doc/gorman/, https://lwn.net/Articles/75174/
> > and many more articles. In very short, the 32b virtual address space
> > is quite small and it has to cover both the users space and the
> > kernel. That is why we do split it into 3G reserved for userspace and 1G
> > for kernel. Kernel can only access its 1G portion directly everything
> > else has to be mapped explicitly (e.g. while data is copied).
> > Thanks Michal.
> 
> 
> >
> > > My system configuration is :-
> > > 3G/1G - vmsplit
> > > vmalloc = 480M (I think vmalloc size will set your highmem ?)
> >
> > No, vmalloc is part of the 1GB kernel adress space.
> 
> I read in one article , vmalloc end is fixed if you increase vmalloc
> size it decrease highmem. ?
> Total = lowmem + (vmalloc + high mem)

As the kernel uses the vmalloc area _directly_, it has to be part of
the kernel address space - thus reducing lowmem.
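
To make that concrete with the layout posted earlier (a rough sketch; it
ignores the small fixmap/pkmap/modules areas and any firmware/CMA
carve-outs, which is why the zoneinfo shows somewhat less highmem than
this):

```python
MB = 1024 * 1024

# Addresses from the "Virtual kernel memory layout" posted in this thread.
lowmem_start, lowmem_end = 0xC0000000, 0xE0000000    # direct-mapped RAM
vmalloc_start, vmalloc_end = 0xE0800000, 0xFF800000  # vmalloc area

lowmem_mb = (lowmem_end - lowmem_start) // MB    # direct map size
vmalloc_mb = (vmalloc_end - vmalloc_start) // MB # carved out of the 1G

ram_mb = 2048                    # 2 GB of physical RAM on this board
highmem_mb = ram_mb - lowmem_mb  # RAM that is not directly mapped

print(lowmem_mb, vmalloc_mb, highmem_mb)
```

With a bigger vmalloc area the direct map shrinks, and every page of RAM
that no longer fits in the direct map becomes highmem.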
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-06 15:07             ` oom-killer Michal Hocko
@ 2019-08-06 15:09               ` Pankaj Suryawanshi
  2019-08-06 15:12                 ` oom-killer Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Pankaj Suryawanshi @ 2019-08-06 15:09 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Tue, Aug 6, 2019 at 8:37 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Tue 06-08-19 20:24:03, Pankaj Suryawanshi wrote:
> > On Tue, 6 Aug, 2019, 1:46 AM Michal Hocko, <mhocko@kernel.org> wrote:
> > >
> > > On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> > > > On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > > >
> > > > > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > > > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > > > > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > > > > > [...]
> > > > > > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > > > > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c
> > > > > > >> [  728.045392]  r4:d5737080
> > > > > > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > > > > > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > > > > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > > > > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > > > > >> [  728.079587]  r4:d1063c00
> > > > > > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > > > > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > > > > >> [  728.097857]  r4:00808111
> > > > > > >
> > > > > > > The call trace tells us that this is a fork (of a usermodehelper, but
> > > > > > > that is not all that important).
> > > > > > > [...]
> > > > > > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > > > > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > > > > > >
> > > > > > > So this is the only usable zone and you are close to the min watermark
> > > > > > > which means that your system is under serious memory pressure but not
> > > > > > > yet OOM for an order-0 request. The situation is not great though
> > > > > >
> > > > > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > > > > /proc/zoneinfo say?
> > > >
> > > >
> > > > What is lowmem_reserve[]: 0 0 579 579?
> > >
> > > This controls how much memory from a lower zone an allocation request
> > > targeting a higher zone may consume. E.g. a __GFP_HIGHMEM request is
> > > allowed to use both the lowmem and highmem zones. It is preferable to use
> > > the highmem zone because other requests are not allowed to use it.
> > >
> > > Please see __zone_watermark_ok for more details.
> > >
> > >
> > > > > This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> > > > > reserve here, no?
> > > >
> > > >
> > > > Why only low 1G is accessible by kernel in 32-bit system ?
> >
> >
> > 1G virtual or physical memory (I have 2GB of RAM)?
>
> virtual
>
 I have set 2G/2G - will it still work?

>
> > > https://www.kernel.org/doc/gorman/, https://lwn.net/Articles/75174/
> > > and many more articles. In very short, the 32b virtual address space
> > > is quite small and it has to cover both the users space and the
> > > kernel. That is why we split it into 3G reserved for userspace and 1G
> > > for the kernel. The kernel can only access its 1G portion directly;
> > > everything else has to be mapped explicitly (e.g. while data is copied).
> > > Thanks Michal.
> >
> >
> > >
> > > > My system configuration is :-
> > > > 3G/1G - vmsplit
> > > > vmalloc = 480M (I think the vmalloc size determines your highmem?)
> > >
> > > No, vmalloc is part of the 1GB kernel address space.
> >
> > I read in one article that vmalloc end is fixed, so if you increase the
> > vmalloc size it decreases highmem?
> > Total = lowmem + (vmalloc + highmem)
>
> As the kernel uses the vmalloc area _directly_, it has to be part of the
> kernel address space - thus reducing the lowmem.
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
       [not found]           ` <CACDBo57Yjuc69GX+V7w_efSHPkpeU3D9RUr0TEd64oUTi4o8Ag@mail.gmail.com>
@ 2019-08-06 15:11             ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2019-08-06 15:11 UTC (permalink / raw)
  To: Pankaj Suryawanshi
  Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Tue 06-08-19 20:25:51, Pankaj Suryawanshi wrote:
[...]
> lowmem reserve? Is it min_free_kbytes or something else?

Nope. Lowmem reserve is a measure to protect from allocations targeting
higher zones (have a look at setup_per_zone_lowmem_reserve). The value
for each zone depends on the amount of memory managed by the zone
and a ratio which can be tuned from userspace. min_free_kbytes
controls the reclaim watermarks.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: oom-killer
  2019-08-06 15:09               ` oom-killer Pankaj Suryawanshi
@ 2019-08-06 15:12                 ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2019-08-06 15:12 UTC (permalink / raw)
  To: Pankaj Suryawanshi
  Cc: Vlastimil Babka, linux-kernel, linux-mm, pankaj.suryawanshi

On Tue 06-08-19 20:39:22, Pankaj Suryawanshi wrote:
> On Tue, Aug 6, 2019 at 8:37 PM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Tue 06-08-19 20:24:03, Pankaj Suryawanshi wrote:
> > > On Tue, 6 Aug, 2019, 1:46 AM Michal Hocko, <mhocko@kernel.org> wrote:
> > > >
> > > > On Mon 05-08-19 21:04:53, Pankaj Suryawanshi wrote:
> > > > > On Mon, Aug 5, 2019 at 5:35 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > > > >
> > > > > > On Mon 05-08-19 13:56:20, Vlastimil Babka wrote:
> > > > > > > On 8/5/19 1:24 PM, Michal Hocko wrote:
> > > > > > > >> [  727.954355] CPU: 0 PID: 56 Comm: kworker/u8:2 Tainted: P           O  4.14.65 #606
> > > > > > > > [...]
> > > > > > > >> [  728.029390] [<c034a094>] (oom_kill_process) from [<c034af24>] (out_of_memory+0x140/0x368)
> > > > > > > >> [  728.037569]  r10:00000001 r9:c12169bc r8:00000041 r7:c121e680 r6:c1216588 r5:dd347d7c
> > > > > > > >> [  728.045392]  r4:d5737080
> > > > > > > >> [  728.047929] [<c034ade4>] (out_of_memory) from [<c03519ac>]  (__alloc_pages_nodemask+0x1178/0x124c)
> > > > > > > >> [  728.056798]  r7:c141e7d0 r6:c12166a4 r5:00000000 r4:00001155
> > > > > > > >> [  728.062460] [<c0350834>] (__alloc_pages_nodemask) from [<c021e9d4>] (copy_process.part.5+0x114/0x1a28)
> > > > > > > >> [  728.071764]  r10:00000000 r9:dd358000 r8:00000000 r7:c1447e08 r6:c1216588 r5:00808111
> > > > > > > >> [  728.079587]  r4:d1063c00
> > > > > > > >> [  728.082119] [<c021e8c0>] (copy_process.part.5) from [<c0220470>] (_do_fork+0xd0/0x464)
> > > > > > > >> [  728.090034]  r10:00000000 r9:00000000 r8:dd008400 r7:00000000 r6:c1216588 r5:d2d58ac0
> > > > > > > >> [  728.097857]  r4:00808111
> > > > > > > >
> > > > > > > > The call trace tells us that this is a fork (of a usermodehelper, but
> > > > > > > > that is not all that important).
> > > > > > > > [...]
> > > > > > > >> [  728.260031] DMA free:17960kB min:16384kB low:25664kB high:29760kB active_anon:3556kB inactive_anon:0kB active_file:280kB inactive_file:28kB unevictable:0kB writepending:0kB present:458752kB managed:422896kB mlocked:0kB kernel_stack:6496kB pagetables:9904kB bounce:0kB free_pcp:348kB local_pcp:0kB free_cma:0kB
> > > > > > > >> [  728.287402] lowmem_reserve[]: 0 0 579 579
> > > > > > > >
> > > > > > > > So this is the only usable zone and you are close to the min watermark
> > > > > > > > which means that your system is under serious memory pressure but not
> > > > > > > > yet OOM for an order-0 request. The situation is not great though
> > > > > > >
> > > > > > > Looking at lowmem_reserve above, wonder if 579 applies here? What does
> > > > > > > /proc/zoneinfo say?
> > > > >
> > > > >
> > > > > What is lowmem_reserve[]: 0 0 579 579?
> > > >
> > > > This controls how much memory from a lower zone an allocation request
> > > > targeting a higher zone may consume. E.g. a __GFP_HIGHMEM request is
> > > > allowed to use both the lowmem and highmem zones. It is preferable to use
> > > > the highmem zone because other requests are not allowed to use it.
> > > >
> > > > Please see __zone_watermark_ok for more details.
> > > >
> > > >
> > > > > > This is GFP_KERNEL request essentially so there shouldn't be any lowmem
> > > > > > reserve here, no?
> > > > >
> > > > >
> > > > > Why only low 1G is accessible by kernel in 32-bit system ?
> > >
> > >
> > > 1G virtual or physical memory (I have 2GB of RAM)?
> >
> > virtual
> >
>  I have set 2G/2G - will it still work?

It would reduce the amount of memory that userspace might use. It may
work for your particular case but the fundamental restriction is still
there.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-08-06 15:12 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CACDBo54Jbueeq1XbtbrFOeOEyF-Q4ipZJab8mB7+0cyK1Foqyw@mail.gmail.com>
2019-08-05 11:24 ` oom-killer Michal Hocko
2019-08-05 11:56   ` oom-killer Vlastimil Babka
2019-08-05 12:05     ` oom-killer Michal Hocko
2019-08-05 15:34       ` oom-killer Pankaj Suryawanshi
2019-08-05 20:16         ` oom-killer Michal Hocko
2019-08-06 14:54           ` oom-killer Pankaj Suryawanshi
2019-08-06 15:07             ` oom-killer Michal Hocko
2019-08-06 15:09               ` oom-killer Pankaj Suryawanshi
2019-08-06 15:12                 ` oom-killer Michal Hocko
2019-08-06 10:04         ` oom-killer Vlastimil Babka
     [not found]           ` <CACDBo57Yjuc69GX+V7w_efSHPkpeU3D9RUr0TEd64oUTi4o8Ag@mail.gmail.com>
2019-08-06 15:11             ` oom-killer Michal Hocko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).