From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757286AbcJGPpm (ORCPT ); Fri, 7 Oct 2016 11:45:42 -0400
Received: from LGEAMRELO12.lge.com ([156.147.23.52]:33684 "EHLO lgeamrelo12.lge.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756822AbcJGPT3 (ORCPT ); Fri, 7 Oct 2016 11:19:29 -0400
X-Original-SENDERIP: 156.147.1.151
X-Original-MAILFROM: minchan@kernel.org
X-Original-SENDERIP: 10.177.223.161
X-Original-MAILFROM: minchan@kernel.org
Date: Sat, 8 Oct 2016 00:04:25 +0900
From: Minchan Kim
To: Michal Hocko
Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Sangseok Lee
Subject: Re: [PATCH 0/4] use up highorder free pages before OOM
Message-ID: <20161007150425.GD3060@bbox>
References: <1475819136-24358-1-git-send-email-minchan@kernel.org>
	<20161007091625.GB18447@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20161007091625.GB18447@dhcp22.suse.cz>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 07, 2016 at 11:16:26AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> > I got an OOM report from the production team on a v4.4 kernel.
> > It had enough free memory but failed to allocate an order-0 page and
> > finally hit an OOM kill.
> > I could reproduce it easily with my test. See below.
> > The reason is that the free pages (19M) of the DMA32 zone are reserved
> > for HIGHORDERATOMIC and are not unreserved before the OOM.
>
> Is this really reproducible?

I can reproduce it within 1 hour.

> [...]
> > active_anon:383949 inactive_anon:106724 isolated_anon:0
> > active_file:15 inactive_file:44 isolated_file:0
> > unevictable:0 dirty:0 writeback:24 unstable:0
> > slab_reclaimable:2483 slab_unreclaimable:3326
> > mapped:0 shmem:0 pagetables:1906 bounce:0
> > free:6898 free_pcp:291 free_cma:0
> [...]
> > Free swap = 8kB
> > Total swap = 255996kB
> > 524158 pages RAM
> > 0 pages HighMem/MovableOnly
> > 12658 pages reserved
> > 0 pages cma reserved
> > 0 pages hwpoisoned
>
> From the above you can see that you are pretty much out of memory. There
> is basically no pagecache to reclaim and your anon memory is not
> reclaimable either because the swap is basically full. It is true that
> the high atomic reserves consume 19MB which could be reused, but this is
> less than 1%, especially when you compare that to the amount of reserved
> memory.

I can show another log in which the reserve is greater than 1%. It was a
GFP_ATOMIC allocation, so it is a different case from the one I posted,
but the important point is that the VM can reserve more than 1% of memory
because of the race, which is not what we really want.
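To show how such a race can push the reserve past its cap, here is a toy
userspace model I wrote for this mail (the kernel's actual check sits in
reserve_highatomic_pageblock() in mm/page_alloc.c, which tests the limit
before taking the zone lock; the numbers below are invented): every
thread passes the 1% limit test before any of them updates the counter,
so the reserve overshoots.

/* Toy userspace model of the racy cap check -- not the kernel code.
 * Build: cc -pthread race.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MANAGED_PAGES   524288  /* a ~2GB zone of 4kB pages */
#define PAGEBLOCK_PAGES 512     /* one 2MB pageblock */
#define NR_CPUS         12

static long nr_reserved_highatomic;     /* deliberately unprotected */

static void *reserve(void *arg)
{
    long max_managed = MANAGED_PAGES / 100;     /* the intended 1% cap */

    (void)arg;
    if (nr_reserved_highatomic >= max_managed)
        return NULL;    /* cap reached, back off */

    usleep(1000);       /* widen the check-then-act window for the demo */
    nr_reserved_highatomic += PAGEBLOCK_PAGES;
    return NULL;
}

int main(void)
{
    pthread_t t[NR_CPUS];
    int i;

    nr_reserved_highatomic = 5120;      /* just under the 5242-page cap */
    for (i = 0; i < NR_CPUS; i++)
        pthread_create(&t[i], NULL, reserve, NULL);
    for (i = 0; i < NR_CPUS; i++)
        pthread_join(t[i], NULL);

    printf("reserved %ld pages, cap was %d pages\n",
           nr_reserved_highatomic, MANAGED_PAGES / 100);
    return 0;
}

Every thread that slips through the window adds another pageblock, so the
final reserve here ends up more than double the cap.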
See the DMA32 zone in the report below: all of its 30128kB of free memory
sits in (H), i.e. highatomic, pageblocks, which is above 1% of the
2030132kB the zone manages.

in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
CPU: 0 PID: 476 Comm: in:imklog Tainted: G            E   4.8.0-rc7-00217-g266ef83c51e5-dirty #3135
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff880077c37590 ffffffff81389033 0000000000000000
 0000000000000000 ffff880077c37618 ffffffff8117519b 0228002000000000
 ffffffffffffffff ffffffff81cedb40 0000000000000000 0000000000000040
Call Trace:
 [] dump_stack+0x63/0x90
 [] warn_alloc_failed+0xdb/0x130
 [] __alloc_pages_nodemask+0x4d6/0xdb0
 [] ? bdev_write_page+0xa9/0xd0
 [] ? __page_check_address+0xd3/0x130
 [] ? deactivate_slab+0x12a/0x3e0
 [] new_slab+0x339/0x490
 [] ___slab_alloc.constprop.74+0x367/0x480
 [] ? alloc_indirect.isra.14+0x1d/0x50
 [] ? default_wake_function+0x12/0x20
 [] __slab_alloc.constprop.73+0x20/0x40
 [] __kmalloc+0x1a4/0x1e0
 [] alloc_indirect.isra.14+0x1d/0x50
 [] virtqueue_add_sgs+0x1c4/0x470
 [] ? __bt_get.isra.8+0xe5/0x1c0
 [] __virtblk_add_req+0xae/0x1f0
 [] ? wake_atomic_t_function+0x60/0x60
 [] ? sched_clock+0x9/0x10
 [] ? __blk_mq_alloc_request+0x10b/0x230
 [] ? blk_rq_map_sg+0x213/0x550
 [] virtio_queue_rq+0x12d/0x290
 [] __blk_mq_run_hw_queue+0x239/0x370
 [] blk_mq_run_hw_queue+0x8f/0xb0
 [] blk_mq_insert_requests+0x18c/0x1a0
 [] blk_mq_flush_plug_list+0x125/0x140
 [] blk_flush_plug_list+0xc7/0x220
 [] blk_finish_plug+0x2c/0x40
 [] __do_page_cache_readahead+0x196/0x230
 [] ? zram_free_page+0x3a/0xb0 [zram]
 [] filemap_fault+0x448/0x4f0
 [] ? alloc_set_pte+0xe4/0x350
 [] ext4_filemap_fault+0x36/0x50
 [] __do_fault+0x75/0x140
 [] handle_mm_fault+0x84d/0xbe0
 [] ? kmsg_read+0x44/0x60
 [] __do_page_fault+0x1dd/0x4d0
 [] trace_do_page_fault+0x43/0x130
 [] do_async_page_fault+0x1a/0xa0
 [] async_page_fault+0x28/0x30
Mem-Info:
active_anon:363826 inactive_anon:121283 isolated_anon:32
 active_file:65 inactive_file:152 isolated_file:0
 unevictable:0 dirty:0 writeback:46 unstable:0
 slab_reclaimable:2778 slab_unreclaimable:3070
 mapped:112 shmem:0 pagetables:1822 bounce:0
 free:9469 free_pcp:231 free_cma:0
Node 0 active_anon:1455304kB inactive_anon:485132kB active_file:260kB inactive_file:608kB unevictable:0kB isolated(anon):128kB isolated(file):0kB mapped:448kB dirty:0kB writeback:184kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:13641 all_unreclaimable? no
DMA free:7748kB min:44kB low:56kB high:68kB active_anon:7944kB inactive_anon:104kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:108kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:30128kB min:5628kB low:7624kB high:9620kB active_anon:1447360kB inactive_anon:485028kB active_file:260kB inactive_file:608kB unevictable:0kB writepending:184kB present:2080640kB managed:2030132kB mlocked:0kB slab_reclaimable:11112kB slab_unreclaimable:12172kB kernel_stack:2400kB pagetables:7284kB bounce:0kB free_pcp:924kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
2775 total pagecache pages
2536 pages in swap cache
Swap cache stats: add 206786828, delete 206784292, find 7323106/106686077
Free swap = 108744kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12648 pages reserved
0 pages cma reserved
0 pages hwpoisoned

>
> So while I do agree that potential issues - the misaccounting and others
> you are addressing in the follow-up patches - are good to fix, I believe
> that draining the last 19M is not something that would reliably get you
> over the edge. Your workload (93% of memory sitting on the anon LRU with
> swap full) simply doesn't fit into the amount of memory you have
> available.

What happens if the workload does fit into the additional 19M of memory?

I admit my testing was aimed at proving the problem, but with this
patchset there is no OOM kill while so many pages are free, and the
number of OOMs is greatly reduced. It is definitely better than before.
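To make the point concrete, here is a toy model of what the series is
after (userspace C; the figures come from the report above except the
reserved amount, which is an assumption -- this is an illustration, not
the kernel's allocator): an order-0 allocation fails although the zone
has plenty of free pages because they all sit in the highatomic reserve,
and draining the reserve before declaring OOM lets it succeed.

/* Toy model -- not kernel code. Build: cc drain.c */
#include <stdbool.h>
#include <stdio.h>

struct zone {
    long free_pages;                /* all free pages in the zone */
    long nr_reserved_highatomic;    /* held back for high-order atomics */
    long watermark_min;
};

static bool alloc_order0(struct zone *z)
{
    /* Order-0 requests must leave the reserve untouched. */
    long usable = z->free_pages - z->nr_reserved_highatomic;

    if (usable <= z->watermark_min)
        return false;   /* in the kernel: head towards OOM */
    z->free_pages--;
    return true;
}

static void unreserve_highatomic(struct zone *z)
{
    z->nr_reserved_highatomic = 0;  /* give the pages back */
}

int main(void)
{
    /* DMA32 figures from the report: 30128kB free, min:5628kB; the
     * reserved amount (28672kB) is an assumed figure for the demo. */
    struct zone dma32 = {
        .free_pages = 30128 / 4,
        .nr_reserved_highatomic = 28672 / 4,
        .watermark_min = 5628 / 4,
    };

    if (!alloc_order0(&dma32)) {
        printf("order-0 fails with %ldkB free -> OOM path\n",
               dma32.free_pages * 4);
        unreserve_highatomic(&dma32);   /* drain before killing */
    }
    printf("after draining: order-0 %s\n",
           alloc_order0(&dma32) ? "succeeds" : "still fails");
    return 0;
}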
Please don't ignore 1% of memory on an embedded system. That is 20M on a
2G system, and if we can use it for zram, it becomes 60~80M of effective
memory through compression (i.e. at the typical 3~4:1 ratio). You should
know how many engineers struggle to shave 1M off their drivers to cut
the cost of a product, seriously.

> --
> Michal Hocko
> SUSE Labs