From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752551AbcAGIOK (ORCPT );
	Thu, 7 Jan 2016 03:14:10 -0500
Received: from mail-wm0-f47.google.com ([74.125.82.47]:33754 "EHLO
	mail-wm0-f47.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752311AbcAGIOG (ORCPT );
	Thu, 7 Jan 2016 03:14:06 -0500
Date: Thu, 7 Jan 2016 09:14:02 +0100
From: Michal Hocko
To: Andrew Morton
Cc: Mel Gorman, Tetsuo Handa, David Rientjes, Linus Torvalds,
	Oleg Nesterov, Hugh Dickins, Andrea Arcangeli, Rik van Riel,
	linux-mm@kvack.org, LKML
Subject: Re: [PATCH 2/2] oom reaper: handle anonymous mlocked pages
Message-ID: <20160107081402.GA27868@dhcp22.suse.cz>
References: <1452094975-551-1-git-send-email-mhocko@kernel.org>
	<1452094975-551-3-git-send-email-mhocko@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1452094975-551-3-git-send-email-mhocko@kernel.org>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed 06-01-16 16:42:55, Michal Hocko wrote:
> Anonymous mappings
> are not visible by any other process so doing a munlock before unmap
> is safe to do from the semantic point of view.

I was too conservative here. I had completely forgotten about the lazy
mlock handling during try_to_unmap, which keeps the page mlocked as long
as an mlocked vma still maps it. So we can safely do what I was
proposing originally. I hope I am not missing anything now. Here is the
replacement patch:
---
>From 9aa92fc1c7f0f1c55d2efab0239dbb10a9dce001 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Wed, 6 Jan 2016 10:48:39 +0100
Subject: [PATCH] oom reaper: handle mlocked pages

__oom_reap_vmas currently skips over all mlocked vmas because they need
special treatment before they are unmapped. This is done purely for
simplicity, and there is no good reason to skip them and reduce the
amount of reclaimed memory. Reaping them is safe from the semantic point
of view because try_to_unmap_one during the rmap walk will tell reclaim
to cull the page back to the unevictable list and mlock it again if
another mlocked vma still maps it.

munlock_vma_pages_all is also safe to call from the oom reaper context
because it does not rely on any locks other than mmap_sem, which the
reaper already holds for read.

Signed-off-by: Michal Hocko
---
 mm/oom_kill.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1ece40b94725..0e4af31db96f 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -445,13 +445,6 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
 			continue;
 
 		/*
-		 * mlocked VMAs require explicit munlocking before unmap.
-		 * Let's keep it simple here and skip such VMAs.
-		 */
-		if (vma->vm_flags & VM_LOCKED)
-			continue;
-
-		/*
 		 * Only anonymous pages have a good chance to be dropped
 		 * without additional steps which we cannot afford as we
 		 * are OOM already.
@@ -461,9 +454,12 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
 		 * we do not want to block exit_mmap by keeping mm ref
 		 * count elevated without a good reason.
 		 */
-		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+			if (vma->vm_flags & VM_LOCKED)
+				munlock_vma_pages_all(vma);
 			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
 					 &details);
+		}
 	}
 	tlb_finish_mmu(&tlb, 0, -1);
 	up_read(&mm->mmap_sem);
-- 
2.6.4

-- 
Michal Hocko
SUSE Labs
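
For reference, the lazy mlock handling the changelog relies on lives in
try_to_unmap_one() in mm/rmap.c. The following is only a simplified
sketch of that check (paraphrased from the ~v4.4 behaviour, not the
exact kernel code, and the helper name is made up for illustration):

/*
 * Simplified sketch of the VM_LOCKED handling in try_to_unmap_one():
 * when the rmap walk finds the page still mapped by an mlocked vma,
 * it refuses to unmap it and instead marks the page mlocked again,
 * which makes reclaim cull the page back to the unevictable list.
 */
static int try_to_unmap_one_sketch(struct page *page,
				   struct vm_area_struct *vma,
				   enum ttu_flags flags)
{
	if (!(flags & TTU_IGNORE_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
		mlock_vma_page(page);	/* move the page back under mlock */
		return SWAP_MLOCK;	/* reclaim gives up and culls the page */
	}

	/* ... otherwise clear the pte and unmap the page as usual ... */
	return SWAP_AGAIN;
}

Because the rmap walk re-mlocks the page whenever some other VM_LOCKED
vma still maps it, reaping the victim's mapping cannot break mlock
semantics for anybody else, which is what makes the munlock in the
patch above safe.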