From: Naoya Horiguchi
To: Zi Yan
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kirill.shutemov@linux.intel.com, akpm@linux-foundation.org,
    minchan@kernel.org, vbabka@suse.cz, mgorman@techsingularity.net,
    khandual@linux.vnet.ibm.com, zi.yan@cs.rutgers.edu, Zi Yan
Subject: Re: [PATCH v3 03/14] mm: use pmd lock instead of racy checks in zap_pmd_range()
Date: Mon, 6 Feb 2017 07:43:38 +0000
Message-ID: <20170206074337.GB30339@hori1.linux.bs1.fc.nec.co.jp>
References: <20170205161252.85004-1-zi.yan@sent.com> <20170205161252.85004-4-zi.yan@sent.com>
In-Reply-To: <20170205161252.85004-4-zi.yan@sent.com>

On Sun, Feb 05, 2017 at 11:12:41AM -0500, Zi Yan wrote:
> From: Zi Yan
>
> Originally, zap_pmd_range() checks the pmd value without taking the pmd
> lock. This can cause a pmd_protnone entry to not be freed.
>
> Changing a pmd entry into a pmd_protnone entry takes two steps: first
> the pmd entry is cleared to a pmd_none entry, then the pmd_none entry
> is changed into a pmd_protnone entry. The racy check, even with a
> barrier, might only see the pmd_none entry in zap_pmd_range(), so the
> mapping is neither split nor zapped.
>
> Later, in free_pmd_range(), pmd_none_or_clear_bad() will see the
> pmd_protnone entry and clear it as a pmd_bad entry. Furthermore, since
> the pmd_protnone entry is not properly freed, the corresponding
> deposited pte page table is not freed either.
>
> This causes a memory leak, or a kernel crash if VM_BUG_ON() is enabled.
>
> This patch relies on __split_huge_pmd_locked() and
> __zap_huge_pmd_locked().
>
> Signed-off-by: Zi Yan
> ---
>  mm/memory.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 3929b015faf7..7cfdd5208ef5 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1233,33 +1233,31 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
>  				struct zap_details *details)
>  {
>  	pmd_t *pmd;
> +	spinlock_t *ptl;
>  	unsigned long next;
>
>  	pmd = pmd_offset(pud, addr);
> +	ptl = pmd_lock(vma->vm_mm, pmd);

If USE_SPLIT_PMD_PTLOCKS is true, pmd_lock() returns a different ptl for
each pmd. The following code runs over all pmds within [addr, end) while
holding only a single ptl (that of the first pmd), so I doubt this locking
really works. Maybe pmd_lock() should be called inside the while loop?
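Something like the below is what I have in mind -- only a rough, untested
sketch, reusing the __split_huge_pmd_locked()/__zap_huge_pmd_locked() calls
exactly as this patch uses them, but taking and releasing the ptl per pmd
inside the loop:

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		/* take the ptl of *this* pmd, so each iteration uses its own lock */
		ptl = pmd_lock(vma->vm_mm, pmd);
		if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
			if (next - addr != HPAGE_PMD_SIZE) {
				VM_BUG_ON_VMA(vma_is_anonymous(vma) &&
				    !rwsem_is_locked(&tlb->mm->mmap_sem), vma);
				__split_huge_pmd_locked(vma, pmd, addr, false);
			} else if (__zap_huge_pmd_locked(tlb, vma, pmd, addr)) {
				spin_unlock(ptl);
				continue;
			}
			/* fall through */
		}
		if (pmd_none_or_clear_bad(pmd)) {
			spin_unlock(ptl);
			continue;
		}
		/* drop the pmd lock before zapping the pte range / rescheduling */
		spin_unlock(ptl);
		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
		cond_resched();
	} while (pmd++, addr = next, addr != end);

With this, no ptl is held across iterations, so the unlock/relock dance
around zap_pte_range()/cond_resched() also goes away.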
Thanks,
Naoya Horiguchi

>  	do {
>  		next = pmd_addr_end(addr, end);
>  		if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
>  			if (next - addr != HPAGE_PMD_SIZE) {
>  				VM_BUG_ON_VMA(vma_is_anonymous(vma) &&
>  				    !rwsem_is_locked(&tlb->mm->mmap_sem), vma);
> -				__split_huge_pmd(vma, pmd, addr, false, NULL);
> -			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
> -				goto next;
> +				__split_huge_pmd_locked(vma, pmd, addr, false);
> +			} else if (__zap_huge_pmd_locked(tlb, vma, pmd, addr))
> +				continue;
>  			/* fall through */
>  		}
> -		/*
> -		 * Here there can be other concurrent MADV_DONTNEED or
> -		 * trans huge page faults running, and if the pmd is
> -		 * none or trans huge it can change under us. This is
> -		 * because MADV_DONTNEED holds the mmap_sem in read
> -		 * mode.
> -		 */
> -		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
> -			goto next;
> +
> +		if (pmd_none_or_clear_bad(pmd))
> +			continue;
> +		spin_unlock(ptl);
>  		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
> -next:
>  		cond_resched();
> +		spin_lock(ptl);
>  	} while (pmd++, addr = next, addr != end);
> +	spin_unlock(ptl);
>
>  	return addr;
>  }
> --
> 2.11.0
>