From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754826AbbCaUHs (ORCPT );
	Tue, 31 Mar 2015 16:07:48 -0400
Received: from youngberry.canonical.com ([91.189.89.112]:39448 "EHLO
	youngberry.canonical.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753510AbbCaTt4 (ORCPT );
	Tue, 31 Mar 2015 15:49:56 -0400
From: Kamal Mostafa <kamal@canonical.com>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
	kernel-team@lists.ubuntu.com
Cc: Naoya Horiguchi, Hugh Dickins, James Hogan, David Rientjes,
	Mel Gorman, Johannes Weiner, Michal Hocko, Rik van Riel,
	Andrea Arcangeli, Luiz Capitulino, Nishanth Aravamudan,
	Lee Schermerhorn, Steve Capper, Andrew Morton, Linus Torvalds,
	Luis Henriques, Kamal Mostafa
Subject: [PATCH 3.13.y-ckt 081/143] mm/hugetlb: pmd_huge() returns true for non-present hugepage
Date: Tue, 31 Mar 2015 12:47:26 -0700
Message-Id: <1427831308-1854-82-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1427831308-1854-1-git-send-email-kamal@canonical.com>
References: <1427831308-1854-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 3.13
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

3.13.11-ckt18 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Naoya Horiguchi

commit cbef8478bee55775ac312a574aad48af7bb9cf9f upstream.

Migrating hugepages and hwpoisoned hugepages are considered non-present
hugepages: they are referenced via migration entries and hwpoison entries
in their page table slots.

This behavior causes a race condition, because pmd_huge() cannot tell
non-huge pages apart from migrating/hwpoisoned hugepages.
follow_page_mask() is one example: the kernel would call follow_page_pte()
for such a hugepage, although that function is supposed to handle only
normal pages.

To avoid this, this patch makes pmd_huge() return true when pmd_none() is
false *and* pmd_present() is false.  We don't have to worry about mixing
up a non-present pmd entry with a normal pmd (one pointing to a leaf-level
pte page), because pmd_present() is true for a normal pmd.

The same race condition could happen in (x86-specific) gup_pmd_range(),
where this patch simply adds a pmd_present() check instead of using
pmd_huge(), because gup_pmd_range() is a fast path.  If we see a
non-present hugepage there, we go into gup_huge_pmd(), return 0 at the
flag mask check, and finally fall back to the slow path.
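To make the new predicate's truth table concrete, here is a small
userspace mock (illustration only, not kernel code: the two constants
mirror arch/x86/include/asm/pgtable_types.h, while pmd_t, pmd_val(),
pmd_none() and the migration-entry bit pattern are simplified stand-ins
invented for this sketch):

#include <assert.h>
#include <stdio.h>

#define _PAGE_PRESENT	0x001UL		/* bit 0, as on x86 */
#define _PAGE_PSE	0x080UL		/* bit 7, as on x86 */

typedef struct { unsigned long val; } pmd_t;

static unsigned long pmd_val(pmd_t pmd) { return pmd.val; }
static int pmd_none(pmd_t pmd)          { return pmd_val(pmd) == 0; }

/* Before the patch: only a present huge mapping (PSE set) counts. */
static int pmd_huge_old(pmd_t pmd)
{
	return !!(pmd_val(pmd) & _PAGE_PSE);
}

/* After the patch: any non-empty entry that is not a normal present
 * pmd (i.e. non-present, or present with PSE set) counts as huge. */
static int pmd_huge_new(pmd_t pmd)
{
	return !pmd_none(pmd) &&
		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
}

int main(void)
{
	pmd_t empty = { 0 };				/* pmd_none() case */
	pmd_t table = { 0x1000 | _PAGE_PRESENT };	/* normal pmd -> pte page */
	pmd_t huge  = { 0x200000 | _PAGE_PRESENT | _PAGE_PSE };
	pmd_t migration = { 0x60 };	/* made-up swap-type entry: not empty,
					 * but _PAGE_PRESENT and _PAGE_PSE clear */

	assert(!pmd_huge_old(empty) && !pmd_huge_new(empty));
	assert(!pmd_huge_old(table) && !pmd_huge_new(table));
	assert( pmd_huge_old(huge)  &&  pmd_huge_new(huge));
	/* The interesting case: the old test says "not huge", so callers
	 * like follow_page_mask() would wrongly treat the slot as a
	 * normal pte table; the new test reports it as huge. */
	assert(!pmd_huge_old(migration) && pmd_huge_new(migration));

	printf("non-present hugepage entry: old=%d new=%d\n",
	       pmd_huge_old(migration), pmd_huge_new(migration));
	return 0;
}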
Fixes: 290408d4a2 ("hugetlb: hugepage migration core")
Signed-off-by: Naoya Horiguchi
Cc: Hugh Dickins
Cc: James Hogan
Cc: David Rientjes
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Andrea Arcangeli
Cc: Luiz Capitulino
Cc: Nishanth Aravamudan
Cc: Lee Schermerhorn
Cc: Steve Capper
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Luis Henriques
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 arch/x86/mm/gup.c         | 2 +-
 arch/x86/mm/hugetlbpage.c | 8 +++++++-
 mm/hugetlb.c              | 2 ++
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 0596e8e..5bb7b36 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -172,7 +172,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		 */
 		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
 			return 0;
-		if (unlikely(pmd_large(pmd))) {
+		if (unlikely(pmd_large(pmd) || !pmd_present(pmd))) {
 			/*
 			 * NUMA hinting faults need to be handled in the GUP
 			 * slowpath for accounting purposes and so that they
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index fa029fb..e473dbe 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -66,9 +66,15 @@ follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
 	return ERR_PTR(-EINVAL);
 }
 
+/*
+ * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
+ * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
+ * Otherwise, returns 0.
+ */
 int pmd_huge(pmd_t pmd)
 {
-	return !!(pmd_val(pmd) & _PAGE_PSE);
+	return !pmd_none(pmd) &&
+		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 int pud_huge(pud_t pud)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d821a7e..2a9e991 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3441,6 +3441,8 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
+	if (!pmd_present(*pmd))
+		return NULL;
 	page = pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
-- 
1.9.1