From: "Kirill A. Shutemov"
To: Andrew Morton, Andrea Arcangeli, Hugh Dickins
Cc: Dave Hansen, Mel Gorman, Rik van Riel, Vlastimil Babka,
	Christoph Lameter, Naoya Horiguchi, Steve Capper,
	"Aneesh Kumar K.V", Johannes Weiner, Michal Hocko,
	Jerome Marchand, Sasha Levin, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Kirill A. Shutemov"
Subject: [PATCHv6 07/36] thp, mlock: do not allow huge pages in mlocked area
Date: Wed, 3 Jun 2015 20:05:38 +0300
Message-Id: <1433351167-125878-8-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1433351167-125878-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1433351167-125878-1-git-send-email-kirill.shutemov@linux.intel.com>

With the new refcounting, a THP can belong to several VMAs. This makes
it tricky to track THP pages when they are partially mlocked: it can
lead to leaking mlocked pages into non-VM_LOCKED vmas and other
problems.

With this patch we split all pages on mlock and avoid faulting in or
collapsing new THP in VM_LOCKED vmas.

I've tried an alternative approach: do not mark THP pages mlocked and
keep them on normal LRUs. This way vmscan could try to split huge
pages on memory pressure and free up subpages which don't belong to
VM_LOCKED vmas. But this is a user-visible change: it screws up the
Mlocked accounting reported in meminfo, so I had to leave this
approach aside.

We can bring something better later, but this should be good enough
for now.

Signed-off-by: Kirill A. Shutemov
Tested-by: Sasha Levin
Acked-by: Jerome Marchand
---
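Not part of the patch: a minimal userspace sketch that can be used to
observe the new behaviour. It assumes x86-64 (2MiB THPs), THP enabled
("always" or "madvise" in /sys/kernel/mm/transparent_hugepage/enabled)
and an RLIMIT_MEMLOCK large enough for the mapping. With the patch
applied, the mapping should report "AnonHugePages: 0 kB" in
/proc/$pid/smaps; without it, a non-zero value is typical.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define LEN	(16UL << 21)	/* 16 x 2MiB, PMD-aligned size */

int main(void)
{
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	madvise(p, LEN, MADV_HUGEPAGE);	/* explicitly ask for THP */
	if (mlock(p, LEN))		/* VM_LOCKED set before fault-in */
		perror("mlock");
	memset(p, 1, LEN);		/* make sure everything is faulted in */

	/* Pause so the mapping can be inspected in /proc/$pid/smaps. */
	printf("mapping at %p, pid %d\n", p, (int)getpid());
	pause();
	return 0;
}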
 mm/gup.c         |  2 ++
 mm/huge_memory.c |  5 ++++-
 mm/memory.c      |  3 ++-
 mm/mlock.c       | 51 +++++++++++++++++++--------------------------------
 4 files changed, 27 insertions(+), 34 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d975bdc2465f..2c2fa62f54f0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -882,6 +882,8 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
 
 	gup_flags = FOLL_TOUCH | FOLL_POPULATE;
+	if (vma->vm_flags & VM_LOCKED)
+		gup_flags |= FOLL_SPLIT;
 	/*
 	 * We want to touch writable mappings with a write fault in order
 	 * to break COW, except for shared mappings because these don't COW
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6035cc92be6b..4ad975506c1b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -815,6 +815,8 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
 		return VM_FAULT_FALLBACK;
+	if (vma->vm_flags & VM_LOCKED)
+		return VM_FAULT_FALLBACK;
 	if (unlikely(anon_vma_prepare(vma)))
 		return VM_FAULT_OOM;
 	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
@@ -2493,7 +2495,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
 	if ((!(vma->vm_flags & VM_HUGEPAGE) && !khugepaged_always()) ||
 	    (vma->vm_flags & VM_NOHUGEPAGE))
 		return false;
-
+	if (vma->vm_flags & VM_LOCKED)
+		return false;
 	if (!vma->anon_vma || vma->vm_ops)
 		return false;
 	if (is_vma_temporary_stack(vma))
diff --git a/mm/memory.c b/mm/memory.c
index f48dfa6d8859..8067c47ecf14 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2161,7 +2161,8 @@ static int wp_page_copy(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	pte_unmap_unlock(page_table, ptl);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
-	if (old_page) {
+	/* THP pages are never mlocked */
+	if (old_page && !PageTransCompound(old_page)) {
 		/*
 		 * Don't let another task, with possibly unlocked vma,
 		 * keep the mlocked page.
diff --git a/mm/mlock.c b/mm/mlock.c
index 25936680064f..1d47b98407de 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -443,39 +443,26 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 		page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
 				&page_mask);
 
-		if (page && !IS_ERR(page)) {
-			if (PageTransHuge(page)) {
-				lock_page(page);
-				/*
-				 * Any THP page found by follow_page_mask() may
-				 * have gotten split before reaching
-				 * munlock_vma_page(), so we need to recompute
-				 * the page_mask here.
-				 */
-				page_mask = munlock_vma_page(page);
-				unlock_page(page);
-				put_page(page); /* follow_page_mask() */
-			} else {
-				/*
-				 * Non-huge pages are handled in batches via
-				 * pagevec. The pin from follow_page_mask()
-				 * prevents them from collapsing by THP.
-				 */
-				pagevec_add(&pvec, page);
-				zone = page_zone(page);
-				zoneid = page_zone_id(page);
+		if (page && !IS_ERR(page) && !PageTransCompound(page)) {
+			/*
+			 * Non-huge pages are handled in batches via
+			 * pagevec. The pin from follow_page_mask()
+			 * prevents them from collapsing by THP.
+			 */
+			pagevec_add(&pvec, page);
+			zone = page_zone(page);
+			zoneid = page_zone_id(page);
 
-				/*
-				 * Try to fill the rest of pagevec using fast
-				 * pte walk. This will also update start to
-				 * the next page to process. Then munlock the
-				 * pagevec.
-				 */
-				start = __munlock_pagevec_fill(&pvec, vma,
-						zoneid, start, end);
-				__munlock_pagevec(&pvec, zone);
-				goto next;
-			}
+			/*
+			 * Try to fill the rest of pagevec using fast
+			 * pte walk. This will also update start to
+			 * the next page to process. Then munlock the
+			 * pagevec.
+			 */
+			start = __munlock_pagevec_fill(&pvec, vma,
+					zoneid, start, end);
+			__munlock_pagevec(&pvec, zone);
+			goto next;
 		}
 		/* It's a bug to munlock in the middle of a THP page */
 		VM_BUG_ON((start >> PAGE_SHIFT) & page_mask);
-- 
2.1.4
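Not part of the patch: a companion sketch for the Mlocked accounting
point in the commit message. With the split-on-mlock approach, the
Mlocked counter in /proc/meminfo should still grow by exactly the
locked size, since every subpage is mlocked individually after the
split. The read_meminfo_kb() helper is illustrative only, and the test
assumes RLIMIT_MEMLOCK permits locking 64MiB.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Read a field (e.g. "Mlocked:") from /proc/meminfo, in kB. */
static long read_meminfo_kb(const char *field)
{
	char line[128];
	long val = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, field, strlen(field))) {
			sscanf(line + strlen(field), "%ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	long before, after;
	size_t len = 64UL << 20;	/* 64MiB */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	before = read_meminfo_kb("Mlocked:");
	if (mlock(p, len))
		return 1;
	after = read_meminfo_kb("Mlocked:");
	/* Expect the delta to match the locked size: 65536 kB. */
	printf("Mlocked: %ld kB -> %ld kB (delta %ld kB)\n",
	       before, after, after - before);
	return 0;
}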