From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 9 Oct 2013 10:25:42 -0700
From: tip-bot for Mel Gorman
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@kernel.org,
	peterz@infradead.org, hannes@cmpxchg.org, riel@redhat.com,
	aarcange@redhat.com, srikar@linux.vnet.ibm.com, mgorman@suse.de,
	tglx@linutronix.de
Reply-To: mingo@kernel.org, hpa@zytor.com, linux-kernel@vger.kernel.org,
	peterz@infradead.org, hannes@cmpxchg.org, riel@redhat.com,
	aarcange@redhat.com, srikar@linux.vnet.ibm.com, mgorman@suse.de,
	tglx@linutronix.de
In-Reply-To: <1381141781-10992-12-git-send-email-mgorman@suse.de>
References: <1381141781-10992-12-git-send-email-mgorman@suse.de>
Subject: [tip:sched/core] mm: Only flush TLBs if a transhuge PMD is modified for NUMA pte scanning
Git-Commit-ID: f123d74abf91574837d14e5ea58f6a779a387bf5
X-Mailer: tip-git-log-daemon
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline

Commit-ID:  f123d74abf91574837d14e5ea58f6a779a387bf5
Gitweb:     http://git.kernel.org/tip/f123d74abf91574837d14e5ea58f6a779a387bf5
Author:     Mel Gorman <mgorman@suse.de>
AuthorDate: Mon, 7 Oct 2013 11:28:49 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 9 Oct 2013 12:39:49 +0200

mm: Only flush TLBs if a transhuge PMD is modified for NUMA pte scanning

NUMA PTE scanning is expensive both in terms of the scanning itself and
the TLB flush if there are any updates. The TLB flush is avoided if no
PTEs are updated, but there is a bug where transhuge PMDs are considered
to be updated even if they were already pmd_numa. This patch addresses
the problem and TLB flushes should be reduced.
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-12-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 mm/huge_memory.c | 19 ++++++++++++++++---
 mm/mprotect.c    | 14 ++++++++++----
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d4928769..de8d5cf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1458,6 +1458,12 @@ out:
 	return ret;
 }
 
+/*
+ * Returns
+ *  - 0 if PMD could not be locked
+ *  - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
+ *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
+ */
 int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
 		pgprot_t newprot, int prot_numa)
 {
@@ -1466,9 +1472,11 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
 		pmd_t entry;
-		entry = pmdp_get_and_clear(mm, addr, pmd);
+		ret = 1;
 		if (!prot_numa) {
+			entry = pmdp_get_and_clear(mm, addr, pmd);
 			entry = pmd_modify(entry, newprot);
+			ret = HPAGE_PMD_NR;
 			BUG_ON(pmd_write(entry));
 		} else {
 			struct page *page = pmd_page(*pmd);
 
@@ -1476,12 +1484,17 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			/* only check non-shared pages */
 			if (page_mapcount(page) == 1 &&
 			    !pmd_numa(*pmd)) {
+				entry = pmdp_get_and_clear(mm, addr, pmd);
 				entry = pmd_mknuma(entry);
+				ret = HPAGE_PMD_NR;
 			}
 		}
-		set_pmd_at(mm, addr, pmd, entry);
+
+		/* Set PMD if cleared earlier */
+		if (ret == HPAGE_PMD_NR)
+			set_pmd_at(mm, addr, pmd, entry);
+
 		spin_unlock(&vma->vm_mm->page_table_lock);
-		ret = 1;
 	}
 
 	return ret;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 7bdbd4b..2da33dc 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -144,10 +144,16 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		if (pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				split_huge_page_pmd(vma, addr, pmd);
-			else if (change_huge_pmd(vma, pmd, addr, newprot,
-						 prot_numa)) {
-				pages++;
-				continue;
+			else {
+				int nr_ptes = change_huge_pmd(vma, pmd, addr,
+						newprot, prot_numa);
+
+				if (nr_ptes) {
+					if (nr_ptes == HPAGE_PMD_NR)
+						pages++;
+
+					continue;
+				}
 			}
 			/* fall through */
 		}
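
To make the new return-value contract concrete, below is a minimal,
self-contained userspace sketch in plain C (not kernel code): after this
patch, change_huge_pmd() reports 0 when the PMD could not be locked, 1 when
it was inspected but left untouched, and HPAGE_PMD_NR only when the
protection bits were really rewritten, so the caller only counts pages (and
therefore only ends up flushing the TLB) in the last case. The struct
fake_pmd type, the fake_change_huge_pmd() helper and the HPAGE_PMD_NR value
of 512 are simplified stand-ins invented for illustration, not the real
kernel definitions.

/*
 * Minimal userspace model of the return values above.  Everything here
 * is a simplified stand-in: struct fake_pmd replaces the real pmd_t,
 * locking is omitted, and HPAGE_PMD_NR is assumed to be 512 (a 2M huge
 * page built from 4K base pages).
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_NR	512

struct fake_pmd {
	bool present;	/* stand-in for "the PMD could be locked"         */
	bool numa;	/* stand-in for the pmd_numa() hinting state      */
	int  mapcount;	/* stand-in for page_mapcount() of the huge page  */
};

/*
 * 0            -> "PMD could not be locked" (modelled as not present)
 * 1            -> locked but left untouched, no TLB flush needed
 * HPAGE_PMD_NR -> protections changed, a TLB flush would be needed
 */
static int fake_change_huge_pmd(struct fake_pmd *pmd, bool prot_numa)
{
	int ret;

	if (!pmd->present)
		return 0;

	ret = 1;
	if (!prot_numa) {
		/* plain mprotect() path: the entry is always rewritten */
		ret = HPAGE_PMD_NR;
	} else if (pmd->mapcount == 1 && !pmd->numa) {
		/* NUMA scan: only private, not-yet-marked entries change */
		pmd->numa = true;
		ret = HPAGE_PMD_NR;
	}

	return ret;
}

int main(void)
{
	struct fake_pmd already_numa = { .present = true, .numa = true,  .mapcount = 1 };
	struct fake_pmd fresh        = { .present = true, .numa = false, .mapcount = 1 };
	int pages = 0;
	int nr_ptes;

	/* Already pmd_numa: returns 1, pages stays 0, no flush is counted. */
	nr_ptes = fake_change_huge_pmd(&already_numa, true);
	if (nr_ptes == HPAGE_PMD_NR)
		pages++;
	printf("already pmd_numa: nr_ptes=%d pages=%d\n", nr_ptes, pages);

	/* First scan of a private entry: returns HPAGE_PMD_NR, pages goes up. */
	nr_ptes = fake_change_huge_pmd(&fresh, true);
	if (nr_ptes == HPAGE_PMD_NR)
		pages++;
	printf("freshly marked:   nr_ptes=%d pages=%d\n", nr_ptes, pages);

	return 0;
}

Compiled with gcc, the sketch prints nr_ptes=1 pages=0 for the entry that
was already pmd_numa and nr_ptes=512 pages=1 for the freshly marked one,
mirroring the point of the patch: the caller's pages counter, and hence the
TLB flush it gates, only moves when an entry was actually modified.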