From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753970Ab3I0N2u (ORCPT );
	Fri, 27 Sep 2013 09:28:50 -0400
Received: from cantor2.suse.de ([195.135.220.15]:55786 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753645Ab3I0N20 (ORCPT );
	Fri, 27 Sep 2013 09:28:26 -0400
From: Mel Gorman
To: Peter Zijlstra , Rik van Riel
Cc: Srikar Dronamraju , Ingo Molnar , Andrea Arcangeli ,
	Johannes Weiner , Linux-MM , LKML , Mel Gorman
Subject: [PATCH 36/63] mm: numa: Only trap pmd hinting faults if we would otherwise trap PTE faults
Date: Fri, 27 Sep 2013 14:27:21 +0100
Message-Id: <1380288468-5551-37-git-send-email-mgorman@suse.de>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1380288468-5551-1-git-send-email-mgorman@suse.de>
References: <1380288468-5551-1-git-send-email-mgorman@suse.de>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Base page PMD faulting is meant to batch handle NUMA hinting faults from
PTEs. However, even if no PTE faults would ever be handled within a
range, the kernel still traps PMD hinting faults. This patch avoids the
overhead.

Signed-off-by: Mel Gorman
---
 mm/mprotect.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index f0b087d..5aae390 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -146,6 +146,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 
 	pmd = pmd_offset(pud, addr);
 	do {
+		unsigned long this_pages;
+
 		next = pmd_addr_end(addr, end);
 		if (pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
@@ -165,8 +167,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		}
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
-		pages += change_pte_range(vma, pmd, addr, next, newprot,
+		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
 				 dirty_accountable, prot_numa, &all_same_nidpid);
+		pages += this_pages;
 
 		/*
 		 * If we are changing protections for NUMA hinting faults then
@@ -174,7 +177,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		 * node. This allows a regular PMD to be handled as one fault
 		 * and effectively batches the taking of the PTL
 		 */
-		if (prot_numa && all_same_nidpid)
+		if (prot_numa && this_pages && all_same_nidpid)
			change_pmd_protnuma(vma->vm_mm, addr, pmd);
 	} while (pmd++, addr = next, addr != end);
 
--
1.8.1.4
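
To see the effect of the new this_pages operand in isolation, the following
is a minimal userspace C sketch of the gating the patch introduces. The
function names mirror mm/mprotect.c, but the bodies are hypothetical stubs,
not kernel code: change_pte_range() is stubbed to report how many PTEs it
updated in a range, and the PMD-level batching step runs only when that
count is non-zero.

/*
 * Minimal userspace model of the this_pages gating. Names mirror
 * mm/mprotect.c; bodies are illustrative stubs, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PMD_RANGES 4	/* pretend the VMA spans four PMD-sized ranges */

/* Stub: only ranges 0 and 2 contain PTEs that need a protection update. */
static unsigned long change_pte_range(int range, bool *all_same_nidpid)
{
	*all_same_nidpid = true;	/* assume homogeneous placement */
	return (range % 2 == 0) ? 512 : 0;
}

/* Stub for the PMD-level batching step. */
static void change_pmd_protnuma(int range)
{
	printf("range %d: PMD marked for batched hinting faults\n", range);
}

int main(void)
{
	bool prot_numa = true;
	unsigned long pages = 0;

	for (int range = 0; range < NR_PMD_RANGES; range++) {
		bool all_same_nidpid;
		unsigned long this_pages;

		this_pages = change_pte_range(range, &all_same_nidpid);
		pages += this_pages;

		/*
		 * The patch's change: only trap at the PMD level when at
		 * least one PTE in the range was actually updated.
		 */
		if (prot_numa && this_pages && all_same_nidpid)
			change_pmd_protnuma(range);
		else
			printf("range %d: no PTE updates, PMD trap skipped\n",
			       range);
	}
	printf("total PTEs updated: %lu\n", pages);
	return 0;
}

With the old test (prot_numa && all_same_nidpid), ranges 1 and 3 in this
model would still have their PMDs marked even though no PTE fault could
ever be handled there; the extra this_pages operand skips that wasted trap.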