From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1955350AbdDZId7 (ORCPT );
	Wed, 26 Apr 2017 04:33:59 -0400
Received: from terminus.zytor.com ([65.50.211.136]:58169 "EHLO
	terminus.zytor.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1434645AbdDZIdy (ORCPT );
	Wed, 26 Apr 2017 04:33:54 -0400
Date: Wed, 26 Apr 2017 01:27:32 -0700
From: tip-bot for Andy Lutomirski
Message-ID: 
Cc: riel@redhat.com, jpoimboe@redhat.com, torvalds@linux-foundation.org,
	bp@alien8.de, luto@kernel.org, mingo@kernel.org, dave.hansen@intel.com,
	brgerst@gmail.com, peterz@infradead.org, namit@vmware.com,
	linux-kernel@vger.kernel.org, dvlasenk@redhat.com,
	akpm@linux-foundation.org, mhocko@suse.com, hpa@zytor.com,
	tglx@linutronix.de
Reply-To: riel@redhat.com, jpoimboe@redhat.com,
	torvalds@linux-foundation.org, mingo@kernel.org, luto@kernel.org,
	bp@alien8.de, peterz@infradead.org, namit@vmware.com,
	dave.hansen@intel.com, brgerst@gmail.com, dvlasenk@redhat.com,
	linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	mhocko@suse.com, tglx@linutronix.de, hpa@zytor.com
In-Reply-To: <4b29b771d9975aad7154c314534fec235618175a.1492844372.git.luto@kernel.org>
References: <4b29b771d9975aad7154c314534fec235618175a.1492844372.git.luto@kernel.org>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:x86/mm] x86/mm: Make flush_tlb_mm_range() more predictable
Git-Commit-ID: ce27374fabf553153c3f53efcaa9bfab9216bd8c
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact  to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  ce27374fabf553153c3f53efcaa9bfab9216bd8c
Gitweb:     http://git.kernel.org/tip/ce27374fabf553153c3f53efcaa9bfab9216bd8c
Author:     Andy Lutomirski
AuthorDate: Sat, 22 Apr 2017 00:01:21 -0700
Committer:  Ingo Molnar
CommitDate: Wed, 26 Apr 2017 10:02:06 +0200

x86/mm: Make flush_tlb_mm_range() more predictable

I'm about to rewrite the function almost completely, but first I
want to get a functional change out of the way.  Currently, if
flush_tlb_mm_range() does not flush the local TLB at all, it will
never do individual page flushes on remote CPUs.  This seems to be
an accident, and preserving it will be awkward.  Let's change it
first so that any regressions in the rewrite will be easier to
bisect and so that the rewrite can attempt to change no visible
behavior at all.

The fix is simple: we can simply avoid short-circuiting the
calculation of base_pages_to_flush.

As a side effect, this also eliminates a potential corner case: if
tlb_single_page_flush_ceiling == TLB_FLUSH_ALL, flush_tlb_mm_range()
could have ended up flushing the entire address space one page at a
time.

Signed-off-by: Andy Lutomirski
Acked-by: Dave Hansen
Cc: Andrew Morton
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Michal Hocko
Cc: Nadav Amit
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/4b29b771d9975aad7154c314534fec235618175a.1492844372.git.luto@kernel.org
Signed-off-by: Ingo Molnar
---
 arch/x86/mm/tlb.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 92ec37f..9db9260 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -309,6 +309,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
 
 	preempt_disable();
+
+	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
+	if (base_pages_to_flush > tlb_single_page_flush_ceiling)
+		base_pages_to_flush = TLB_FLUSH_ALL;
+
 	if (current->active_mm != mm) {
 		/* Synchronize with switch_mm. */
 		smp_mb();
@@ -325,15 +331,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		goto out;
 	}
 
-	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
-		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
-
 	/*
 	 * Both branches below are implicit full barriers (MOV to CR or
 	 * INVLPG) that synchronize with switch_mm.
 	 */
-	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
-		base_pages_to_flush = TLB_FLUSH_ALL;
+	if (base_pages_to_flush == TLB_FLUSH_ALL) {
		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
		local_flush_tlb();
	} else {