linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] x86/mm: Straightforward TLB flush fixes/cleanups
@ 2017-04-21 18:15 Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task() Andy Lutomirski
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Andy Lutomirski @ 2017-04-21 18:15 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, Borislav Petkov, Andy Lutomirski

I have some less straightforward cleanups coming, but these three
are easy and might even be okay for 4.12 assuming that someone feels
like reviewing them.

Changes from v1:
 - Fixed the Cc list on patch 3.  No code changes at all.

Andy Lutomirski (3):
  x86/mm: Remove flush_tlb() and flush_tlb_current_task()
  x86/mm: Make flush_tlb_mm_range() more predictable
  x86/mm: Fix flush_tlb_page() on Xen

 arch/x86/include/asm/tlbflush.h |  3 ---
 arch/x86/mm/tlb.c               | 33 ++++++++-------------------------
 2 files changed, 8 insertions(+), 28 deletions(-)

-- 
2.9.3

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task()
  2017-04-21 18:15 [PATCH v2 0/3] x86/mm: Straightforward TLB flush fixes/cleanups Andy Lutomirski
@ 2017-04-21 18:15 ` Andy Lutomirski
  2017-04-21 21:59   ` Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen Andy Lutomirski
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-04-21 18:15 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Andy Lutomirski, Rik van Riel,
	Dave Hansen, Nadav Amit, Michal Hocko, Sasha Levin,
	Andrew Morton

I was trying to figure out how flush_tlb_current_task() would
possibly work correctly if current->mm != current->active_mm, but I
realized I could spare myself the effort: it has no callers except
the unused flush_tlb() macro.  In fact, it doesn't even exist on
!SMP builds.

Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/tlbflush.h |  3 ---
 arch/x86/mm/tlb.c               | 17 -----------------
 2 files changed, 20 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index fc5abff9b7fd..cc7ea9a7d0c1 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -303,14 +303,11 @@ static inline void flush_tlb_kernel_range(unsigned long start,
 		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
 
 extern void flush_tlb_all(void);
-extern void flush_tlb_current_task(void);
 extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned long vmflag);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
-#define flush_tlb()	flush_tlb_current_task()
-
 void native_flush_tlb_others(const struct cpumask *cpumask,
 				struct mm_struct *mm,
 				unsigned long start, unsigned long end);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index a7655f6caf7d..92ec37f517ab 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -289,23 +289,6 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	smp_call_function_many(cpumask, flush_tlb_func, &info, 1);
 }
 
-void flush_tlb_current_task(void)
-{
-	struct mm_struct *mm = current->mm;
-
-	preempt_disable();
-
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-
-	/* This is an implicit full barrier that synchronizes with switch_mm. */
-	local_flush_tlb();
-
-	trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
-	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
-	preempt_enable();
-}
-
 /*
  * See Documentation/x86/tlb.txt for details.  We choose 33
  * because it is large enough to cover the vast majority (at
-- 
2.9.3


* [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable
  2017-04-21 18:15 [PATCH v2 0/3] x86/mm: Straightforward TLB flush fixes/cleanups Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task() Andy Lutomirski
@ 2017-04-21 18:15 ` Andy Lutomirski
  2017-04-21 18:54   ` Dave Hansen
  2017-04-21 18:15 ` [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen Andy Lutomirski
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-04-21 18:15 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Andy Lutomirski, Rik van Riel,
	Dave Hansen, Nadav Amit, Michal Hocko, Sasha Levin,
	Andrew Morton

I'm about to rewrite the function almost completely, but first I
want to get a functional change out of the way.  Currently, if
flush_tlb_mm_range() does not flush the local TLB at all, it will
never do individual page flushes on remote CPUs.  This seems to be
an accident, and preserving it will be awkward.  Let's change it
first so that any regressions in the rewrite will be easier to
bisect and so that the rewrite can attempt to change no visible
behavior at all.

The fix is simple: we can simply avoid short-circuiting the
calculation of base_pages_to_flush.

As a side effect, this also eliminates a potential corner case: if
tlb_single_page_flush_ceiling == TLB_FLUSH_ALL, flush_tlb_mm_range()
could have ended up flushing the entire address space one page at a
time.

Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/mm/tlb.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 92ec37f517ab..9db9260a5e9f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -309,6 +309,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
 
 	preempt_disable();
+
+	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
+	if (base_pages_to_flush > tlb_single_page_flush_ceiling)
+		base_pages_to_flush = TLB_FLUSH_ALL;
+
 	if (current->active_mm != mm) {
 		/* Synchronize with switch_mm. */
 		smp_mb();
@@ -325,15 +331,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		goto out;
 	}
 
-	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
-		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
-
 	/*
 	 * Both branches below are implicit full barriers (MOV to CR or
 	 * INVLPG) that synchronize with switch_mm.
 	 */
-	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
-		base_pages_to_flush = TLB_FLUSH_ALL;
+	if (base_pages_to_flush == TLB_FLUSH_ALL) {
 		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 		local_flush_tlb();
 	} else {
-- 
2.9.3


* [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen
  2017-04-21 18:15 [PATCH v2 0/3] x86/mm: Straightforward TLB flush fixes/cleanups Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task() Andy Lutomirski
  2017-04-21 18:15 ` [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable Andy Lutomirski
@ 2017-04-21 18:15 ` Andy Lutomirski
  2017-04-21 18:46   ` Boris Ostrovsky
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-04-21 18:15 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Andy Lutomirski, Rik van Riel,
	Dave Hansen, Nadav Amit, Michal Hocko, Sasha Levin,
	Andrew Morton, Boris Ostrovsky, Juergen Gross,
	Konrad Rzeszutek Wilk

flush_tlb_page() passes a bogus range to flush_tlb_others() and
expects the latter to fix it up.  native_flush_tlb_others() has the
fixup but Xen's version doesn't.  Move the fixup to
flush_tlb_others().

AFAICS the only real effect is that, without this fix, Xen would
flush everything instead of just the one page on remote vCPUs
when flush_tlb_page() was called.

Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Fixes: e7b52ffd45a6 ("x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/mm/tlb.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9db9260a5e9f..6e7bedf69af7 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -263,8 +263,6 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 {
 	struct flush_tlb_info info;
 
-	if (end == 0)
-		end = start + PAGE_SIZE;
 	info.flush_mm = mm;
 	info.flush_start = start;
 	info.flush_end = end;
@@ -378,7 +376,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
 	}
 
 	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
-		flush_tlb_others(mm_cpumask(mm), mm, start, 0UL);
+		flush_tlb_others(mm_cpumask(mm), mm, start, start + PAGE_SIZE);
 
 	preempt_enable();
 }
-- 
2.9.3


* Re: [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen
  2017-04-21 18:15 ` [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen Andy Lutomirski
@ 2017-04-21 18:46   ` Boris Ostrovsky
  0 siblings, 0 replies; 7+ messages in thread
From: Boris Ostrovsky @ 2017-04-21 18:46 UTC (permalink / raw)
  To: Andy Lutomirski, x86
  Cc: linux-kernel, Borislav Petkov, Rik van Riel, Dave Hansen,
	Nadav Amit, Michal Hocko, Sasha Levin, Andrew Morton,
	Juergen Gross, Konrad Rzeszutek Wilk

On 04/21/2017 02:15 PM, Andy Lutomirski wrote:
> flush_tlb_page() passes a bogus range to flush_tlb_others() and
> expects the latter to fix it up.  native_flush_tlb_others() has the
> fixup but Xen's version doesn't.  Move the fixup to
> flush_tlb_others().
>
> AFAICS the only real effect is that, without this fix, Xen would
> flush everything instead of just the one page on remote vCPUs
> when flush_tlb_page() was called.
>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Nadav Amit <namit@vmware.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Sasha Levin <sasha.levin@oracle.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Fixes: e7b52ffd45a6 ("x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range")
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/mm/tlb.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)

Thanks, Andy.

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable
  2017-04-21 18:15 ` [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable Andy Lutomirski
@ 2017-04-21 18:54   ` Dave Hansen
  0 siblings, 0 replies; 7+ messages in thread
From: Dave Hansen @ 2017-04-21 18:54 UTC (permalink / raw)
  To: Andy Lutomirski, x86
  Cc: linux-kernel, Borislav Petkov, Rik van Riel, Nadav Amit,
	Michal Hocko, Sasha Levin, Andrew Morton

On 04/21/2017 11:15 AM, Andy Lutomirski wrote:
> I'm about to rewrite the function almost completely, but first I
> want to get a functional change out of the way.  Currently, if
> flush_tlb_mm_range() does not flush the local TLB at all, it will
> never do individual page flushes on remote CPUs.  This seems to be
> an accident, and preserving it will be awkward.  Let's change it
> first so that any regressions in the rewrite will be easier to
> bisect and so that the rewrite can attempt to change no visible
> behavior at all.
> 
> The fix is simple: we can simply avoid short-circuiting the
> calculation of base_pages_to_flush.

This looks sane to me.  I think it makes things more straightforward.

Acked-by: Dave Hansen <dave.hansen@intel.com>


* Re: [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task()
  2017-04-21 18:15 ` [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task() Andy Lutomirski
@ 2017-04-21 21:59   ` Andy Lutomirski
  0 siblings, 0 replies; 7+ messages in thread
From: Andy Lutomirski @ 2017-04-21 21:59 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: X86 ML, linux-kernel, Borislav Petkov, Rik van Riel, Dave Hansen,
	Nadav Amit, Michal Hocko, Sasha Levin, Andrew Morton

On Fri, Apr 21, 2017 at 11:15 AM, Andy Lutomirski <luto@kernel.org> wrote:
> I was trying to figure out how flush_tlb_current_task() would
> possibly work correctly if current->mm != current->active_mm, but I
> realized I could spare myself the effort: it has no callers except
> the unused flush_tlb() macro.  In fact, it doesn't even exist on
> !SMP builds.

Please disregard this patch for now.  It has issues.

---Andy


end of thread, other threads:[~2017-04-21 22:00 UTC | newest]

Thread overview: 7+ messages
2017-04-21 18:15 [PATCH v2 0/3] x86/mm: Straightforward TLB flush fixes/cleanups Andy Lutomirski
2017-04-21 18:15 ` [PATCH v2 1/3] x86/mm: Remove flush_tlb() and flush_tlb_current_task() Andy Lutomirski
2017-04-21 21:59   ` Andy Lutomirski
2017-04-21 18:15 ` [PATCH v2 2/3] x86/mm: Make flush_tlb_mm_range() more predictable Andy Lutomirski
2017-04-21 18:54   ` Dave Hansen
2017-04-21 18:15 ` [PATCH v2 3/3] x86/mm: Fix flush_tlb_page() on Xen Andy Lutomirski
2017-04-21 18:46   ` Boris Ostrovsky
