* [PATCH 0/3] Fixes on top of the TLB fix in x86/urgent
@ 2017-10-14 16:59 Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 1/3] x86/mm/64: Remove the last VM_BUG_ON from the TLB code Andy Lutomirski
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Andy Lutomirski @ 2017-10-14 16:59 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, Borislav Petkov, Andy Lutomirski

I redid the TLB fix to resolve Boris' comments, and then Ingo applied
the old version :-/

Here are my fixes respun on top of the version in -tip.  If they pass
muster (they're quite straightforward), can one of you get them to Linus?

(If you disagree with Boris, then skip patch 3.)

Andy Lutomirski (3):
  x86/mm/64: Remove the last VM_BUG_ON from the TLB code
  x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode"
  x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm

 arch/x86/include/asm/tlbflush.h | 21 ++++++++++----
 arch/x86/mm/tlb.c               | 64 ++++-------------------------------------
 2 files changed, 21 insertions(+), 64 deletions(-)

-- 
2.13.6

* [PATCH 1/3] x86/mm/64: Remove the last VM_BUG_ON from the TLB code
  2017-10-14 16:59 [PATCH 0/3] Fixes on top of the TLB fix in x86/urgent Andy Lutomirski
@ 2017-10-14 16:59 ` Andy Lutomirski
  2017-10-18 16:23   ` [tip:x86/urgent] x86/mm/64: Remove the last VM_BUG_ON() " tip-bot for Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 2/3] x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode" Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 3/3] x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm Andy Lutomirski
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-10-14 16:59 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, Borislav Petkov, Andy Lutomirski

Let's avoid hard-to-diagnose crashes in the future.
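
For context, here is roughly what the two macros boil down to with
CONFIG_DEBUG_VM=y (a simplified sketch of include/linux/mmdebug.h,
not the literal definitions):

	/* Oopses on failure: kills the task and can take the whole
	 * box with it, i.e. the hard-to-diagnose crash in question. */
	#define VM_BUG_ON(cond)		BUG_ON(cond)

	/* Prints a stack trace on failure and keeps running, so the
	 * bad ctx_id shows up in the logs instead. */
	#define VM_WARN_ON(cond)	(void)WARN_ON(cond)

With CONFIG_DEBUG_VM=n, both expand to nothing (modulo compile-time
checking of the condition), so this only changes debug kernels.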

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/mm/tlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 658bf0090565..7db23f9f804e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -147,8 +147,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.is_lazy, false);
 
 	if (real_prev == next) {
-		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
-			  next->context.ctx_id);
+		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
+			   next->context.ctx_id);
 
 		/*
 		 * We don't currently support having a real mm loaded without
-- 
2.13.6

* [PATCH 2/3] x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode"
  2017-10-14 16:59 [PATCH 0/3] Fixes on top of the TLB fix in x86/urgent Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 1/3] x86/mm/64: Remove the last VM_BUG_ON from the TLB code Andy Lutomirski
@ 2017-10-14 16:59 ` Andy Lutomirski
  2017-10-18 16:23   ` [tip:x86/urgent] " tip-bot for Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 3/3] x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm Andy Lutomirski
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-10-14 16:59 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, Borislav Petkov, Andy Lutomirski

Due to timezone lag, the version of commit b956575bed91 ("x86/mm:
Flush more aggressively in lazy TLB mode") that got applied is
outdated and didn't address Borislav's review comments.  Tidy it up:

 - The name "tlb_use_lazy_mode" was highly confusing.  Change it to
   "tlb_defer_switch_to_init_mm", which describes what it actually
   means (see the caller sketch after this list).

 - Move the static_branch crap into a helper.

 - Improve comments.

Actually removing the debugfs option is in the next patch.
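
To make the rename concrete, here is the shape of the caller (a
condensed sketch of enter_lazy_tlb() as of this series; the real
function lives in arch/x86/mm/tlb.c):

	void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
	{
		if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
			return;

		if (tlb_defer_switch_to_init_mm()) {
			/* Keep the old CR3 loaded and mark this CPU
			 * lazy; we may pay for it with a flush IPI
			 * later. */
			this_cpu_write(cpu_tlbstate.is_lazy, true);
		} else {
			/* Pay for the CR3 switch up front instead. */
			switch_mm(NULL, &init_mm, NULL);
		}
	}

The true branch literally defers the switch to init_mm, hence the
new name.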

Fixes: b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/tlbflush.h |  7 ++++++-
 arch/x86/mm/tlb.c               | 30 ++++++++++++++++++------------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index d362161d3291..0d4a1bb7e303 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -87,7 +87,12 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
  * to init_mm when we switch to a kernel thread (e.g. the idle thread).  If
  * it's false, then we immediately switch CR3 when entering a kernel thread.
  */
-DECLARE_STATIC_KEY_TRUE(tlb_use_lazy_mode);
+DECLARE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
+
+static inline bool tlb_defer_switch_to_init_mm(void)
+{
+	return static_branch_unlikely(&__tlb_defer_switch_to_init_mm);
+}
 
 /*
  * 6 because 6 should be plenty and struct tlb_state will fit in
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 7db23f9f804e..5ee3b59baa85 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -30,7 +30,7 @@
 
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
 
-DEFINE_STATIC_KEY_TRUE(tlb_use_lazy_mode);
+DEFINE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
 
 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 			    u16 *new_asid, bool *need_flush)
@@ -213,6 +213,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 }
 
 /*
+ * Please ignore the name of this function.  It should be called
+ * switch_to_kernel_thread().
+ *
  * enter_lazy_tlb() is a hint from the scheduler that we are entering a
  * kernel thread or other context without an mm.  Acceptable implementations
  * include doing nothing whatsoever, switching to init_mm, or various clever
@@ -227,7 +230,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	if (static_branch_unlikely(&tlb_use_lazy_mode)) {
+	if (tlb_defer_switch_to_init_mm()) {
 		/*
 		 * There's a significant optimization that may be possible
 		 * here.  We have accurate enough TLB flush tracking that we
@@ -632,7 +635,8 @@ static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf,
 {
 	char buf[2];
 
-	buf[0] = static_branch_likely(&tlb_use_lazy_mode) ? '1' : '0';
+	buf[0] = static_branch_likely(&__tlb_defer_switch_to_init_mm)
+		? '1' : '0';
 	buf[1] = '\n';
 
 	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
@@ -647,9 +651,9 @@ static ssize_t tlblazy_write_file(struct file *file,
 		return -EINVAL;
 
 	if (val)
-		static_branch_enable(&tlb_use_lazy_mode);
+		static_branch_enable(&__tlb_defer_switch_to_init_mm);
 	else
-		static_branch_disable(&tlb_use_lazy_mode);
+		static_branch_disable(&__tlb_defer_switch_to_init_mm);
 
 	return count;
 }
@@ -660,23 +664,25 @@ static const struct file_operations fops_tlblazy = {
 	.llseek = default_llseek,
 };
 
-static int __init init_tlb_use_lazy_mode(void)
+static int __init init_tlblazy(void)
 {
 	if (boot_cpu_has(X86_FEATURE_PCID)) {
 		/*
-		 * Heuristic: with PCID on, switching to and from
-		 * init_mm is reasonably fast, but remote flush IPIs
-		 * as expensive as ever, so turn off lazy TLB mode.
+		 * If we have PCID, then switching to init_mm is reasonably
+		 * fast.  If we don't have PCID, then switching to init_mm is
+		 * quite slow, so we default to trying to defer it in the
+		 * hopes that we can avoid it entirely.  The latter approach
+		 * runs the risk of receiving otherwise unnecessary IPIs.
 		 *
 		 * We can't do this in setup_pcid() because static keys
 		 * haven't been initialized yet, and it would blow up
 		 * badly.
 		 */
-		static_branch_disable(&tlb_use_lazy_mode);
+		static_branch_disable(&__tlb_defer_switch_to_init_mm);
 	}
 
-	debugfs_create_file("tlb_use_lazy_mode", S_IRUSR | S_IWUSR,
+	debugfs_create_file("tlb_defer_switch_to_init_mm", S_IRUSR | S_IWUSR,
 			    arch_debugfs_dir, NULL, &fops_tlblazy);
 	return 0;
 }
-late_initcall(init_tlb_use_lazy_mode);
+late_initcall(init_tlblazy);
-- 
2.13.6

* [PATCH 3/3] x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm
  2017-10-14 16:59 [PATCH 0/3] Fixes on top of the TLB fix in x86/urgent Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 1/3] x86/mm/64: Remove the last VM_BUG_ON from the TLB code Andy Lutomirski
  2017-10-14 16:59 ` [PATCH 2/3] x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode" Andy Lutomirski
@ 2017-10-14 16:59 ` Andy Lutomirski
  2017-10-18 16:24   ` [tip:x86/urgent] " tip-bot for Andy Lutomirski
  2 siblings, 1 reply; 7+ messages in thread
From: Andy Lutomirski @ 2017-10-14 16:59 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, Borislav Petkov, Andy Lutomirski

Borislav thinks that we don't need this knob in a released kernel.
Get rid of it.
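
For background on why the heuristic can now be hard-wired to PCID
(illustration only, not part of the patch): with PCID enabled, CR3
carries an address-space ID plus a no-flush bit, so the kernel can
change page tables without discarding the TLB.  Roughly:

	/* Bit 63 of CR3: don't flush the new PCID's TLB entries. */
	#define CR3_NOFLUSH	(1UL << 63)

	/* Hypothetical name for illustration; the kernel's own
	 * helper is build_cr3_noflush(), whose exact signature
	 * varies by version.  ASID 0 is reserved, hence the +1. */
	static unsigned long sketch_build_cr3_noflush(pgd_t *pgd, u16 asid)
	{
		return __pa(pgd) | (asid + 1) | CR3_NOFLUSH;
	}

Without PCID, every CR3 write flushes the TLB, so deferring the
switch (and risking otherwise unnecessary flush IPIs) is the better
trade there.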

Fixes: b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/tlbflush.h | 20 ++++++++------
 arch/x86/mm/tlb.c               | 58 -----------------------------------------
 2 files changed, 12 insertions(+), 66 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 0d4a1bb7e303..c4aed0de565e 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -82,16 +82,20 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 #define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
 #endif
 
-/*
- * If tlb_use_lazy_mode is true, then we try to avoid switching CR3 to point
- * to init_mm when we switch to a kernel thread (e.g. the idle thread).  If
- * it's false, then we immediately switch CR3 when entering a kernel thread.
- */
-DECLARE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
-
 static inline bool tlb_defer_switch_to_init_mm(void)
 {
-	return static_branch_unlikely(&__tlb_defer_switch_to_init_mm);
+	/*
+	 * If we have PCID, then switching to init_mm is reasonably
+	 * fast.  If we don't have PCID, then switching to init_mm is
+	 * quite slow, so we try to defer it in the hopes that we can
+	 * avoid it entirely.  The latter approach runs the risk of
+	 * receiving otherwise unnecessary IPIs.
+	 *
+	 * This choice is just a heuristic.  The tlb code can handle this
+	 * function returning true or false regardless of whether we have
+	 * PCID.
+	 */
+	return !static_cpu_has(X86_FEATURE_PCID);
 }
 
 /*
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5ee3b59baa85..0f3d0cea4d00 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -30,7 +30,6 @@
 
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
 
-DEFINE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
 
 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 			    u16 *new_asid, bool *need_flush)
@@ -629,60 +628,3 @@ static int __init create_tlb_single_page_flush_ceiling(void)
 	return 0;
 }
 late_initcall(create_tlb_single_page_flush_ceiling);
-
-static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf,
-				 size_t count, loff_t *ppos)
-{
-	char buf[2];
-
-	buf[0] = static_branch_likely(&__tlb_defer_switch_to_init_mm)
-		? '1' : '0';
-	buf[1] = '\n';
-
-	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t tlblazy_write_file(struct file *file,
-		 const char __user *user_buf, size_t count, loff_t *ppos)
-{
-	bool val;
-
-	if (kstrtobool_from_user(user_buf, count, &val))
-		return -EINVAL;
-
-	if (val)
-		static_branch_enable(&__tlb_defer_switch_to_init_mm);
-	else
-		static_branch_disable(&__tlb_defer_switch_to_init_mm);
-
-	return count;
-}
-
-static const struct file_operations fops_tlblazy = {
-	.read = tlblazy_read_file,
-	.write = tlblazy_write_file,
-	.llseek = default_llseek,
-};
-
-static int __init init_tlblazy(void)
-{
-	if (boot_cpu_has(X86_FEATURE_PCID)) {
-		/*
-		 * If we have PCID, then switching to init_mm is reasonably
-		 * fast.  If we don't have PCID, then switching to init_mm is
-		 * quite slow, so we default to trying to defer it in the
-		 * hopes that we can avoid it entirely.  The latter approach
-		 * runs the risk of receiving otherwise unnecessary IPIs.
-		 *
-		 * We can't do this in setup_pcid() because static keys
-		 * haven't been initialized yet, and it would blow up
-		 * badly.
-		 */
-		static_branch_disable(&__tlb_defer_switch_to_init_mm);
-	}
-
-	debugfs_create_file("tlb_defer_switch_to_init_mm", S_IRUSR | S_IWUSR,
-			    arch_debugfs_dir, NULL, &fops_tlblazy);
-	return 0;
-}
-late_initcall(init_tlblazy);
-- 
2.13.6

* [tip:x86/urgent] x86/mm/64: Remove the last VM_BUG_ON() from the TLB code
  2017-10-14 16:59 ` [PATCH 1/3] x86/mm/64: Remove the last VM_BUG_ON from the TLB code Andy Lutomirski
@ 2017-10-18 16:23   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andy Lutomirski @ 2017-10-18 16:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, luto, linux-kernel, tglx, hpa, torvalds, peterz, bp

Commit-ID:  e8b9b0cc8269c85d8167aaee024bfcbb4976c031
Gitweb:     https://git.kernel.org/tip/e8b9b0cc8269c85d8167aaee024bfcbb4976c031
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Sat, 14 Oct 2017 09:59:49 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 18 Oct 2017 15:25:02 +0200

x86/mm/64: Remove the last VM_BUG_ON() from the TLB code

Let's avoid hard-to-diagnose crashes in the future.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/f423bbc97864089fbdeb813f1ea126c6eaed844a.1508000261.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/tlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 658bf00..7db23f9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -147,8 +147,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.is_lazy, false);
 
 	if (real_prev == next) {
-		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
-			  next->context.ctx_id);
+		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
+			   next->context.ctx_id);
 
 		/*
 		 * We don't currently support having a real mm loaded without

* [tip:x86/urgent] x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode"
  2017-10-14 16:59 ` [PATCH 2/3] x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode" Andy Lutomirski
@ 2017-10-18 16:23   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andy Lutomirski @ 2017-10-18 16:23 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, tglx, mingo, torvalds, luto, hpa, bp, linux-kernel

Commit-ID:  4e57b94664fef55aa71cac33b4632fdfdd52b695
Gitweb:     https://git.kernel.org/tip/4e57b94664fef55aa71cac33b4632fdfdd52b695
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Sat, 14 Oct 2017 09:59:50 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 18 Oct 2017 15:25:02 +0200

x86/mm: Tidy up "x86/mm: Flush more aggressively in lazy TLB mode"

Due to timezone lag, commit:

  b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")

was an outdated patch that was well tested and fixed the bug, but
didn't address Borislav's review comments.

Tidy it up:

 - The name "tlb_use_lazy_mode()" was highly confusing.  Change it to
   "tlb_defer_switch_to_init_mm()", which describes what it actually
   means.

 - Move the static_branch crap into a helper.

 - Improve comments.

Actually removing the debugfs option is in the next patch.

Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")
Link: http://lkml.kernel.org/r/154ef95428d4592596b6e98b0af1d2747d6cfbf8.1508000261.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/tlbflush.h |  7 ++++++-
 arch/x86/mm/tlb.c               | 30 ++++++++++++++++++------------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index d362161..0d4a1bb 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -87,7 +87,12 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
  * to init_mm when we switch to a kernel thread (e.g. the idle thread).  If
  * it's false, then we immediately switch CR3 when entering a kernel thread.
  */
-DECLARE_STATIC_KEY_TRUE(tlb_use_lazy_mode);
+DECLARE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
+
+static inline bool tlb_defer_switch_to_init_mm(void)
+{
+	return static_branch_unlikely(&__tlb_defer_switch_to_init_mm);
+}
 
 /*
  * 6 because 6 should be plenty and struct tlb_state will fit in
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 7db23f9..5ee3b59 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -30,7 +30,7 @@
 
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
 
-DEFINE_STATIC_KEY_TRUE(tlb_use_lazy_mode);
+DEFINE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
 
 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 			    u16 *new_asid, bool *need_flush)
@@ -213,6 +213,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 }
 
 /*
+ * Please ignore the name of this function.  It should be called
+ * switch_to_kernel_thread().
+ *
  * enter_lazy_tlb() is a hint from the scheduler that we are entering a
  * kernel thread or other context without an mm.  Acceptable implementations
  * include doing nothing whatsoever, switching to init_mm, or various clever
@@ -227,7 +230,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-	if (static_branch_unlikely(&tlb_use_lazy_mode)) {
+	if (tlb_defer_switch_to_init_mm()) {
 		/*
 		 * There's a significant optimization that may be possible
 		 * here.  We have accurate enough TLB flush tracking that we
@@ -632,7 +635,8 @@ static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf,
 {
 	char buf[2];
 
-	buf[0] = static_branch_likely(&tlb_use_lazy_mode) ? '1' : '0';
+	buf[0] = static_branch_likely(&__tlb_defer_switch_to_init_mm)
+		? '1' : '0';
 	buf[1] = '\n';
 
 	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
@@ -647,9 +651,9 @@ static ssize_t tlblazy_write_file(struct file *file,
 		return -EINVAL;
 
 	if (val)
-		static_branch_enable(&tlb_use_lazy_mode);
+		static_branch_enable(&__tlb_defer_switch_to_init_mm);
 	else
-		static_branch_disable(&tlb_use_lazy_mode);
+		static_branch_disable(&__tlb_defer_switch_to_init_mm);
 
 	return count;
 }
@@ -660,23 +664,25 @@ static const struct file_operations fops_tlblazy = {
 	.llseek = default_llseek,
 };
 
-static int __init init_tlb_use_lazy_mode(void)
+static int __init init_tlblazy(void)
 {
 	if (boot_cpu_has(X86_FEATURE_PCID)) {
 		/*
-		 * Heuristic: with PCID on, switching to and from
-		 * init_mm is reasonably fast, but remote flush IPIs
-		 * as expensive as ever, so turn off lazy TLB mode.
+		 * If we have PCID, then switching to init_mm is reasonably
+		 * fast.  If we don't have PCID, then switching to init_mm is
+		 * quite slow, so we default to trying to defer it in the
+		 * hopes that we can avoid it entirely.  The latter approach
+		 * runs the risk of receiving otherwise unnecessary IPIs.
 		 *
 		 * We can't do this in setup_pcid() because static keys
 		 * haven't been initialized yet, and it would blow up
 		 * badly.
 		 */
-		static_branch_disable(&tlb_use_lazy_mode);
+		static_branch_disable(&__tlb_defer_switch_to_init_mm);
 	}
 
-	debugfs_create_file("tlb_use_lazy_mode", S_IRUSR | S_IWUSR,
+	debugfs_create_file("tlb_defer_switch_to_init_mm", S_IRUSR | S_IWUSR,
 			    arch_debugfs_dir, NULL, &fops_tlblazy);
 	return 0;
 }
-late_initcall(init_tlb_use_lazy_mode);
+late_initcall(init_tlblazy);

* [tip:x86/urgent] x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm
  2017-10-14 16:59 ` [PATCH 3/3] x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm Andy Lutomirski
@ 2017-10-18 16:24   ` tip-bot for Andy Lutomirski
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andy Lutomirski @ 2017-10-18 16:24 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, mingo, hpa, bp, torvalds, luto, tglx, peterz

Commit-ID:  7ac7f2c315ef76437f5119df354d334448534fb5
Gitweb:     https://git.kernel.org/tip/7ac7f2c315ef76437f5119df354d334448534fb5
Author:     Andy Lutomirski <luto@kernel.org>
AuthorDate: Sat, 14 Oct 2017 09:59:51 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 18 Oct 2017 15:25:02 +0200

x86/mm: Remove debug/x86/tlb_defer_switch_to_init_mm

Borislav thinks that we don't need this knob in a released kernel.
Get rid of it.

Requested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: b956575bed91 ("x86/mm: Flush more aggressively in lazy TLB mode")
Link: http://lkml.kernel.org/r/1fa72431924e81e86c164ff7881bf9240d1f1a6c.1508000261.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/tlbflush.h | 20 ++++++++------
 arch/x86/mm/tlb.c               | 58 -----------------------------------------
 2 files changed, 12 insertions(+), 66 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 0d4a1bb..c4aed0d 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -82,16 +82,20 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 #define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
 #endif
 
-/*
- * If tlb_use_lazy_mode is true, then we try to avoid switching CR3 to point
- * to init_mm when we switch to a kernel thread (e.g. the idle thread).  If
- * it's false, then we immediately switch CR3 when entering a kernel thread.
- */
-DECLARE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
-
 static inline bool tlb_defer_switch_to_init_mm(void)
 {
-	return static_branch_unlikely(&__tlb_defer_switch_to_init_mm);
+	/*
+	 * If we have PCID, then switching to init_mm is reasonably
+	 * fast.  If we don't have PCID, then switching to init_mm is
+	 * quite slow, so we try to defer it in the hopes that we can
+	 * avoid it entirely.  The latter approach runs the risk of
+	 * receiving otherwise unnecessary IPIs.
+	 *
+	 * This choice is just a heuristic.  The tlb code can handle this
+	 * function returning true or false regardless of whether we have
+	 * PCID.
+	 */
+	return !static_cpu_has(X86_FEATURE_PCID);
 }
 
 /*
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5ee3b59..0f3d0ce 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -30,7 +30,6 @@
 
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
 
-DEFINE_STATIC_KEY_TRUE(__tlb_defer_switch_to_init_mm);
 
 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 			    u16 *new_asid, bool *need_flush)
@@ -629,60 +628,3 @@ static int __init create_tlb_single_page_flush_ceiling(void)
 	return 0;
 }
 late_initcall(create_tlb_single_page_flush_ceiling);
-
-static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf,
-				 size_t count, loff_t *ppos)
-{
-	char buf[2];
-
-	buf[0] = static_branch_likely(&__tlb_defer_switch_to_init_mm)
-		? '1' : '0';
-	buf[1] = '\n';
-
-	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t tlblazy_write_file(struct file *file,
-		 const char __user *user_buf, size_t count, loff_t *ppos)
-{
-	bool val;
-
-	if (kstrtobool_from_user(user_buf, count, &val))
-		return -EINVAL;
-
-	if (val)
-		static_branch_enable(&__tlb_defer_switch_to_init_mm);
-	else
-		static_branch_disable(&__tlb_defer_switch_to_init_mm);
-
-	return count;
-}
-
-static const struct file_operations fops_tlblazy = {
-	.read = tlblazy_read_file,
-	.write = tlblazy_write_file,
-	.llseek = default_llseek,
-};
-
-static int __init init_tlblazy(void)
-{
-	if (boot_cpu_has(X86_FEATURE_PCID)) {
-		/*
-		 * If we have PCID, then switching to init_mm is reasonably
-		 * fast.  If we don't have PCID, then switching to init_mm is
-		 * quite slow, so we default to trying to defer it in the
-		 * hopes that we can avoid it entirely.  The latter approach
-		 * runs the risk of receiving otherwise unnecessary IPIs.
-		 *
-		 * We can't do this in setup_pcid() because static keys
-		 * haven't been initialized yet, and it would blow up
-		 * badly.
-		 */
-		static_branch_disable(&__tlb_defer_switch_to_init_mm);
-	}
-
-	debugfs_create_file("tlb_defer_switch_to_init_mm", S_IRUSR | S_IWUSR,
-			    arch_debugfs_dir, NULL, &fops_tlblazy);
-	return 0;
-}
-late_initcall(init_tlblazy);
