* [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
@ 2020-09-28 12:44 ` Lukas Bulwahn
  0 siblings, 0 replies; 42+ messages in thread
From: Lukas Bulwahn @ 2020-09-28 12:44 UTC (permalink / raw)
  To: Balbir Singh, Thomas Gleixner
  Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety, Lukas Bulwahn

Commit b6724f118d44 ("prctl: Hook L1D flushing in via prctl") adds
validity checks to enable_l1d_flush_for_task(), but introduces some
superfluous local variables in that implementation.

Running make clang-analyzer on x86_64 tinyconfig caught my attention with:

  arch/x86/mm/tlb.c:332:2: warning: Value stored to 'cpu' is never read \
  [clang-analyzer-deadcode.DeadStores]

Compilers detect these superfluous local variables and assignments and
optimize them away anyway, so the resulting binary is identical before
and after this change.

Simplify the code and remove superfluous local variables to make
clang-analyzer happy.

No functional change. No change in binary with supported compilers.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
---
applies cleanly on next-20200925

Balbir, please review and ack.
Thomas, please pick this minor non-urgent clean-up patch into the x86/pti
branch of tip as follow-up to: 
https://lore.kernel.org/lkml/160026187842.15536.285514864386042510.tip-bot2@tip-bot2/

I quickly confirmed that the binary did not change with this change to
the source code: the hash of tlb.o remained the same before and after.

So, in my setup:
md5sum tlb.o
7c7e096bab0fd87bd2c8437d8c7dc3fa  tlb.o

linux-safety, please verify and validate this change.

 arch/x86/mm/tlb.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..90515c04d90a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
 
 int enable_l1d_flush_for_task(struct task_struct *tsk)
 {
-	int cpu, ret = 0, i;
+	int i;
 
 	/*
 	 * Do not enable L1D_FLUSH_OUT if
@@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
 			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
 		return -EINVAL;
 
-	cpu = get_cpu();
+	get_cpu();
 
 	for_each_cpu(i, &tsk->cpus_mask) {
 		if (cpu_data(i).smt_active == true) {
@@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
 
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
 	put_cpu();
-	return ret;
+	return 0;
 }
 
 int disable_l1d_flush_for_task(struct task_struct *tsk)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread
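
For readers unfamiliar with the checker, the DeadStores warning quoted
above fires whenever a value is written to a variable that is never read
afterwards. A minimal, self-contained illustration (hypothetical code,
not from tlb.c):

/* dead_store.c - reproduces the warning class clang-analyzer reports.
 * Analyze with: clang --analyze dead_store.c
 */
static int compute(void)
{
	return 42;
}

int example(void)
{
	int cpu, ret = 0;	/* 'ret' is initialized but never read */

	cpu = compute();	/* warning: value stored to 'cpu' is never read */

	return 0;		/* both dead stores get optimized away anyway */
}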

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-28 12:44 ` Lukas Bulwahn
@ 2020-09-28 20:43   ` Nathan Chancellor
  -1 siblings, 0 replies; 42+ messages in thread
From: Nathan Chancellor @ 2020-09-28 20:43 UTC (permalink / raw)
  To: Lukas Bulwahn
  Cc: Balbir Singh, Thomas Gleixner, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> Commit b6724f118d44 ("prctl: Hook L1D flushing in via prctl") checks the
> validity for enable_l1d_flush_for_task() and introduces some superfluous
> local variables for that implementation.
> 
> make clang-analyzer on x86_64 tinyconfig caught my attention with:
> 
>   arch/x86/mm/tlb.c:332:2: warning: Value stored to 'cpu' is never read \
>   [clang-analyzer-deadcode.DeadStores]
> 
> Compilers will detect these superfluous local variables and assignment and
> optimize this anyway. So, the resulting binary is identical before and
> after this change.
> 
> Simplify the code and remove superfluous local variables to make
> clang-analyzer happy.
> 
> No functional change. No change in binary with supported compilers.
> 
> Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
> ---

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>

> applies cleanly on next-20200925
> 
> Balbir, please review and ack.
> Thomas, please pick this minor non-urgent clean-up patch into the x86/pti
> branch of tip as follow-up to: 
> https://lore.kernel.org/lkml/160026187842.15536.285514864386042510.tip-bot2@tip-bot2/
> 
> I quickly confirmed that the binary did not change with this change to the
> source code; The hash of tlb.o remained the same before and after the change.
> 
> So, in my setup:
> md5sum tlb.o
> 7c7e096bab0fd87bd2c8437d8c7dc3fa  tlb.o
> 
> linux-safety, please verify and validate this change.
> 
>  arch/x86/mm/tlb.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6b0f4c88b07c..90515c04d90a 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
>  
>  int enable_l1d_flush_for_task(struct task_struct *tsk)
>  {
> -	int cpu, ret = 0, i;
> +	int i;
>  
>  	/*
>  	 * Do not enable L1D_FLUSH_OUT if
> @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
>  		return -EINVAL;
>  
> -	cpu = get_cpu();
> +	get_cpu();
>  
>  	for_each_cpu(i, &tsk->cpus_mask) {
>  		if (cpu_data(i).smt_active == true) {
> @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  
>  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
>  	put_cpu();
> -	return ret;
> +	return 0;
>  }
>  
>  int disable_l1d_flush_for_task(struct task_struct *tsk)
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
@ 2020-09-28 20:43   ` Nathan Chancellor
  0 siblings, 0 replies; 42+ messages in thread
From: Nathan Chancellor @ 2020-09-28 20:43 UTC (permalink / raw)
  To: Lukas Bulwahn
  Cc: Balbir Singh, Thomas Gleixner, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> Commit b6724f118d44 ("prctl: Hook L1D flushing in via prctl") checks the
> validity for enable_l1d_flush_for_task() and introduces some superfluous
> local variables for that implementation.
> 
> make clang-analyzer on x86_64 tinyconfig caught my attention with:
> 
>   arch/x86/mm/tlb.c:332:2: warning: Value stored to 'cpu' is never read \
>   [clang-analyzer-deadcode.DeadStores]
> 
> Compilers will detect these superfluous local variables and assignment and
> optimize this anyway. So, the resulting binary is identical before and
> after this change.
> 
> Simplify the code and remove superfluous local variables to make
> clang-analyzer happy.
> 
> No functional change. No change in binary with supported compilers.
> 
> Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
> ---

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>

> applies cleanly on next-20200925
> 
> Balbir, please review and ack.
> Thomas, please pick this minor non-urgent clean-up patch into the x86/pti
> branch of tip as follow-up to: 
> https://lore.kernel.org/lkml/160026187842.15536.285514864386042510.tip-bot2@tip-bot2/
> 
> I quickly confirmed that the binary did not change with this change to the
> source code; The hash of tlb.o remained the same before and after the change.
> 
> So, in my setup:
> md5sum tlb.o
> 7c7e096bab0fd87bd2c8437d8c7dc3fa  tlb.o
> 
> linux-safety, please verify and validate this change.
> 
>  arch/x86/mm/tlb.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6b0f4c88b07c..90515c04d90a 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
>  
>  int enable_l1d_flush_for_task(struct task_struct *tsk)
>  {
> -	int cpu, ret = 0, i;
> +	int i;
>  
>  	/*
>  	 * Do not enable L1D_FLUSH_OUT if
> @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
>  		return -EINVAL;
>  
> -	cpu = get_cpu();
> +	get_cpu();
>  
>  	for_each_cpu(i, &tsk->cpus_mask) {
>  		if (cpu_data(i).smt_active = true) {
> @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  
>  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
>  	put_cpu();
> -	return ret;
> +	return 0;
>  }
>  
>  int disable_l1d_flush_for_task(struct task_struct *tsk)
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-28 12:44 ` Lukas Bulwahn
@ 2020-09-29  7:12   ` Peter Zijlstra
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2020-09-29  7:12 UTC (permalink / raw)
  To: Lukas Bulwahn
  Cc: Balbir Singh, Thomas Gleixner, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6b0f4c88b07c..90515c04d90a 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
>  
>  int enable_l1d_flush_for_task(struct task_struct *tsk)
>  {
> -	int cpu, ret = 0, i;
> +	int i;
>  
>  	/*
>  	 * Do not enable L1D_FLUSH_OUT if
> @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
>  		return -EINVAL;
>  
> -	cpu = get_cpu();
> +	get_cpu();
>  
>  	for_each_cpu(i, &tsk->cpus_mask) {
>  		if (cpu_data(i).smt_active == true) {
> @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  
>  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
>  	put_cpu();
> -	return ret;
> +	return 0;
>  }

If you don't use the return value of get_cpu(), then change it over to
preempt_{dis,en}able(). But this got me looking at the function, and wtf
is that garbage supposed to do in the first place?

What do we need to disable preemption for?

Please explain the desired semantics against sched_setaffinity().

^ permalink raw reply	[flat|nested] 42+ messages in thread
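
For reference, the rewrite this suggestion points at, keeping the merged
semantics but dropping the unused get_cpu() return value, would look
roughly like this (an untested sketch, not a posted patch):

int enable_l1d_flush_for_task(struct task_struct *tsk)
{
	int i;

	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
	    !static_cpu_has(X86_FEATURE_FLUSH_L1D))
		return -EINVAL;

	preempt_disable();			/* was: cpu = get_cpu(); */

	for_each_cpu(i, &tsk->cpus_mask) {
		if (cpu_data(i).smt_active) {
			preempt_enable();	/* was: put_cpu(); */
			return -EINVAL;
		}
	}

	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
	preempt_enable();			/* was: put_cpu(); */
	return 0;
}

As the rest of the thread establishes, the preempt_disable() here still
does not actually protect the cpus_mask walk; the sketch only answers
the stylistic point.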

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-29  7:12   ` Peter Zijlstra
  (?)
@ 2020-09-29  8:33     ` Lukas Bulwahn
  -1 siblings, 0 replies; 42+ messages in thread
From: Lukas Bulwahn @ 2020-09-29  8:33 UTC (permalink / raw)
  To: Peter Zijlstra, Balbir Singh
  Cc: Lukas Bulwahn, Thomas Gleixner, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety



On Tue, 29 Sep 2020, Peter Zijlstra wrote:

> On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> > diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> > index 6b0f4c88b07c..90515c04d90a 100644
> > --- a/arch/x86/mm/tlb.c
> > +++ b/arch/x86/mm/tlb.c
> > @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
> >  
> >  int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  {
> > -	int cpu, ret = 0, i;
> > +	int i;
> >  
> >  	/*
> >  	 * Do not enable L1D_FLUSH_OUT if
> > @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
> >  		return -EINVAL;
> >  
> > -	cpu = get_cpu();
> > +	get_cpu();
> >  
> >  	for_each_cpu(i, &tsk->cpus_mask) {
> >  		if (cpu_data(i).smt_active == true) {
> > @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  
> >  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
> >  	put_cpu();
> > -	return ret;
> > +	return 0;
> >  }
> 
> If you don't use the return value of get_cpu(), then change it over to
> preempt_{dis,en}able(), but this got me looking at the function, wtf is
> that garbage supposed to do in the first place

I also thought that preempt_{dis,en}able() would do, but figured maybe
Balbir just considered {get,put}_cpu() stylistically nicer... so I stayed
with the functions as-is.

> 
> What do we need to disable preemption for?
>

I have no clue... not a good premise for touching the code, but I just 
wanted to make clang-analyzer happy without modifying any semantics.

Balbir, can you help out here? What was your intent?
 
> Please explain the desired semantics against sched_setaffinity().
> 

I am happy to send a proper v2 once I understand whether disabling
preemption is required and whether the preempt_{dis,en}able() functions
are preferred to the {get,put}_cpu() functions.

Lukas

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-29  7:12   ` Peter Zijlstra
@ 2020-09-29  8:37     ` Peter Zijlstra
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2020-09-29  8:37 UTC (permalink / raw)
  To: Lukas Bulwahn
  Cc: Balbir Singh, Thomas Gleixner, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Tue, Sep 29, 2020 at 09:12:11AM +0200, Peter Zijlstra wrote:
> On Mon, Sep 28, 2020 at 02:44:57PM +0200, Lukas Bulwahn wrote:
> > diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> > index 6b0f4c88b07c..90515c04d90a 100644
> > --- a/arch/x86/mm/tlb.c
> > +++ b/arch/x86/mm/tlb.c
> > @@ -316,7 +316,7 @@ EXPORT_SYMBOL_GPL(leave_mm);
> >  
> >  int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  {
> > -	int cpu, ret = 0, i;
> > +	int i;
> >  
> >  	/*
> >  	 * Do not enable L1D_FLUSH_OUT if
> > @@ -329,7 +329,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
> >  		return -EINVAL;
> >  
> > -	cpu = get_cpu();
> > +	get_cpu();
> >  
> >  	for_each_cpu(i, &tsk->cpus_mask) {
> >  		if (cpu_data(i).smt_active == true) {
> > @@ -340,7 +340,7 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
> >  
> >  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
> >  	put_cpu();
> > -	return ret;
> > +	return 0;
> >  }
> 
> If you don't use the return value of get_cpu(), then change it over to
> preempt_{dis,en}able(), but this got me looking at the function, wtf is
> that garbage supposed to do in the first place
> 
> What do we need to disable preemption for?
> 
> Please explain the desired semantics against sched_setaffinity().

Here, I fixed it..

---
 arch/x86/mm/tlb.c | 24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..f02a2f1909da 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -316,31 +316,13 @@ EXPORT_SYMBOL_GPL(leave_mm);
 
 int enable_l1d_flush_for_task(struct task_struct *tsk)
 {
-	int cpu, ret = 0, i;
-
-	/*
-	 * Do not enable L1D_FLUSH_OUT if
-	 * b. The CPU is not affected by the L1TF bug
-	 * c. The CPU does not have L1D FLUSH feature support
-	 * c. The task's affinity is on cores with SMT on.
-	 */
-
 	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
+	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D) ||
+	    sched_smt_active())
 		return -EINVAL;
 
-	cpu = get_cpu();
-
-	for_each_cpu(i, &tsk->cpus_mask) {
-		if (cpu_data(i).smt_active == true) {
-			put_cpu();
-			return -EINVAL;
-		}
-	}
-
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-	put_cpu();
-	return ret;
+	return 0;
 }
 
 int disable_l1d_flush_for_task(struct task_struct *tsk)

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-29  8:37     ` Peter Zijlstra
@ 2020-09-30 15:40       ` Thomas Gleixner
  -1 siblings, 0 replies; 42+ messages in thread
From: Thomas Gleixner @ 2020-09-30 15:40 UTC (permalink / raw)
  To: Peter Zijlstra, Lukas Bulwahn
  Cc: Balbir Singh, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> Here, I fixed it..

Well, no. What Balbir is trying to do here is to establish whether a
task runs on a !SMT core. sched_smt_active() is system wide, but their
setup is to have a bunch of SMT enabled cores and cores where SMT is off
because the sibling is offlined. They affine these processes to non SMT
cores, and the check there validates that before it enables that flush
thingy.

Of course this is best effort voodoo because if all CPUs in the mask are
offlined then the task is moved to a SMT enabled one where L1D flush is
useless. Though offlining their workhorse CPUs is probably not the daily
business for obvious raisins.

Thanks,

        tglx



^ permalink raw reply	[flat|nested] 42+ messages in thread
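
In code terms, the distinction Thomas draws is roughly this (an
illustrative helper; the name is invented here, not taken from the
thread):

/* The per-core question Balbir's loop asks: is SMT off on every CPU
 * this task is allowed to run on? sched_smt_active(), by contrast,
 * answers whether SMT is enabled anywhere in the system. */
static bool task_on_non_smt_cores_only(struct task_struct *tsk)
{
	int i;

	for_each_cpu(i, &tsk->cpus_mask) {
		if (cpu_data(i).smt_active)
			return false;
	}
	return true;
}

On a mixed setup this can return true while sched_smt_active() is true
system wide, which is exactly why Peter's sched_smt_active() version
rejects tasks that Balbir's per-core check would accept.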

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 15:40       ` Thomas Gleixner
  (?)
@ 2020-09-30 16:53         ` Lukas Bulwahn
  -1 siblings, 0 replies; 42+ messages in thread
From: Lukas Bulwahn @ 2020-09-30 16:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Peter Zijlstra, Lukas Bulwahn, Balbir Singh, Dave Hansen,
	Andy Lutomirski, Ingo Molnar, Borislav Petkov, x86,
	H. Peter Anvin, Nathan Chancellor, Nick Desaulniers,
	linux-kernel, clang-built-linux, kernel-janitors, linux-safety



On Wed, 30 Sep 2020, Thomas Gleixner wrote:

> On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> > Here, I fixed it..
> 
> Well, no. What Balbir is trying to do here is to establish whether a
> task runs on a !SMT core. sched_smt_active() is system wide, but their
> setup is to have a bunch of SMT enabled cores and cores where SMT is off
> because the sibling is offlined. They affine these processes to non SMT
> cores and the check there validates that before it enabled that flush
> thingy.
> 
> Of course this is best effort voodoo because if all CPUs in the mask are
> offlined then the task is moved to a SMT enabled one where L1D flush is
> useless. Though offlining their workhorse CPUs is probably not the daily
> business for obvious raisins.
>

Thanks, Thomas.

So, I will keep the semantics as-is, clean up the patch with 
preempt_{dis,en}able() and send out a v2.

Lukas 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 15:40       ` Thomas Gleixner
@ 2020-09-30 17:03         ` Peter Zijlstra
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2020-09-30 17:03 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Lukas Bulwahn, Balbir Singh, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> On Tue, Sep 29 2020 at 10:37, Peter Zijlstra wrote:
> > Here, I fixed it..
> 
> Well, no. What Balbir is trying to do here is to establish whether a
> task runs on a !SMT core. sched_smt_active() is system wide, but their
> setup is to have a bunch of SMT enabled cores and cores where SMT is off
> because the sibling is offlined. They affine these processes to non SMT
> cores and the check there validates that before it enabled that flush
> thingy.

Yes, I see that it does that. But it's still complete shit.

> Of course this is best effort voodoo because if all CPUs in the mask are
> offlined then the task is moved to a SMT enabled one where L1D flush is
> useless. Though offlining their workhorse CPUs is probably not the daily
> business for obvious raisins.

Not only hotplug, you can trivially change the affinity after this
check.

Also, that preempt_disable() in there doesn't actually do anything.
Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
static_cpu_has() and boot_cpu_has() in the same bloody condition and has
a pointless ret variable.

It's shoddy code that only works if you align the planets right. We
really shouldn't provide interfaces that are this bad.

Its correct operation is only by accident.

^ permalink raw reply	[flat|nested] 42+ messages in thread
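
The mix Peter calls out is the condition from the merged code; annotated
(comments added here for illustration), it reads:

	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||		/* plain bitmap test */
	    !static_cpu_has(X86_FEATURE_FLUSH_L1D))	/* patched static branch */
		return -EINVAL;

Either form of feature test works on its own; using both in a single
condition is the inconsistency. Peter's fixed version earlier in the
thread settles on boot_cpu_has() for both.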

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 17:03         ` Peter Zijlstra
@ 2020-09-30 18:00           ` Thomas Gleixner
  -1 siblings, 0 replies; 42+ messages in thread
From: Thomas Gleixner @ 2020-09-30 18:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Lukas Bulwahn, Balbir Singh, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> Also, that preempt_disable() in there doesn't actually do anything.
> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
> a pointless ret variable.

I absolutely agree and I really missed it when looking at it before
merging. cpus_read_lock()/unlock() is the right thing to do if at all.

> It's shoddy code that only works if you align the planets right. We
> really shouldn't provide interfaces that are this bad.
>
> Its correct operation is only by accident.

True :(

I understand Balbirs problem and it makes some sense to provide a
solution. We can:

    1) reject set_affinity() if the task has that flush muck enabled
       and user space tries to move it to a SMT enabled core

    2) disable the muck if it detects that it suddenly runs on a SMT
       enabled core (hotplug says hello)

       This one is nasty because there is no feedback to user space
       about the wreckage.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 18:00           ` Thomas Gleixner
@ 2020-09-30 18:35             ` Peter Zijlstra
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2020-09-30 18:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Lukas Bulwahn, Balbir Singh, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
> > On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
> > Also, that preempt_disable() in there doesn't actually do anything.
> > Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
> > static_cpu_has() and boot_cpu_has() in the same bloody condition and has
> > a pointless ret variable.

Also, I forgot to add, it accesses ->cpus_mask without the proper
locking, so it could be reading intermediate state from whatever cpumask
operation is in progress.

> I absolutely agree and I really missed it when looking at it before
> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
> 
> > It's shoddy code that only works if you align the planets right. We
> > really shouldn't provide interfaces that are this bad.
> >
> > Its correct operation is only by accident.
> 
> True :(
> 
> I understand Balbirs problem and it makes some sense to provide a
> solution. We can:
> 
>     1) reject set_affinity() if the task has that flush muck enabled
>        and user space tries to move it to a SMT enabled core
> 
>     2) disable the muck if it detects that it suddenly runs on a SMT
>        enabled core (hotplug says hello)
> 
>        This one is nasty because there is no feedback to user space
>        about the wreckage.

That's an AND, right, not an OR, because 1) deals with sched_setaffinity()
and 2) deals with hotplug.

Now 1) requires an arch hook in sched_setaffinity(), something I'm not
keen on providing, once we provide it, who knows what strange and
wonderful things archs will dream up.

And 2) also happens on hot-un-plug, when the task's affinity gets
forced because it became empty. No user feedback there either, and
information is lost.


I suppose we can do 2) but send a signal. That would cover all cases and
keep it in arch code. But yes, that's pretty terrible too.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 18:35             ` Peter Zijlstra
@ 2020-09-30 21:38               ` Thomas Gleixner
  -1 siblings, 0 replies; 42+ messages in thread
From: Thomas Gleixner @ 2020-09-30 21:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Lukas Bulwahn, Balbir Singh, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> > On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> > Also, that preempt_disable() in there doesn't actually do anything.
>> > Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> > static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> > a pointless ret variable.
>
> Also, I forgot to add, it accesses ->cpus_mask without the proper
> locking, so it could be reading intermediate state from whatever cpumask
> operation is in progress.

Yes. I saw that after hitting send. :(

>> I absolutely agree and I really missed it when looking at it before
>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>> 
>> > It's shoddy code that only works if you align the planets right. We
>> > really shouldn't provide interfaces that are this bad.
>> >
>> > Its correct operation is only by accident.
>> 
>> True :(
>> 
>> I understand Balbirs problem and it makes some sense to provide a
>> solution. We can:
>> 
>>     1) reject set_affinity() if the task has that flush muck enabled
>>        and user space tries to move it to a SMT enabled core
>> 
>>     2) disable the muck if it detects that it suddenly runs on a SMT
>>        enabled core (hotplug says hello)
>> 
>>        This one is nasty because there is no feedback to user space
>>        about the wreckage.
>
> That's an AND, right, not an OR, because 1) deals with sched_setaffinity()
> and 2) deals with hotplug.

It was meant as AND of course.

> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
> keen on providing, once we provide it, who knows what strange and
> wonderful things archs will dream up.

I don't think so. We can have that magic in core:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
        	return true;
        /* Do magic stuff */
        return res;
}
#else
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	return true;
}
#endif

It's a pretty well defined problem and having the magic in core code
prevents an arch hook which allows abuse of all sorts.

And the same applies to enable_l1d_flush_for_task(). The only
architecture specific nonsense is checking whether the CPU bug is
there and whether the hardware supports L1D flushing.

So we can have:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
int paranoid_l1d_enable(struct task_struct *tsk)
{
        /* Do the SMT validation under the proper locks */
        if (!res)
        	set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
        return res;
}
#endif

> And 2) also happens on hot-un-plug, when the task's affinity gets
> forced because it became empty. No user feedback there either, and
> information is lost.

Of course. It's both: suddenly SMT gets enabled on a core which was
isolated, and the last isolated core in the task's CPU mask goes
offline.

> I suppose we can do 2) but send a signal. That would cover all cases and
> keep it in arch code. But yes, that's pretty terrible too.

Bah. I just looked at the condition to flush:

        if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
                (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

That fails to flush when SMT is disabled globally. Balbir?

Of course this should be:

        if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

Now we can make this:

        if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
        	if (!this_cpu_read(cpu_info.smt_active))
                	l1d_flush_hw();
                else
                	task_work_add(...);

And that task work clears the flag and sends a signal. We're not going
to send a signal from switch_mm() ....

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 42+ messages in thread
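
Filled out, the task_work idea above could look something like the
sketch below. The callback name, the choice of SIGBUS and the per-task
work storage are assumptions made here, not from the thread, and
task_work_add() is used with its v5.9-era bool-notify signature:

/* Runs in task context on the way back to user space, where sending
 * a signal is safe (unlike in switch_mm() itself). */
static void l1d_flush_kill(struct callback_head *ch)
{
	clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
	force_sig(SIGBUS);
}

	/* From the else branch above, instead of flushing: */
	init_task_work(&tsk->l1d_flush_work, l1d_flush_kill);
	task_work_add(tsk, &tsk->l1d_flush_work, true);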

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
@ 2020-09-30 21:38               ` Thomas Gleixner
  0 siblings, 0 replies; 42+ messages in thread
From: Thomas Gleixner @ 2020-09-30 21:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Lukas Bulwahn, Balbir Singh, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety

On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> > On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> > Also, that preempt_disable() in there doesn't actually do anything.
>> > Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> > static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> > a pointless ret variable.
>
> Also, I forgot to add, it accesses ->cpus_mask without the proper
> locking, so it could be reading intermediate state from whatever cpumask
> operation that's in progress.

Yes. I saw that after hitting send. :(

>> I absolutely agree and I really missed it when looking at it before
>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>> 
>> > It's shoddy code, that only works if you align the planets right. We
>> > really shouldn't provide interfaces that are this bad.
>> >
>> > It's correct operation is only by accident.
>> 
>> True :(
>> 
>> I understand Balbirs problem and it makes some sense to provide a
>> solution. We can:
>> 
>>     1) reject set_affinity() if the task has that flush muck enabled
>>        and user space tries to move it to a SMT enabled core
>> 
>>     2) disable the muck if if detects that it is runs on a SMT enabled
>>        core suddenly (hotplug says hello)
>> 
>>        This one is nasty because there is no feedback to user space
>>        about the wreckage.
>
> That's and, right, not or. because 1) deals with sched_setffinity()
> and 2) deals wit hotplug.

It was meant as AND of course.

> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
> keen on providing, once we provide it, who knows what strange and
> wonderful things archs will dream up.

I don't think so. We can have that magic in core:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
        	return true;
        /* Do magic stuff */
        return res;
}
#else
static bool paranoid_l1d_valid(struct task_struct *tsk,
                               const struct cpumask *msk)
{
	return true;
}
#endif

It's a pretty well defined problem and having the magic in core code
prevents an arch hook which allows abuse of all sorts.

And the same applies to enable_l1d_flush_for_task(). The only
architecture specific nonsense are the checks whether the CPU bug is
there and whether the hardware supports L1D flushing.

So we can have:

#ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
int paranoid_l1d_enable(struct task_struct *tsk)
{
        /* Do the SMT validation under the proper locks */
        if (!res)
        	set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
        return res;
}
#endif

> And 2) also happens on hot-un-plug, when the task's affinity gets
> forced because it became empty. No user feedback there either, and
> information is lost.

Of course. It's both that suddenly SMT gets enabled on a core which was
isolated and when the last isolated core in the tasks CPU mask goes
offline.

> I suppose we can do 2) but send a signal. That would cover all cases and
> keep it in arch code. But yes, that's pretty terrible too.

Bah. I just looked at the condition to flush:

        if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
                (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

That fails to flush when SMT is disabled globally. Balbir?

Of course this should be:

        if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
                l1d_flush_hw();

Now we can make this:

        if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
        	if (!this_cpu_read(cpu_info.smt_active))
                	l1d_flush_hw();
                else
                	task_work_add(...);

And that task work clears the flag and sends a signal. We're not going
to send a signal from switch_mm() ....
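
A minimal sketch of such a task work handler (illustrative only; Balbir
posts a concrete variant further down the thread):

	static void l1d_flush_kill(struct callback_head *ch)
	{
		/*
		 * Runs in task context on return to user space, where
		 * sending a signal is safe, unlike in switch_mm().
		 */
		clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
		force_sig(SIGBUS);
	}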

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 18:00           ` Thomas Gleixner
@ 2020-09-30 22:46             ` Singh, Balbir
  0 siblings, 0 replies; 42+ messages in thread
From: Singh, Balbir @ 2020-09-30 22:46 UTC (permalink / raw)
  To: Thomas Gleixner, Peter Zijlstra
  Cc: Lukas Bulwahn, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On 1/10/20 4:00 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> Also, that preempt_disable() in there doesn't actually do anything.
>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> a pointless ret variable.
> 

I was being a bit crazy in mixing the two, considering that there might
be CPUs that do not support L1D flush while others in the same system do
(which would be insane).

> I absolutely agree and I really missed it when looking at it before
> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
> 

It seems like the right thing to do; get_cpu() was the old method. The
idea is to access cpu_data(i) in a hotplug-safe manner.

>> It's shoddy code, that only works if you align the planets right. We
>> really shouldn't provide interfaces that are this bad.
>>
>> Its correct operation is only by accident.
> 
> True :(
> 
> I understand Balbir's problem and it makes some sense to provide a
> solution. We can:
> 
>     1) reject set_affinity() if the task has that flush muck enabled
>        and user space tries to move it to a SMT enabled core

I thought of this, but it would be difficult for users to debug; taskset -c
would not work on applications that flush, etc.

> 
>     2) disable the muck if it detects that it runs on a SMT enabled
>        core suddenly (hotplug says hello)
> 
>        This one is nasty because there is no feedback to user space
>        about the wreckage.

Yes, agreed.

> 

Trying to look at the concerns, I wonder if this can still be saved:

-       if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-                       !static_cpu_has(X86_FEATURE_FLUSH_L1D))
+       if (!static_cpu_has_bug(X86_BUG_L1TF) ||
+               !static_cpu_has(X86_FEATURE_FLUSH_L1D))
                return -EINVAL;

-       cpu = get_cpu();

+       cpus_read_lock();
        for_each_cpu(i, &tsk->cpus_mask) {
                if (cpu_data(i).smt_active == true) {
-                       put_cpu();
+                       cpus_read_unlock();
                        return -EINVAL;
                }
        }
+       cpus_read_unlock();

        set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-       put_cpu();
        return ret;
 }

I don't like the idea of iterating CPUs in the cpumask to check if they
all have SMT disabled, but that is a requirement for the flush.

Balbir

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 21:38               ` Thomas Gleixner
@ 2020-09-30 22:59                 ` Singh, Balbir
  0 siblings, 0 replies; 42+ messages in thread
From: Singh, Balbir @ 2020-09-30 22:59 UTC (permalink / raw)
  To: Thomas Gleixner, Peter Zijlstra
  Cc: Lukas Bulwahn, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On 1/10/20 7:38 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>>>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>>>> Also, that preempt_disable() in there doesn't actually do anything.
>>>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>>>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>>>> a pointless ret variable.
>>
>> Also, I forgot to add, it accesses ->cpus_mask without the proper
>> locking, so it could be reading intermediate state from whatever cpumask
>> operation that's in progress.
> 
> Yes. I saw that after hitting send. :(
> 
>>> I absolutely agree and I really missed it when looking at it before
>>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>>>
>>>> It's shoddy code, that only works if you align the planets right. We
>>>> really shouldn't provide interfaces that are this bad.
>>>>
>>>> Its correct operation is only by accident.
>>>
>>> True :(
>>>
>>> I understand Balbir's problem and it makes some sense to provide a
>>> solution. We can:
>>>
>>>     1) reject set_affinity() if the task has that flush muck enabled
>>>        and user space tries to move it to a SMT enabled core
>>>
>>>     2) disable the muck if it detects that it runs on a SMT enabled
>>>        core suddenly (hotplug says hello)
>>>
>>>        This one is nasty because there is no feedback to user space
>>>        about the wreckage.
>>
>> That's an AND, right, not an OR, because 1) deals with sched_setaffinity()
>> and 2) deals with hotplug.
> 
> It was meant as AND of course.
> 
>> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
>> keen on providing, once we provide it, who knows what strange and
>> wonderful things archs will dream up.
> 
> I don't think so. We can have that magic in core:
> 
> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
> static bool paranoid_l1d_valid(struct task_struct *tsk,
>                                const struct cpumask *msk)
> {
>         if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
>                 return true;
>         /* Do magic stuff */
>         return res;
> }
> #else
> static bool paranoid_l1d_valid(struct task_struct *tsk,
>                                const struct cpumask *msk)
> {
>         return true;
> }
> #endif
> 
> It's a pretty well defined problem and having the magic in core code
> prevents an arch hook which allows abuse of all sorts.
> 
> And the same applies to enable_l1d_flush_for_task(). The only
> architecture specific nonsense are the checks whether the CPU bug is
> there and whether the hardware supports L1D flushing.
> 
> So we can have:
> 
> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
> int paranoid_l1d_enable(struct task_struct *tsk)
> {
>         /* Do the SMT validation under the proper locks */
>         if (!res)
>                 set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
>         return res;
> }
> #endif
> 
>> And 2) also happens on hot-un-plug, when the task's affinity gets
>> forced because it became empty. No user feedback there either, and
>> information is lost.
> 
> Of course. It's both that suddenly SMT gets enabled on a core which was
> isolated and when the last isolated core in the task's CPU mask goes
> offline.
> 
>> I suppose we can do 2) but send a signal. That would cover all cases and
>> keep it in arch code. But yes, that's pretty terrible too.
> 
> Bah. I just looked at the condition to flush:
> 
>         if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
>                 (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> That fails to flush when SMT is disabled globally. Balbir?

It should have been 

!sched_smt_active() || (cond)

and not

sched_smt_active() && (cond)

I'll fix that up, but your simplification below works as well.

> 
> Of course this should be:
> 
>         if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> Now we can make this:
> 
>         if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>                 if (!this_cpu_read(cpu_info.smt_active))
>                         l1d_flush_hw();
>                 else
>                         task_work_add(...);
> 
> And that task work clears the flag and sends a signal. We're not going
> to send a signal from switch_mm() ....
> 

Yes, I see MCE handling uses a similar pattern, so SIGBUS for a task that
migrates/moves to an SMT enabled core?

Thanks,
Balbir


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 21:38               ` Thomas Gleixner
@ 2020-09-30 23:49                 ` Singh, Balbir
  0 siblings, 0 replies; 42+ messages in thread
From: Singh, Balbir @ 2020-09-30 23:49 UTC (permalink / raw)
  To: Thomas Gleixner, Peter Zijlstra
  Cc: Lukas Bulwahn, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On 1/10/20 7:38 am, Thomas Gleixner wrote:

> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>>>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>>>> Also, that preempt_disable() in there doesn't actually do anything.
>>>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>>>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>>>> a pointless ret variable.
>>
>> Also, I forgot to add, it accesses ->cpus_mask without the proper
>> locking, so it could be reading intermediate state from whatever cpumask
>> operation that's in progress.
> 
> Yes. I saw that after hitting send. :(
> 
>>> I absolutely agree and I really missed it when looking at it before
>>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>>>
>>>> It's shoddy code, that only works if you align the planets right. We
>>>> really shouldn't provide interfaces that are this bad.
>>>>
>>>> Its correct operation is only by accident.
>>>
>>> True :(
>>>
>>> I understand Balbir's problem and it makes some sense to provide a
>>> solution. We can:
>>>
>>>     1) reject set_affinity() if the task has that flush muck enabled
>>>        and user space tries to move it to a SMT enabled core
>>>
>>>     2) disable the muck if it detects that it runs on a SMT enabled
>>>        core suddenly (hotplug says hello)
>>>
>>>        This one is nasty because there is no feedback to user space
>>>        about the wreckage.
>>
>> That's an AND, right, not an OR, because 1) deals with sched_setaffinity()
>> and 2) deals with hotplug.
> 
> It was meant as AND of course.
> 
>> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
>> keen on providing, once we provide it, who knows what strange and
>> wonderful things archs will dream up.
> 
> I don't think so. We can have that magic in core:
> 
> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
> static bool paranoid_l1d_valid(struct task_struct *tsk,
>                                const struct cpumask *msk)
> {
>         if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
>                 return true;
>         /* Do magic stuff */
>         return res;
> }
> #else
> static bool paranoid_l1d_valid(struct task_struct *tsk,
>                                const struct cpumask *msk)
> {
>         return true;
> }
> #endif
> 
> It's a pretty well defined problem and having the magic in core code
> prevents an arch hook which allows abuse of all sorts.
> 
> And the same applies to enable_l1d_flush_for_task(). The only
> architecture specific nonsense are the checks whether the CPU bug is
> there and whether the hardware supports L1D flushing.
> 
> So we can have:
> 
> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
> int paranoid_l1d_enable(struct task_struct *tsk)
> {
>         /* Do the SMT validation under the proper locks */
>         if (!res)
>                 set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
>         return res;
> }
> #endif
> 
>> And 2) also happens on hot-un-plug, when the task's affinity gets
>> forced because it became empty. No user feedback there either, and
>> information is lost.
> 
> Of course. It's both that suddenly SMT gets enabled on a core which was
> isolated and when the last isolated core in the task's CPU mask goes
> offline.
> 
>> I suppose we can do 2) but send a signal. That would cover all cases and
>> keep it in arch code. But yes, that's pretty terrible too.
> 
> Bah. I just looked at the condition to flush:
> 
>         if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
>                 (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> That fails to flush when SMT is disabled globally. Balbir?
> 
> Of course this should be:
> 
>         if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
>                 l1d_flush_hw();
> 
> Now we can make this:
> 
>         if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>                 if (!this_cpu_read(cpu_info.smt_active))
>                         l1d_flush_hw();
>                 else
>                         task_work_add(...);
> 
> And that task work clears the flag and sends a signal. We're not going
> to send a signal from switch_mm() ....
> 
> Thanks,
> 


So this is the change I am playing with. I don't like the idea of killing
the task, but it's better than silently not flushing; I guess system
administrators will learn with time not to change the affinity of tasks
flushing L1D. For the affinity bits, not being able to change the affinity
is better, but not being able to provide feedback as to why is a bit weird
as well. I wonder if there are other cases where we might want to lock the
affinity of a task for its lifetime.

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 6b0f4c88b07c..6b0d0a9cd447 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -320,26 +320,15 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
 
 	/*
 	 * Do not enable L1D_FLUSH_OUT if
-	 * b. The CPU is not affected by the L1TF bug
-	 * c. The CPU does not have L1D FLUSH feature support
-	 * c. The task's affinity is on cores with SMT on.
+	 * a. The CPU is not affected by the L1TF bug
+	 * b. The CPU does not have L1D FLUSH feature support
 	 */
 
 	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
+		!boot_cpu_has(X86_FEATURE_FLUSH_L1D))
 		return -EINVAL;
 
-	cpu = get_cpu();
-
-	for_each_cpu(i, &tsk->cpus_mask) {
-		if (cpu_data(i).smt_active == true) {
-			put_cpu();
-			return -EINVAL;
-		}
-	}
-
 	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-	put_cpu();
 	return ret;
 }
 
@@ -349,6 +338,12 @@ int disable_l1d_flush_for_task(struct task_struct *tsk)
 	return 0;
 }
 
+static void l1d_flush_kill(struct callback_head *ch)
+{
+	clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
+	force_sig(SIGBUS);
+}
+
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	       struct task_struct *tsk)
 {
@@ -443,12 +438,14 @@ static void cond_mitigation(struct task_struct *next)
 	}
 
 	/*
-	 * Flush only if SMT is disabled as per the contract, which is checked
-	 * when the feature is enabled.
+	 * Flush only if SMT is disabled, if flushing is enabled
+	 * and we are on an SMT enabled core, kill the task
 	 */
-	if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
-		(prev_mm & LAST_USER_MM_L1D_FLUSH))
-		l1d_flush_hw();
+	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
+		if (!this_cpu_read(cpu_info.smt_active))
+			l1d_flush_hw();
+		else
+			task_work_add(prev, l1d_flush_kill, true);
+	}
 
 	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);
 }

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 23:49                 ` Singh, Balbir
@ 2020-10-01  0:48                   ` Singh, Balbir
  0 siblings, 0 replies; 42+ messages in thread
From: Singh, Balbir @ 2020-10-01  0:48 UTC (permalink / raw)
  To: Thomas Gleixner, Peter Zijlstra
  Cc: Lukas Bulwahn, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On 1/10/20 9:49 am, Singh, Balbir wrote:
> On 1/10/20 7:38 am, Thomas Gleixner wrote:
> 
>> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote:
>>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote:
>>>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>>>>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>>>>> Also, that preempt_disable() in there doesn't actually do anything.
>>>>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>>>>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>>>>> a pointless ret variable.
>>>
>>> Also, I forgot to add, it accesses ->cpus_mask without the proper
>>> locking, so it could be reading intermediate state from whatever cpumask
>>> operation that's in progress.
>>
>> Yes. I saw that after hitting send. :(
>>
>>>> I absolutely agree and I really missed it when looking at it before
>>>> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
>>>>
>>>>> It's shoddy code, that only works if you align the planets right. We
>>>>> really shouldn't provide interfaces that are this bad.
>>>>>
>>>>> Its correct operation is only by accident.
>>>>
>>>> True :(
>>>>
>>>> I understand Balbir's problem and it makes some sense to provide a
>>>> solution. We can:
>>>>
>>>>     1) reject set_affinity() if the task has that flush muck enabled
>>>>        and user space tries to move it to a SMT enabled core
>>>>
>>>>     2) disable the muck if it detects that it runs on a SMT enabled
>>>>        core suddenly (hotplug says hello)
>>>>
>>>>        This one is nasty because there is no feedback to user space
>>>>        about the wreckage.
>>>
>>> That's an AND, right, not an OR, because 1) deals with sched_setaffinity()
>>> and 2) deals with hotplug.
>>
>> It was meant as AND of course.
>>
>>> Now 1) requires an arch hook in sched_setaffinity(), something I'm not
>>> keen on providing, once we provide it, who knows what strange and
>>> wonderful things archs will dream up.
>>
>> I don't think so. We can have that magic in core:
>>
>> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
>> static bool paranoid_l1d_valid(struct task_struct *tsk,
>>                                const struct cpumask *msk)
>> {
>>         if (!test_tsk_thread_flag(tsk, TIF_SPEC_L1D_FLUSH))
>>                 return true;
>>         /* Do magic stuff */
>>         return res;
>> }
>> #else
>> static bool paranoid_l1d_valid(struct task_struct *tsk,
>>                                const struct cpumask *msk)
>> {
>>         return true;
>> }
>> #endif
>>
>> It's a pretty well defined problem and having the magic in core code
>> prevents an arch hook which allows abuse of all sorts.
>>
>> And the same applies to enable_l1d_flush_for_task(). The only
>> architecture specific nonsense are the checks whether the CPU bug is
>> there and whether the hardware supports L1D flushing.
>>
>> So we can have:
>>
>> #ifdef CONFIG_HAS_PARANOID_L1D_FLUSH
>> int paranoid_l1d_enable(struct task_struct *tsk)
>> {
>>         /* Do the SMT validation under the proper locks */
>>         if (!res)
>>                 set_task_thread_flag(tsk, TIF_SPEC_L1D_FLUSH);
>>         return res;
>> }
>> #endif
>>
>>> And 2) also happens on hot-un-plug, when the task's affinity gets
>>> forced because it became empty. No user feedback there either, and
>>> information is lost.
>>
>> Of course. It's both that suddenly SMT gets enabled on a core which was
>> isolated and when the last isolated core in the task's CPU mask goes
>> offline.
>>
>>> I suppose we can do 2) but send a signal. That would cover all cases and
>>> keep it in arch code. But yes, that's pretty terrible too.
>>
>> Bah. I just looked at the condition to flush:
>>
>>         if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
>>                 (prev_mm & LAST_USER_MM_L1D_FLUSH))
>>                 l1d_flush_hw();
>>
>> That fails to flush when SMT is disabled globally. Balbir?
>>
>> Of course this should be:
>>
>>         if (!this_cpu_read(cpu_info.smt_active) && (prev_mm & LAST_USER_MM_L1D_FLUSH))
>>                 l1d_flush_hw();
>>
>> Now we can make this:
>>
>>         if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>>                 if (!this_cpu_read(cpu_info.smt_active))
>>                         l1d_flush_hw();
>>                 else
>>                         task_work_add(...);
>>
>> And that task work clears the flag and sends a signal. We're not going
>> to send a signal from switch_mm() ....
>>
>> Thanks,
>>
> 
> 
> So this is the change I am playing with. I don't like the idea of killing
> the task, but it's better than silently not flushing; I guess system
> administrators will learn with time not to change the affinity of tasks
> flushing L1D. For the affinity bits, not being able to change the affinity
> is better, but not being able to provide feedback as to why is a bit weird
> as well. I wonder if there are other cases where we might want to lock the
> affinity of a task for its lifetime.
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6b0f4c88b07c..6b0d0a9cd447 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -320,26 +320,15 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
>  
>  	/*
>  	 * Do not enable L1D_FLUSH_OUT if
> -	 * b. The CPU is not affected by the L1TF bug
> -	 * c. The CPU does not have L1D FLUSH feature support
> -	 * c. The task's affinity is on cores with SMT on.
> +	 * a. The CPU is not affected by the L1TF bug
> +	 * b. The CPU does not have L1D FLUSH feature support
>  	 */
>  
>  	if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
> -			!static_cpu_has(X86_FEATURE_FLUSH_L1D))
> +		!boot_cpu_has(X86_FEATURE_FLUSH_L1D))
>  		return -EINVAL;
>  
> -	cpu = get_cpu();
> -
> -	for_each_cpu(i, &tsk->cpus_mask) {
> -		if (cpu_data(i).smt_active == true) {
> -			put_cpu();
> -			return -EINVAL;
> -		}
> -	}
> -
>  	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
> -	put_cpu();
>  	return ret;
>  }
>  
> @@ -349,6 +338,12 @@ int disable_l1d_flush_for_task(struct task_struct *tsk)
>  	return 0;
>  }
>  
> +static void l1d_flush_kill(struct callback_head *ch)
> +{
> +	clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
> +	force_sig(SIGBUS);
> +}
> +
>  void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  	       struct task_struct *tsk)
>  {
> @@ -443,12 +438,14 @@ static void cond_mitigation(struct task_struct *next)
>  	}
>  
>  	/*
> -	 * Flush only if SMT is disabled as per the contract, which is checked
> -	 * when the feature is enabled.
> +	 * Flush only if SMT is disabled, if flushing is enabled
> +	 * and we are on an SMT enabled core, kill the task
>  	 */
> -	if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
> -		(prev_mm & LAST_USER_MM_L1D_FLUSH))
> -		l1d_flush_hw();
> +	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
> +		if (!this_cpu_read(cpu_info.smt_active))
> +			l1d_flush_hw();
> +		else
> +			task_work_add(prev, l1d_flush_kill, true);

We have no access to the previous task, and mm->owner depends on MEMCG :)
We can do the magic in mm_mangle_tif_spec_bits(), I suppose.
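
As a rough sketch of that idea (hypothetical: 'l1d_flush_work' would be a
new callback_head in task_struct, and the check runs where the incoming
task is still at hand):

	/*
	 * In mm_mangle_tif_spec_bits(), 'next' is available, so the SMT
	 * check can queue the SIGBUS task work against the incoming task
	 * instead of the unavailable previous one.
	 */
	if (test_tsk_thread_flag(next, TIF_SPEC_L1D_FLUSH) &&
	    this_cpu_read(cpu_info.smt_active))
		task_work_add(next, &next->l1d_flush_work, true);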

Balbir



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-10-01  0:48                   ` Singh, Balbir
@ 2020-10-01  8:17                     ` Thomas Gleixner
  -1 siblings, 0 replies; 42+ messages in thread
From: Thomas Gleixner @ 2020-10-01  8:17 UTC (permalink / raw)
  To: Singh, Balbir, Peter Zijlstra
  Cc: Lukas Bulwahn, Dave Hansen, Andy Lutomirski, Ingo Molnar,
	Borislav Petkov, x86, H. Peter Anvin, Nathan Chancellor,
	Nick Desaulniers, linux-kernel, clang-built-linux,
	kernel-janitors, linux-safety

On Thu, Oct 01 2020 at 10:48, Balbir Singh wrote:
> On 1/10/20 9:49 am, Singh, Balbir wrote:
>>  
>> +static void l1d_flush_kill(struct callback_head *ch)
>> +{
>> +	clear_ti_thread_flag(&current->thread_info, TIF_SPEC_L1D_FLUSH);
>> +	force_signal(SIGBUS);
>> +}
>> +
>>  void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>>  	       struct task_struct *tsk)
>>  {
>> @@ -443,12 +438,14 @@ static void cond_mitigation(struct task_struct *next)
>>  	}
>>  
>>  	/*
>> -	 * Flush only if SMT is disabled as per the contract, which is checked
>> -	 * when the feature is enabled.
>> +	 * Flush only if SMT is disabled, if flushing is enabled
>> +	 * and we are on an SMT enabled core, kill the task
>>  	 */
>> -	if (sched_smt_active() && !this_cpu_read(cpu_info.smt_active) &&
>> -		(prev_mm & LAST_USER_MM_L1D_FLUSH))
>> -		l1d_flush_hw();
>> +	if (unlikely(prev_mm & LAST_USER_MM_L1D_FLUSH)) {
>> +		if (!this_cpu_read(cpu_info.smt_active))
>> +			l1d_flush_hw();
>> +		else
>> +			task_work_add(prev, l1d_flush_kill, true);
>
> We have no access to the previous task and mm->owner depends on MEMCG :)
> We can do the magic in mm_mangle_tif_spec_bits(), I suppose

No, because we don't have access to prev task there either. Interesting
problem to solve.
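
To make the constraint concrete, a rough sketch of the shape of the
code at that point (an illustration, not exact code; the
last_user_mm_spec name is assumed from the naming of the series under
discussion):

	static void cond_mitigation(struct task_struct *next)
	{
		unsigned long prev_mm, next_mm;

		/*
		 * Only @next is a task_struct here; the outgoing task
		 * survives solely as TIF bits folded into the per-CPU
		 * last_user_mm_spec word, so there is no task pointer
		 * left to hand to task_work_add().
		 */
		prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);
		next_mm = mm_mangle_tif_spec_bits(next);
		/* ... mitigation decisions based on prev_mm/next_mm ... */
	}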

Thanks,

        tglx



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()
  2020-09-30 23:49                 ` Singh, Balbir
@ 2020-10-01  8:19                   ` Peter Zijlstra
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Zijlstra @ 2020-10-01  8:19 UTC (permalink / raw)
  To: Singh, Balbir
  Cc: Thomas Gleixner, Lukas Bulwahn, Dave Hansen, Andy Lutomirski,
	Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin,
	Nathan Chancellor, Nick Desaulniers, linux-kernel,
	clang-built-linux, kernel-janitors, linux-safety


Your MUA is having trouble wrapping text at 78 chars.

On Thu, Oct 01, 2020 at 09:49:30AM +1000, Singh, Balbir wrote:

> So this is the change I am playing with. I don't like the idea of
> killing the task, but it's better than silently not flushing; I guess
> system administrators will learn with time to set the affinity of
> tasks flushing L1D correctly. For the affinity bits, not being able
> to change the affinity is better, but not being able to provide
> feedback as to why is a bit weird as well; I wonder if there are
> other cases where we might want to lock the affinity of a task for
> its lifetime.

You can't really do that, hot-unplug can (and will) destroy any affinity
setting, and if the task/admin wants to recover it needs to be able to
re-set affinity after that.
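
For completeness, re-setting affinity from user space after such an
event is a plain sched_setaffinity() call; a minimal sketch, where the
target CPU is an arbitrary example:

	#define _GNU_SOURCE
	#include <sched.h>

	/* Re-pin the calling thread to @cpu after hotplug cleared the mask. */
	static int repin_to_cpu(int cpu)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		return sched_setaffinity(0, sizeof(set), &set);	/* pid 0 == self */
	}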

^ permalink raw reply	[flat|nested] 42+ messages in thread

Thread overview: 42+ messages
2020-09-28 12:44 [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task() Lukas Bulwahn
2020-09-28 20:43 ` Nathan Chancellor
2020-09-29  7:12 ` Peter Zijlstra
2020-09-29  8:33   ` Lukas Bulwahn
2020-09-29  8:37   ` Peter Zijlstra
2020-09-30 15:40     ` Thomas Gleixner
2020-09-30 16:53       ` Lukas Bulwahn
2020-09-30 17:03       ` Peter Zijlstra
2020-09-30 18:00         ` Thomas Gleixner
2020-09-30 18:35           ` Peter Zijlstra
2020-09-30 21:38             ` Thomas Gleixner
2020-09-30 22:59               ` Singh, Balbir
2020-09-30 23:49               ` Singh, Balbir
2020-10-01  0:48                 ` Singh, Balbir
2020-10-01  8:17                   ` Thomas Gleixner
2020-10-01  8:19                 ` Peter Zijlstra
2020-09-30 22:46           ` Singh, Balbir
