linux-kernel.vger.kernel.org archive mirror
* [PATCH] x86/asm: Simplify __smp_mb() definition
@ 2021-05-12  9:33 Borislav Petkov
  2021-05-12  9:50 ` Peter Zijlstra
  2021-05-12 11:16 ` [tip: x86/cleanups] " tip-bot2 for Borislav Petkov
  0 siblings, 2 replies; 3+ messages in thread
From: Borislav Petkov @ 2021-05-12  9:33 UTC (permalink / raw)
  To: X86 ML; +Cc: LKML

From: Borislav Petkov <bp@suse.de>

Drop the bitness ifdeffery in favor of using the rSP register
specification for 32 and 64 bit depending on the build.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/barrier.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 4819d5e5a335..3ba772a69cc8 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 #define dma_rmb()	barrier()
 #define dma_wmb()	barrier()
 
-#ifdef CONFIG_X86_32
-#define __smp_mb()	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
-#else
-#define __smp_mb()	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
-#endif
+#define __smp_mb()	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
+
 #define __smp_rmb()	dma_rmb()
 #define __smp_wmb()	barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
-- 
2.29.2



* Re: [PATCH] x86/asm: Simplify __smp_mb() definition
  2021-05-12  9:33 [PATCH] x86/asm: Simplify __smp_mb() definition Borislav Petkov
@ 2021-05-12  9:50 ` Peter Zijlstra
  2021-05-12 11:16 ` [tip: x86/cleanups] " tip-bot2 for Borislav Petkov
  1 sibling, 0 replies; 3+ messages in thread
From: Peter Zijlstra @ 2021-05-12  9:50 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: X86 ML, LKML

On Wed, May 12, 2021 at 11:33:10AM +0200, Borislav Petkov wrote:
> From: Borislav Petkov <bp@suse.de>
> 
> Drop the bitness ifdeffery in favor of using the rSP register
> specification for 32 and 64 bit depending on the build.
> 
> No functional changes.
> 
> Signed-off-by: Borislav Petkov <bp@suse.de>
> ---
>  arch/x86/include/asm/barrier.h | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index 4819d5e5a335..3ba772a69cc8 100644
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
>  #define dma_rmb()	barrier()
>  #define dma_wmb()	barrier()
>  
> -#ifdef CONFIG_X86_32
> -#define __smp_mb()	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
> -#else
> -#define __smp_mb()	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
> -#endif
> +#define __smp_mb()	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
> +
>  #define __smp_rmb()	dma_rmb()
>  #define __smp_wmb()	barrier()
>  #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


* [tip: x86/cleanups] x86/asm: Simplify __smp_mb() definition
  2021-05-12  9:33 [PATCH] x86/asm: Simplify __smp_mb() definition Borislav Petkov
  2021-05-12  9:50 ` Peter Zijlstra
@ 2021-05-12 11:16 ` tip-bot2 for Borislav Petkov
  1 sibling, 0 replies; 3+ messages in thread
From: tip-bot2 for Borislav Petkov @ 2021-05-12 11:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Borislav Petkov, Ingo Molnar, Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the x86/cleanups branch of tip:

Commit-ID:     1bc67873d401e6c2e6e30be7fef21337db07a042
Gitweb:        https://git.kernel.org/tip/1bc67873d401e6c2e6e30be7fef21337db07a042
Author:        Borislav Petkov <bp@suse.de>
AuthorDate:    Wed, 12 May 2021 11:33:10 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 12 May 2021 12:22:57 +02:00

x86/asm: Simplify __smp_mb() definition

Drop the bitness ifdeffery in favor of using _ASM_SP,
which is the helper macro for the rSP register specification
for 32 and 64 bit depending on the build.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512093310.5635-1-bp@alien8.de
---
 arch/x86/include/asm/barrier.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 4819d5e..3ba772a 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 #define dma_rmb()	barrier()
 #define dma_wmb()	barrier()
 
-#ifdef CONFIG_X86_32
-#define __smp_mb()	asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
-#else
-#define __smp_mb()	asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
-#endif
+#define __smp_mb()	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
+
 #define __smp_rmb()	dma_rmb()
 #define __smp_wmb()	barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)

