* [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation
@ 2021-05-05  3:54 Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 1/4] uaccess: Always inline strn*_user() helper functions Josh Poimboeuf
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05  3:54 UTC (permalink / raw)
  To: Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov

This one managed to fall through the cracks back in September.  Here's a
fresh new version.

Ideally, we'd switch all access_ok() users to access_ok_mask() or
something, but that's a much bigger change.

I dropped all the ack/review tags because the rebase was significant.

Please review carefully :-)


v4 changes:

- Rebased on the latest.

- Split up into multiple logical patches.

- Renamed "force_user_ptr()" -> "mask_user_ptr()" to prevent confusing
  it with '__force' casting.  [based on Dan's comment]

- Instead of reusing array_index_nospec(), made a new separate inline
  asm statement.  Otherwise the build fails on recent toolchains and/or
  kernels because the "g" constraint in array_index_mask_nospec() can't
  accommodate a constant as large as TASK_SIZE_MAX.  I could have
  changed "g" to "r", but that would negatively impact code generation
  for the other users.
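For reference, here is a rough portable C model of what the cmp/sbb/and sequence computes. This is a sketch, not kernel code: the limit value and names below are illustrative stand-ins, and the real TASK_SIZE_MAX is exactly the kind of wide constant that can't be handed to cmp as a sign-extended 32-bit immediate (which is what the "g" constraint would allow).

```c
#include <stdint.h>

/* Illustrative stand-in for the kernel's TASK_SIZE_MAX -- too wide for a
 * sign-extended imm32, hence the dedicated asm with an "r" constraint. */
#define LIMIT_SKETCH 0x00007ffffffff000UL

/* C model of "cmp; sbb; and": a pointer below the limit passes through
 * unchanged, anything at or above it collapses to 0 (NULL). */
static uintptr_t mask_ptr_model(uintptr_t ptr, uintptr_t limit)
{
	uintptr_t mask = (ptr < limit) ? ~(uintptr_t)0 : 0;	/* sbb result */
	return ptr & mask;					/* and */
}
```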


v3 was here:

  https://lore.kernel.org/lkml/1d06ed6485b66b9f674900368b63d7ef79f666ca.1599756789.git.jpoimboe@redhat.com/


Josh Poimboeuf (4):
  uaccess: Always inline strn*_user() helper functions
  uaccess: Fix __user annotations for copy_mc_to_user()
  x86/uaccess: Use pointer masking to limit uaccess speculation
  x86/nospec: Remove barrier_nospec()

 Documentation/admin-guide/hw-vuln/spectre.rst |  6 +--
 arch/x86/include/asm/barrier.h                |  3 --
 arch/x86/include/asm/futex.h                  |  5 ++
 arch/x86/include/asm/uaccess.h                | 48 +++++++++++++------
 arch/x86/include/asm/uaccess_64.h             | 12 ++---
 arch/x86/kernel/cpu/sgx/virt.c                |  6 ++-
 arch/x86/lib/copy_mc.c                        | 10 ++--
 arch/x86/lib/csum-wrappers_64.c               |  5 +-
 arch/x86/lib/getuser.S                        | 16 ++-----
 arch/x86/lib/putuser.S                        |  8 ++++
 arch/x86/lib/usercopy_32.c                    |  6 +--
 arch/x86/lib/usercopy_64.c                    |  7 +--
 lib/iov_iter.c                                |  2 +-
 lib/strncpy_from_user.c                       |  6 ++-
 lib/strnlen_user.c                            |  4 +-
 15 files changed, 89 insertions(+), 55 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v4 1/4] uaccess: Always inline strn*_user() helper functions
  2021-05-05  3:54 [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
@ 2021-05-05  3:54 ` Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 2/4] uaccess: Fix __user annotations for copy_mc_to_user() Josh Poimboeuf
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05  3:54 UTC (permalink / raw)
  To: Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov, Arnd Bergmann, Stephen Rothwell,
	Sami Tolvanen

CONFIG_DEBUG_SECTION_MISMATCH uses -fno-inline-functions-called-once,
causing these single-called helper functions to not get inlined:

  lib/strncpy_from_user.o: warning: objtool: strncpy_from_user()+0xa3: call to do_strncpy_from_user() with UACCESS enabled
  lib/strnlen_user.o: warning: objtool: strnlen_user()+0x73: call to do_strnlen_user() with UACCESS enabled

Always inline them regardless.
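A minimal sketch of the distinction being exploited here: plain `inline` is only a hint, and -fno-inline-functions-called-once tells GCC to skip inlining single-caller static functions, whereas the always_inline attribute forces it regardless. The macro and functions below are illustrative, not the kernel's definitions.

```c
/* Roughly what the kernel's __always_inline expands to. */
#define always_inline_sk inline __attribute__((__always_inline__))

/* Stand-in for do_strncpy_from_user()/do_strnlen_user(): a helper with a
 * single caller, which -fno-inline-functions-called-once would otherwise
 * leave out-of-line. */
static always_inline_sk long helper_once(long x)
{
	return x * 2;
}

static long caller(long x)
{
	return helper_once(x);	/* the only call site */
}
```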

Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 lib/strncpy_from_user.c | 6 ++++--
 lib/strnlen_user.c      | 4 +++-
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
index 122d8d0e253c..388539951116 100644
--- a/lib/strncpy_from_user.c
+++ b/lib/strncpy_from_user.c
@@ -25,8 +25,10 @@
  * hit it), 'max' is the address space maximum (and we return
  * -EFAULT if we hit it).
  */
-static inline long do_strncpy_from_user(char *dst, const char __user *src,
-					unsigned long count, unsigned long max)
+static __always_inline long do_strncpy_from_user(char *dst,
+						 const char __user *src,
+						 unsigned long count,
+						 unsigned long max)
 {
 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
 	unsigned long res = 0;
diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
index 1616710b8a82..378744e96039 100644
--- a/lib/strnlen_user.c
+++ b/lib/strnlen_user.c
@@ -20,7 +20,9 @@
  * if it fits in a aligned 'long'. The caller needs to check
  * the return value against "> max".
  */
-static inline long do_strnlen_user(const char __user *src, unsigned long count, unsigned long max)
+static __always_inline long do_strnlen_user(const char __user *src,
+					    unsigned long count,
+					    unsigned long max)
 {
 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
 	unsigned long align, res = 0;
-- 
2.31.1



* [PATCH v4 2/4] uaccess: Fix __user annotations for copy_mc_to_user()
  2021-05-05  3:54 [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 1/4] uaccess: Always inline strn*_user() helper functions Josh Poimboeuf
@ 2021-05-05  3:54 ` Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 4/4] x86/nospec: Remove barrier_nospec() Josh Poimboeuf
  3 siblings, 0 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05  3:54 UTC (permalink / raw)
  To: Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov

The 'dst' parameter is a user pointer, so annotate it as such.  This is
consistent with what powerpc is already doing for this interface.
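For readers unfamiliar with sparse annotations: __user marks a pointer as belonging to a separate (non-dereferenceable) address space, and __force permits the cast back, while both expand to nothing under a normal compiler. Below is a simplified model (the *_sk macros are illustrative, not the kernel's exact definitions) mirroring the shape of this patch: a __user-annotated prototype whose implementation drops the annotation with a __force cast.

```c
/* Simplified model of the kernel's sparse annotations. */
#ifdef __CHECKER__
# define __user_sk	__attribute__((noderef, address_space(1)))
# define __force_sk	__attribute__((force))
#else
# define __user_sk
# define __force_sk
#endif

/* Mirrors the patch: public prototype takes a __user pointer; the body
 * strips the annotation explicitly rather than at the call site. */
static unsigned long copy_model(void __user_sk *to, const void *from,
				unsigned len)
{
	unsigned char *dst = (__force_sk unsigned char *)to;
	const unsigned char *src = from;

	while (len--)
		*dst++ = *src++;
	return 0;	/* 0 bytes left uncopied */
}
```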

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 arch/x86/include/asm/uaccess.h | 2 +-
 arch/x86/lib/copy_mc.c         | 8 ++++----
 lib/iov_iter.c                 | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index c9fa7be3df82..fb75657b5e56 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -445,7 +445,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 #define copy_mc_to_kernel copy_mc_to_kernel
 
 unsigned long __must_check
-copy_mc_to_user(void *to, const void *from, unsigned len);
+copy_mc_to_user(void __user *to, const void *from, unsigned len);
 #endif
 
 /*
diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
index 80efd45a7761..6e8b7e600def 100644
--- a/arch/x86/lib/copy_mc.c
+++ b/arch/x86/lib/copy_mc.c
@@ -70,23 +70,23 @@ unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigne
 }
 EXPORT_SYMBOL_GPL(copy_mc_to_kernel);
 
-unsigned long __must_check copy_mc_to_user(void *dst, const void *src, unsigned len)
+unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, unsigned len)
 {
 	unsigned long ret;
 
 	if (copy_mc_fragile_enabled) {
 		__uaccess_begin();
-		ret = copy_mc_fragile(dst, src, len);
+		ret = copy_mc_fragile((__force void *)dst, src, len);
 		__uaccess_end();
 		return ret;
 	}
 
 	if (static_cpu_has(X86_FEATURE_ERMS)) {
 		__uaccess_begin();
-		ret = copy_mc_enhanced_fast_string(dst, src, len);
+		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
 		__uaccess_end();
 		return ret;
 	}
 
-	return copy_user_generic(dst, src, len);
+	return copy_user_generic((__force void *)dst, src, len);
 }
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 61228a6c69f8..26f87115133f 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -679,7 +679,7 @@ static int copyout_mc(void __user *to, const void *from, size_t n)
 {
 	if (access_ok(to, n)) {
 		instrument_copy_to_user(to, from, n);
-		n = copy_mc_to_user((__force void *) to, from, n);
+		n = copy_mc_to_user(to, from, n);
 	}
 	return n;
 }
-- 
2.31.1



* [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  3:54 [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 1/4] uaccess: Always inline strn*_user() helper functions Josh Poimboeuf
  2021-05-05  3:54 ` [PATCH v4 2/4] uaccess: Fix __user annotations for copy_mc_to_user() Josh Poimboeuf
@ 2021-05-05  3:54 ` Josh Poimboeuf
  2021-05-05  8:48   ` David Laight
                     ` (3 more replies)
  2021-05-05  3:54 ` [PATCH v4 4/4] x86/nospec: Remove barrier_nospec() Josh Poimboeuf
  3 siblings, 4 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05  3:54 UTC (permalink / raw)
  To: Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov

The x86 uaccess code uses barrier_nospec() in various places to prevent
speculative dereferencing of user-controlled pointers (which might be
combined with further gadgets or CPU bugs to leak data).

There are some issues with the current implementation:

- The barrier_nospec() in copy_from_user() was inadvertently removed
  with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
  raw_copy_{to,from}_user()")

- copy_to_user() and friends should also have a speculation barrier,
  because a speculative write to a user-controlled address can still
  populate the cache line with the original data.

- The LFENCE in barrier_nospec() is overkill when more lightweight user
  pointer masking can be used instead.

Remove existing barrier_nospec() usage, and instead do user pointer
masking, throughout the x86 uaccess code.  This is similar to what arm64
is already doing with uaccess_mask_ptr().
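A small C model of the ordering this patch establishes (names and the limit value are illustrative, not kernel code): the range check alone decides success vs -EFAULT, and masking is applied only on the success path, so it can never change the error-handling behavior.

```c
#include <stdint.h>
#include <stddef.h>

#define LIMIT_SKETCH	0x00007ffffffff000UL	/* stand-in for TASK_SIZE_MAX */
#define EFAULT_SK	14

/* Model: validate the range first (access_ok()), then mask the pointer
 * (mask_user_ptr()) before it would ever be dereferenced. */
static int check_and_mask(uintptr_t uptr, size_t size, uintptr_t *masked)
{
	if (size > LIMIT_SKETCH || uptr > LIMIT_SKETCH - size)
		return -EFAULT_SK;			/* error path: no mask */
	*masked = uptr & ((uptr < LIMIT_SKETCH) ? ~(uintptr_t)0 : 0);
	return 0;
}
```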

Fixes: 4b842e4e25b1 ("x86: get rid of small constant size cases in raw_copy_{to,from}_user()")
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 Documentation/admin-guide/hw-vuln/spectre.rst |  6 +--
 arch/x86/include/asm/futex.h                  |  5 ++
 arch/x86/include/asm/uaccess.h                | 46 +++++++++++++------
 arch/x86/include/asm/uaccess_64.h             | 12 ++---
 arch/x86/kernel/cpu/sgx/virt.c                |  6 ++-
 arch/x86/lib/copy_mc.c                        |  2 +
 arch/x86/lib/csum-wrappers_64.c               |  5 +-
 arch/x86/lib/getuser.S                        | 16 ++-----
 arch/x86/lib/putuser.S                        |  8 ++++
 arch/x86/lib/usercopy_32.c                    |  6 +--
 arch/x86/lib/usercopy_64.c                    |  7 +--
 11 files changed, 76 insertions(+), 43 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index e05e581af5cf..2348d27d61da 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -426,9 +426,9 @@ Spectre variant 1
    <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
    not cover all attack vectors for Spectre variant 1.
 
-   Copy-from-user code has an LFENCE barrier to prevent the access_ok()
-   check from being mis-speculated.  The barrier is done by the
-   barrier_nospec() macro.
+   Usercopy code uses user pointer masking to prevent the access_ok()
+   check from being mis-speculated in the success path with a kernel
+   address.  The masking is done by the mask_user_ptr() macro.
 
    For the swapgs variant of Spectre variant 1, LFENCE barriers are
    added to interrupt, exception and NMI entry where needed.  These
diff --git a/arch/x86/include/asm/futex.h b/arch/x86/include/asm/futex.h
index f9c00110a69a..6224b2f15a0f 100644
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -59,6 +59,8 @@ static __always_inline int arch_futex_atomic_op_inuser(int op, int oparg, int *o
 	if (!user_access_begin(uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	uaddr = mask_user_ptr(uaddr);
+
 	switch (op) {
 	case FUTEX_OP_SET:
 		unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
@@ -94,6 +96,9 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
 
 	if (!user_access_begin(uaddr, sizeof(u32)))
 		return -EFAULT;
+
+	uaddr = mask_user_ptr(uaddr);
+
 	asm volatile("\n"
 		"1:\t" LOCK_PREFIX "cmpxchgl %4, %2\n"
 		"2:\n"
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index fb75657b5e56..ebe9ab46b183 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -66,12 +66,35 @@ static inline bool pagefault_disabled(void);
  * Return: true (nonzero) if the memory block may be valid, false (zero)
  * if it is definitely invalid.
  */
-#define access_ok(addr, size)					\
+#define access_ok(addr, size)						\
 ({									\
 	WARN_ON_IN_IRQ();						\
 	likely(!__range_not_ok(addr, size, TASK_SIZE_MAX));		\
 })
 
+/*
+ * Sanitize a user pointer such that it becomes NULL if it's not a valid user
+ * pointer.  This prevents speculatively dereferencing a user-controlled
+ * pointer to kernel space if access_ok() speculatively returns true.  This
+ * should be done *after* access_ok(), to avoid affecting error handling
+ * behavior.
+ */
+#define mask_user_ptr(ptr)						\
+({									\
+	unsigned long _ptr = (__force unsigned long)ptr;		\
+	unsigned long mask;						\
+									\
+	asm volatile("cmp %[max], %[_ptr]\n\t"				\
+		     "sbb %[mask], %[mask]\n\t"				\
+		     : [mask] "=r" (mask)				\
+		     : [_ptr] "r" (_ptr),				\
+		       [max] "r" (TASK_SIZE_MAX)			\
+		     : "cc");						\
+									\
+	mask &= _ptr;							\
+	((typeof(ptr)) mask);						\
+})
+
 extern int __get_user_1(void);
 extern int __get_user_2(void);
 extern int __get_user_4(void);
@@ -84,11 +107,6 @@ extern int __get_user_bad(void);
 
 #define __uaccess_begin() stac()
 #define __uaccess_end()   clac()
-#define __uaccess_begin_nospec()	\
-({					\
-	stac();				\
-	barrier_nospec();		\
-})
 
 /*
  * This is the smallest unsigned integer type that can fit a value
@@ -175,7 +193,7 @@ extern int __get_user_bad(void);
  * Return: zero on success, or -EFAULT on error.
  * On error, the variable @x is set to zero.
  */
-#define __get_user(x,ptr) do_get_user_call(get_user_nocheck,x,ptr)
+#define __get_user(x,ptr) do_get_user_call(get_user_nocheck, x, mask_user_ptr(ptr))
 
 
 #ifdef CONFIG_X86_32
@@ -271,7 +289,7 @@ extern void __put_user_nocheck_8(void);
  *
  * Return: zero on success, or -EFAULT on error.
  */
-#define __put_user(x, ptr) do_put_user_call(put_user_nocheck,x,ptr)
+#define __put_user(x, ptr) do_put_user_call(put_user_nocheck, x, mask_user_ptr(ptr))
 
 #define __put_user_size(x, ptr, size, label)				\
 do {									\
@@ -475,7 +493,7 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 {
 	if (unlikely(!access_ok(ptr,len)))
 		return 0;
-	__uaccess_begin_nospec();
+	__uaccess_begin();
 	return 1;
 }
 #define user_access_begin(a,b)	user_access_begin(a,b)
@@ -484,14 +502,15 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 #define user_access_save()	smap_save()
 #define user_access_restore(x)	smap_restore(x)
 
-#define unsafe_put_user(x, ptr, label)	\
-	__put_user_size((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), label)
+#define unsafe_put_user(x, ptr, label)						\
+	__put_user_size((__typeof__(*(ptr)))(x), mask_user_ptr(ptr),		\
+			sizeof(*(ptr)), label)
 
 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
 #define unsafe_get_user(x, ptr, err_label)					\
 do {										\
 	__inttype(*(ptr)) __gu_val;						\
-	__get_user_size(__gu_val, (ptr), sizeof(*(ptr)), err_label);		\
+	__get_user_size(__gu_val, mask_user_ptr(ptr), sizeof(*(ptr)), err_label);\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 } while (0)
 #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT
@@ -499,7 +518,8 @@ do {										\
 do {										\
 	int __gu_err;								\
 	__inttype(*(ptr)) __gu_val;						\
-	__get_user_size(__gu_val, (ptr), sizeof(*(ptr)), __gu_err);		\
+	__get_user_size(__gu_val, mask_user_ptr(ptr), sizeof(*(ptr)),		\
+			__gu_err);						\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 	if (unlikely(__gu_err)) goto err_label;					\
 } while (0)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index e7265a552f4f..abd9cb204fde 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -49,20 +49,20 @@ copy_user_generic(void *to, const void *from, unsigned len)
 static __always_inline __must_check unsigned long
 raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
 {
-	return copy_user_generic(dst, (__force void *)src, size);
+	return copy_user_generic(dst, (__force void *)mask_user_ptr(src), size);
 }
 
 static __always_inline __must_check unsigned long
 raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
 {
-	return copy_user_generic((__force void *)dst, src, size);
+	return copy_user_generic((__force void *)mask_user_ptr(dst), src, size);
 }
 
 static __always_inline __must_check
 unsigned long raw_copy_in_user(void __user *dst, const void __user *src, unsigned long size)
 {
-	return copy_user_generic((__force void *)dst,
-				 (__force void *)src, size);
+	return copy_user_generic((__force void *)mask_user_ptr(dst),
+				 (__force void *)mask_user_ptr(src), size);
 }
 
 extern long __copy_user_nocache(void *dst, const void __user *src,
@@ -77,13 +77,13 @@ __copy_from_user_inatomic_nocache(void *dst, const void __user *src,
 				  unsigned size)
 {
 	kasan_check_write(dst, size);
-	return __copy_user_nocache(dst, src, size, 0);
+	return __copy_user_nocache(dst, mask_user_ptr(src), size, 0);
 }
 
 static inline int
 __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
 {
 	kasan_check_write(dst, size);
-	return __copy_user_flushcache(dst, src, size);
+	return __copy_user_flushcache(dst, mask_user_ptr(src), size);
 }
 #endif /* _ASM_X86_UACCESS_64_H */
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 6ad165a5c0cc..1b6606afad36 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -292,7 +292,7 @@ int sgx_virt_ecreate(struct sgx_pageinfo *pageinfo, void __user *secs,
 		return -EINVAL;
 
 	__uaccess_begin();
-	ret = __ecreate(pageinfo, (void *)secs);
+	ret = __ecreate(pageinfo, (void *)mask_user_ptr(secs));
 	__uaccess_end();
 
 	if (encls_faulted(ret)) {
@@ -323,7 +323,9 @@ static int __sgx_virt_einit(void __user *sigstruct, void __user *token,
 		return -EINVAL;
 
 	__uaccess_begin();
-	ret = __einit((void *)sigstruct, (void *)token, (void *)secs);
+	ret = __einit((void *)mask_user_ptr(sigstruct),
+		      (void *)mask_user_ptr(token),
+		      (void *)mask_user_ptr(secs));
 	__uaccess_end();
 
 	return ret;
diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
index 6e8b7e600def..b895bafbe7fe 100644
--- a/arch/x86/lib/copy_mc.c
+++ b/arch/x86/lib/copy_mc.c
@@ -74,6 +74,8 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
 {
 	unsigned long ret;
 
+	dst = mask_user_ptr(dst);
+
 	if (copy_mc_fragile_enabled) {
 		__uaccess_begin();
 		ret = copy_mc_fragile((__force void *)dst, src, len);
diff --git a/arch/x86/lib/csum-wrappers_64.c b/arch/x86/lib/csum-wrappers_64.c
index 189344924a2b..b022d34b9c4b 100644
--- a/arch/x86/lib/csum-wrappers_64.c
+++ b/arch/x86/lib/csum-wrappers_64.c
@@ -28,7 +28,8 @@ csum_and_copy_from_user(const void __user *src, void *dst, int len)
 	might_sleep();
 	if (!user_access_begin(src, len))
 		return 0;
-	sum = csum_partial_copy_generic((__force const void *)src, dst, len);
+	sum = csum_partial_copy_generic((__force const void *)mask_user_ptr(src),
+					dst, len);
 	user_access_end();
 	return sum;
 }
@@ -53,7 +54,7 @@ csum_and_copy_to_user(const void *src, void __user *dst, int len)
 	might_sleep();
 	if (!user_access_begin(dst, len))
 		return 0;
-	sum = csum_partial_copy_generic(src, (void __force *)dst, len);
+	sum = csum_partial_copy_generic(src, (void __force *)mask_user_ptr(dst), len);
 	user_access_end();
 	return sum;
 }
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index fa1bc2104b32..64715a7edb1f 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -35,8 +35,6 @@
 #include <asm/smap.h>
 #include <asm/export.h>
 
-#define ASM_BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
-
 #ifdef CONFIG_X86_5LEVEL
 #define LOAD_TASK_SIZE_MINUS_N(n) \
 	ALTERNATIVE __stringify(mov $((1 << 47) - 4096 - (n)),%rdx), \
@@ -51,7 +49,7 @@ SYM_FUNC_START(__get_user_1)
 	LOAD_TASK_SIZE_MINUS_N(0)
 	cmp %_ASM_DX,%_ASM_AX
 	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
+	sbb %_ASM_DX, %_ASM_DX		/* mask_user_ptr() */
 	and %_ASM_DX, %_ASM_AX
 	ASM_STAC
 1:	movzbl (%_ASM_AX),%edx
@@ -65,7 +63,7 @@ SYM_FUNC_START(__get_user_2)
 	LOAD_TASK_SIZE_MINUS_N(1)
 	cmp %_ASM_DX,%_ASM_AX
 	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
+	sbb %_ASM_DX, %_ASM_DX		/* mask_user_ptr() */
 	and %_ASM_DX, %_ASM_AX
 	ASM_STAC
 2:	movzwl (%_ASM_AX),%edx
@@ -79,7 +77,7 @@ SYM_FUNC_START(__get_user_4)
 	LOAD_TASK_SIZE_MINUS_N(3)
 	cmp %_ASM_DX,%_ASM_AX
 	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
+	sbb %_ASM_DX, %_ASM_DX		/* mask_user_ptr() */
 	and %_ASM_DX, %_ASM_AX
 	ASM_STAC
 3:	movl (%_ASM_AX),%edx
@@ -94,7 +92,7 @@ SYM_FUNC_START(__get_user_8)
 	LOAD_TASK_SIZE_MINUS_N(7)
 	cmp %_ASM_DX,%_ASM_AX
 	jae bad_get_user
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
+	sbb %_ASM_DX, %_ASM_DX		/* mask_user_ptr() */
 	and %_ASM_DX, %_ASM_AX
 	ASM_STAC
 4:	movq (%_ASM_AX),%rdx
@@ -105,7 +103,7 @@ SYM_FUNC_START(__get_user_8)
 	LOAD_TASK_SIZE_MINUS_N(7)
 	cmp %_ASM_DX,%_ASM_AX
 	jae bad_get_user_8
-	sbb %_ASM_DX, %_ASM_DX		/* array_index_mask_nospec() */
+	sbb %_ASM_DX, %_ASM_DX		/* mask_user_ptr() */
 	and %_ASM_DX, %_ASM_AX
 	ASM_STAC
 4:	movl (%_ASM_AX),%edx
@@ -120,7 +118,6 @@ EXPORT_SYMBOL(__get_user_8)
 /* .. and the same for __get_user, just without the range checks */
 SYM_FUNC_START(__get_user_nocheck_1)
 	ASM_STAC
-	ASM_BARRIER_NOSPEC
 6:	movzbl (%_ASM_AX),%edx
 	xor %eax,%eax
 	ASM_CLAC
@@ -130,7 +127,6 @@ EXPORT_SYMBOL(__get_user_nocheck_1)
 
 SYM_FUNC_START(__get_user_nocheck_2)
 	ASM_STAC
-	ASM_BARRIER_NOSPEC
 7:	movzwl (%_ASM_AX),%edx
 	xor %eax,%eax
 	ASM_CLAC
@@ -140,7 +136,6 @@ EXPORT_SYMBOL(__get_user_nocheck_2)
 
 SYM_FUNC_START(__get_user_nocheck_4)
 	ASM_STAC
-	ASM_BARRIER_NOSPEC
 8:	movl (%_ASM_AX),%edx
 	xor %eax,%eax
 	ASM_CLAC
@@ -150,7 +145,6 @@ EXPORT_SYMBOL(__get_user_nocheck_4)
 
 SYM_FUNC_START(__get_user_nocheck_8)
 	ASM_STAC
-	ASM_BARRIER_NOSPEC
 #ifdef CONFIG_X86_64
 9:	movq (%_ASM_AX),%rdx
 #else
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 0ea344c5ea43..afd819459455 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -47,6 +47,8 @@ SYM_FUNC_START(__put_user_1)
 	LOAD_TASK_SIZE_MINUS_N(0)
 	cmp %_ASM_BX,%_ASM_CX
 	jae .Lbad_put_user
+	sbb %_ASM_BX, %_ASM_BX		/* mask_user_ptr() */
+	and %_ASM_BX, %_ASM_CX
 SYM_INNER_LABEL(__put_user_nocheck_1, SYM_L_GLOBAL)
 	ASM_STAC
 1:	movb %al,(%_ASM_CX)
@@ -61,6 +63,8 @@ SYM_FUNC_START(__put_user_2)
 	LOAD_TASK_SIZE_MINUS_N(1)
 	cmp %_ASM_BX,%_ASM_CX
 	jae .Lbad_put_user
+	sbb %_ASM_BX, %_ASM_BX		/* mask_user_ptr() */
+	and %_ASM_BX, %_ASM_CX
 SYM_INNER_LABEL(__put_user_nocheck_2, SYM_L_GLOBAL)
 	ASM_STAC
 2:	movw %ax,(%_ASM_CX)
@@ -75,6 +79,8 @@ SYM_FUNC_START(__put_user_4)
 	LOAD_TASK_SIZE_MINUS_N(3)
 	cmp %_ASM_BX,%_ASM_CX
 	jae .Lbad_put_user
+	sbb %_ASM_BX, %_ASM_BX		/* mask_user_ptr() */
+	and %_ASM_BX, %_ASM_CX
 SYM_INNER_LABEL(__put_user_nocheck_4, SYM_L_GLOBAL)
 	ASM_STAC
 3:	movl %eax,(%_ASM_CX)
@@ -89,6 +95,8 @@ SYM_FUNC_START(__put_user_8)
 	LOAD_TASK_SIZE_MINUS_N(7)
 	cmp %_ASM_BX,%_ASM_CX
 	jae .Lbad_put_user
+	sbb %_ASM_BX, %_ASM_BX		/* mask_user_ptr() */
+	and %_ASM_BX, %_ASM_CX
 SYM_INNER_LABEL(__put_user_nocheck_8, SYM_L_GLOBAL)
 	ASM_STAC
 4:	mov %_ASM_AX,(%_ASM_CX)
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 7d290777246d..e4dc3c2790db 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -68,7 +68,7 @@ clear_user(void __user *to, unsigned long n)
 {
 	might_fault();
 	if (access_ok(to, n))
-		__do_clear_user(to, n);
+		__do_clear_user(mask_user_ptr(to), n);
 	return n;
 }
 EXPORT_SYMBOL(clear_user);
@@ -331,7 +331,7 @@ do {									\
 
 unsigned long __copy_user_ll(void *to, const void *from, unsigned long n)
 {
-	__uaccess_begin_nospec();
+	__uaccess_begin();
 	if (movsl_is_ok(to, from, n))
 		__copy_user(to, from, n);
 	else
@@ -344,7 +344,7 @@ EXPORT_SYMBOL(__copy_user_ll);
 unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
 					unsigned long n)
 {
-	__uaccess_begin_nospec();
+	__uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
 	if (n > 64 && static_cpu_has(X86_FEATURE_XMM2))
 		n = __copy_user_intel_nocache(to, from, n);
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 508c81e97ab1..be0e5efdd142 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -42,7 +42,8 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
 		_ASM_EXTABLE_UA(0b, 3b)
 		_ASM_EXTABLE_UA(1b, 2b)
 		: [size8] "=&c"(size), [dst] "=&D" (__d0)
-		: [size1] "r"(size & 7), "[size8]" (size / 8), "[dst]"(addr));
+		: [size1] "r"(size & 7), "[size8]" (size / 8),
+		  "[dst]" (mask_user_ptr(addr)));
 	clac();
 	return size;
 }
@@ -51,7 +52,7 @@ EXPORT_SYMBOL(__clear_user);
 unsigned long clear_user(void __user *to, unsigned long n)
 {
 	if (access_ok(to, n))
-		return __clear_user(to, n);
+		return __clear_user(mask_user_ptr(to), n);
 	return n;
 }
 EXPORT_SYMBOL(clear_user);
@@ -87,7 +88,7 @@ EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
 {
 	unsigned long flushed, dest = (unsigned long) dst;
-	long rc = __copy_user_nocache(dst, src, size, 0);
+	long rc = __copy_user_nocache(dst, mask_user_ptr(src), size, 0);
 
 	/*
 	 * __copy_user_nocache() uses non-temporal stores for the bulk
-- 
2.31.1



* [PATCH v4 4/4] x86/nospec: Remove barrier_nospec()
  2021-05-05  3:54 [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
                   ` (2 preceding siblings ...)
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
@ 2021-05-05  3:54 ` Josh Poimboeuf
  3 siblings, 0 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05  3:54 UTC (permalink / raw)
  To: Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov

The barrier_nospec() macro is no longer used.  Its uses have been
replaced with address and array index masking.  Remove it.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
 arch/x86/include/asm/barrier.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 4819d5e5a335..88f692d4f4ec 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -48,9 +48,6 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 /* Override the default implementation from linux/nospec.h. */
 #define array_index_mask_nospec array_index_mask_nospec
 
-/* Prevent speculative execution past this barrier. */
-#define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)
-
 #define dma_rmb()	barrier()
 #define dma_wmb()	barrier()
 
-- 
2.31.1



* RE: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
@ 2021-05-05  8:48   ` David Laight
  2021-05-05 13:19     ` Josh Poimboeuf
  2021-05-05 18:32     ` Linus Torvalds
  2021-05-05 14:25   ` Mark Rutland
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 19+ messages in thread
From: David Laight @ 2021-05-05  8:48 UTC (permalink / raw)
  To: 'Josh Poimboeuf', Al Viro
  Cc: x86, linux-kernel, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, Mark Rutland,
	Borislav Petkov

From: Josh Poimboeuf
> Sent: 05 May 2021 04:55
> 
> The x86 uaccess code uses barrier_nospec() in various places to prevent
> speculative dereferencing of user-controlled pointers (which might be
> combined with further gadgets or CPU bugs to leak data).
...
> Remove existing barrier_nospec() usage, and instead do user pointer
> masking, throughout the x86 uaccess code.  This is similar to what arm64
> is already doing with uaccess_mask_ptr().
...
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index fb75657b5e56..ebe9ab46b183 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -66,12 +66,35 @@ static inline bool pagefault_disabled(void);
>   * Return: true (nonzero) if the memory block may be valid, false (zero)
>   * if it is definitely invalid.
>   */
> -#define access_ok(addr, size)					\
> +#define access_ok(addr, size)						\
>  ({									\
>  	WARN_ON_IN_IRQ();						\
>  	likely(!__range_not_ok(addr, size, TASK_SIZE_MAX));		\
>  })
> 
> +/*
> + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> + * pointer.  This prevents speculatively dereferencing a user-controlled
> + * pointer to kernel space if access_ok() speculatively returns true.  This
> + * should be done *after* access_ok(), to avoid affecting error handling
> + * behavior.
> + */
> +#define mask_user_ptr(ptr)						\
> +({									\
> +	unsigned long _ptr = (__force unsigned long)ptr;		\
> +	unsigned long mask;						\
> +									\
> +	asm volatile("cmp %[max], %[_ptr]\n\t"				\
> +		     "sbb %[mask], %[mask]\n\t"				\
> +		     : [mask] "=r" (mask)				\
> +		     : [_ptr] "r" (_ptr),				\
> +		       [max] "r" (TASK_SIZE_MAX)			\
> +		     : "cc");						\
> +									\
> +	mask &= _ptr;							\
> +	((typeof(ptr)) mask);						\
> +})
> +

access_ok() and mask_user_ptr() are doing much the same check.
Is there scope for making access_ok() return the masked pointer?

So the canonical calling code would be:
	uptr = access_ok(uptr, size);
	if (!uptr)
		return -EFAULT;

This would error requests for address 0 earlier - but I don't
believe they are ever valid in Linux.
(Some historic x86 a.out formats did load to address 0.)

Clearly for a follow up patch.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  8:48   ` David Laight
@ 2021-05-05 13:19     ` Josh Poimboeuf
  2021-05-05 13:51       ` David Laight
  2021-05-05 18:32     ` Linus Torvalds
  1 sibling, 1 reply; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05 13:19 UTC (permalink / raw)
  To: David Laight
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Mark Rutland, Borislav Petkov

On Wed, May 05, 2021 at 08:48:48AM +0000, David Laight wrote:
> From: Josh Poimboeuf
> > Sent: 05 May 2021 04:55
> > 
> > The x86 uaccess code uses barrier_nospec() in various places to prevent
> > speculative dereferencing of user-controlled pointers (which might be
> > combined with further gadgets or CPU bugs to leak data).
> ...
> > Remove existing barrier_nospec() usage, and instead do user pointer
> > masking, throughout the x86 uaccess code.  This is similar to what arm64
> > is already doing with uaccess_mask_ptr().
> ...
> > diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> > index fb75657b5e56..ebe9ab46b183 100644
> > --- a/arch/x86/include/asm/uaccess.h
> > +++ b/arch/x86/include/asm/uaccess.h
> > @@ -66,12 +66,35 @@ static inline bool pagefault_disabled(void);
> >   * Return: true (nonzero) if the memory block may be valid, false (zero)
> >   * if it is definitely invalid.
> >   */
> > -#define access_ok(addr, size)					\
> > +#define access_ok(addr, size)						\
> >  ({									\
> >  	WARN_ON_IN_IRQ();						\
> >  	likely(!__range_not_ok(addr, size, TASK_SIZE_MAX));		\
> >  })
> > 
> > +/*
> > + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> > + * pointer.  This prevents speculatively dereferencing a user-controlled
> > + * pointer to kernel space if access_ok() speculatively returns true.  This
> > + * should be done *after* access_ok(), to avoid affecting error handling
> > + * behavior.
> > + */
> > +#define mask_user_ptr(ptr)						\
> > +({									\
> > +	unsigned long _ptr = (__force unsigned long)ptr;		\
> > +	unsigned long mask;						\
> > +									\
> > +	asm volatile("cmp %[max], %[_ptr]\n\t"				\
> > +		     "sbb %[mask], %[mask]\n\t"				\
> > +		     : [mask] "=r" (mask)				\
> > +		     : [_ptr] "r" (_ptr),				\
> > +		       [max] "r" (TASK_SIZE_MAX)			\
> > +		     : "cc");						\
> > +									\
> > +	mask &= _ptr;							\
> > +	((typeof(ptr)) mask);						\
> > +})
> > +
> 
> access_ok() and mask_user_ptr() are doing much the same check.
> Is there scope for making access_ok() return the masked pointer?
> 
> So the canonical calling code would be:
> 	uptr = access_ok(uptr, size);
> 	if (!uptr)
> 		return -EFAULT;
> 
> This would error requests for address 0 earlier - but I don't
> believe they are ever valid in Linux.
> (Some historic x86 a.out formats did load to address 0.)
> 
> Clearly for a follow up patch.

Yeah.  I mentioned a similar idea in the cover letter.

But I'm thinking we should still rename it to access_ok_mask(), or
otherwise change the API to avoid the masked value getting ignored.

But that'll be a much bigger patch.

-- 
Josh


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 13:19     ` Josh Poimboeuf
@ 2021-05-05 13:51       ` David Laight
  0 siblings, 0 replies; 19+ messages in thread
From: David Laight @ 2021-05-05 13:51 UTC (permalink / raw)
  To: 'Josh Poimboeuf'
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Mark Rutland, Borislav Petkov

From: Josh Poimboeuf
> Sent: 05 May 2021 14:20
...
> > access_ok() and mask_user_ptr() are doing much the same check.
> > Is there scope for making access_ok() return the masked pointer?
> >
> > So the canonical calling code would be:
> > 	uptr = access_ok(uptr, size);
> > 	if (!uptr)
> > 		return -EFAULT;
> >
> > This would error requests for address 0 earlier - but I don't
> > believe they are ever valid in Linux.
> > (Some historic x86 a.out formats did load to address 0.)
> >
> > Clearly for a follow up patch.
> 
> Yeah.  I mentioned a similar idea in the cover letter.
> 
> But I'm thinking we should still rename it to access_ok_mask(), or
> otherwise change the API to avoid the masked value getting ignored.

Something like:
	if (access_ok_mask(&uaddr, size))
		return -EFAULT;
might work.

> But that'll be a much bigger patch.

True - and would need to be done in stages.

The other optimisation is for short/sequential accesses.
In particular get_user() and copy_from_user().
Here the 'size' argument can often be avoided.
Either because only the base address is ever accessed, or the
kernel guarantees an unmapped page between user and kernel addresses.

IIRC x86 has to have an unmapped page because of 'issues' with
prefetch across the boundary.
I don't know if it is on the user or kernel side - doesn't really matter.

Also for typical 64bit architectures where there is a big address hole
around 1ul << 63, access_ok() can just check (for example):
	if (((long)uaddr | size) & ~0ul << 56)
		return -EFAULT.
(change the 56 to match the TASK_SIZE_MAX).
The compiler will then optimise away any constant size.
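That combined check can be exercised standalone (userspace sketch; the 56-bit cut-off is only illustrative, as noted, and a 64-bit build is assumed):

```c
#include <assert.h>
#include <stdint.h>

/* Reject the range if either the address or the size has any of the top
 * 8 bits set; then addr + size cannot wrap into the kernel half. */
static int range_bad(uintptr_t uaddr, uintptr_t size)
{
	return ((uaddr | size) & (~(uintptr_t)0 << 56)) != 0;
}
```

With a constant in-range size the OR folds away and the test reduces to a
single compare of the address, which is the optimisation described above.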

	David


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
  2021-05-05  8:48   ` David Laight
@ 2021-05-05 14:25   ` Mark Rutland
  2021-05-05 14:48     ` Josh Poimboeuf
  2021-05-05 14:49     ` David Laight
  2021-05-05 16:55   ` Andy Lutomirski
  2021-06-02 17:11   ` Sean Christopherson
  3 siblings, 2 replies; 19+ messages in thread
From: Mark Rutland @ 2021-05-05 14:25 UTC (permalink / raw)
  To: Josh Poimboeuf, David Laight
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Borislav Petkov

Hi Josh, David,

On Tue, May 04, 2021 at 10:54:31PM -0500, Josh Poimboeuf wrote:
> The x86 uaccess code uses barrier_nospec() in various places to prevent
> speculative dereferencing of user-controlled pointers (which might be
> combined with further gadgets or CPU bugs to leak data).
> 
> There are some issues with the current implementation:
> 
> - The barrier_nospec() in copy_from_user() was inadvertently removed
>   with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
>   raw_copy_{to,from}_user()")
> 
> - copy_to_user() and friends should also have a speculation barrier,
>   because a speculative write to a user-controlled address can still
>   populate the cache line with the original data.
> 
> - The LFENCE in barrier_nospec() is overkill, when more lightweight user
>   pointer masking can be used instead.
> 
> Remove existing barrier_nospec() usage, and instead do user pointer
> masking, throughout the x86 uaccess code.  This is similar to what arm64
> is already doing with uaccess_mask_ptr().

> +/*
> + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> + * pointer.  This prevents speculatively dereferencing a user-controlled
> + * pointer to kernel space if access_ok() speculatively returns true.  This
> + * should be done *after* access_ok(), to avoid affecting error handling
> + * behavior.
> + */
> +#define mask_user_ptr(ptr)						\
> +({									\
> +	unsigned long _ptr = (__force unsigned long)ptr;		\
> +	unsigned long mask;						\
> +									\
> +	asm volatile("cmp %[max], %[_ptr]\n\t"				\
> +		     "sbb %[mask], %[mask]\n\t"				\
> +		     : [mask] "=r" (mask)				\
> +		     : [_ptr] "r" (_ptr),				\
> +		       [max] "r" (TASK_SIZE_MAX)			\
> +		     : "cc");						\
> +									\
> +	mask &= _ptr;							\
> +	((typeof(ptr)) mask);						\
> +})

On arm64 we needed to have a sequence here because the addr_limit used
to be variable, but now that we've removed set_fs() and split the
user/kernel access routines we could simplify that to an AND with an
immediate mask to force all pointers into the user half of the address
space. IIUC x86_64 could do the same, and I think that was roughly what
David was suggesting.

That does mean that an erroneous speculative access could still hit user
memory other than NULL, but that's also true for speculated pointers
below TASK_SIZE_MAX when using the more complex sequence.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 14:25   ` Mark Rutland
@ 2021-05-05 14:48     ` Josh Poimboeuf
  2021-05-05 14:49     ` David Laight
  1 sibling, 0 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-05-05 14:48 UTC (permalink / raw)
  To: Mark Rutland
  Cc: David Laight, Al Viro, x86, linux-kernel, Linus Torvalds,
	Will Deacon, Dan Williams, Andrea Arcangeli, Waiman Long,
	Peter Zijlstra, Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Borislav Petkov

On Wed, May 05, 2021 at 03:25:42PM +0100, Mark Rutland wrote:
> On arm64 we needed to have a sequence here because the addr_limit used
> to be variable, but now that we've removed set_fs() and split the
> user/kernel access routines we could simplify that to an AND with an
> immediate mask to force all pointers into the user half of the address
> space. IIUC x86_64 could do the same, and I think that was roughly what
> David was suggesting.

True.  On 64-bit arches it might be as simple as just clearing the
most-significant bit.
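As a sketch (userspace model; which bit to clear is an assumption about the address-space split):

```c
#include <assert.h>
#include <stdint.h>

/* Unconditionally force the pointer into the lower half of the address
 * space; a kernel pointer loses its top bit and becomes a faulting,
 * non-kernel address. */
static uintptr_t mask_to_lower_half(uintptr_t p)
{
	return p & ~((uintptr_t)1 << 63);
}
```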

-- 
Josh


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 14:25   ` Mark Rutland
  2021-05-05 14:48     ` Josh Poimboeuf
@ 2021-05-05 14:49     ` David Laight
  2021-05-05 15:45       ` Mark Rutland
  1 sibling, 1 reply; 19+ messages in thread
From: David Laight @ 2021-05-05 14:49 UTC (permalink / raw)
  To: 'Mark Rutland', Josh Poimboeuf
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Borislav Petkov

From: Mark Rutland
> Sent: 05 May 2021 15:26
...
> > +/*
> > + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> > + * pointer.  This prevents speculatively dereferencing a user-controlled
> > + * pointer to kernel space if access_ok() speculatively returns true.  This
> > + * should be done *after* access_ok(), to avoid affecting error handling
> > + * behavior.
> > + */
> > +#define mask_user_ptr(ptr)						\
> > +({									\
> > +	unsigned long _ptr = (__force unsigned long)ptr;		\
> > +	unsigned long mask;						\
> > +									\
> > +	asm volatile("cmp %[max], %[_ptr]\n\t"				\
> > +		     "sbb %[mask], %[mask]\n\t"				\
> > +		     : [mask] "=r" (mask)				\
> > +		     : [_ptr] "r" (_ptr),				\
> > +		       [max] "r" (TASK_SIZE_MAX)			\
> > +		     : "cc");						\
> > +									\
> > +	mask &= _ptr;							\
> > +	((typeof(ptr)) mask);						\
> > +})
> 
> On arm64 we needed to have a sequence here because the addr_limit used
> to be variable, but now that we've removed set_fs() and split the
> user/kernel access routines we could simplify that to an AND with an
> immediate mask to force all pointers into the user half of the address
> space. IIUC x86_64 could do the same, and I think that was roughly what
> David was suggesting.

Something like that :-)

For 64bit you can either unconditionally mask the user address
(to clear a few high bits) or mask with a calculated value
if the address is invalid.
The former is almost certainly better.

The other thing is that a valid length has to be less than
the TASK_SIZE_MAX.
Provided there are 2 zero bits at the top of every user address
you can check 'addr | size < limit' and know that 'addr + size'
won't wrap into kernel space.

32bit is more difficult.
User addresses (probably) go up to 0xc0000000 and the kernel
starts (almost) immediately.
If you never map a 4k page on one side of the boundary then
you only need to check the base address provided the user buffer
is less than 4k, or the accesses are guaranteed to be sequential.
While the full window test isn't that complicated, ignoring the
length will remove some code - especially for hot paths that
use __get_user() to access a fixed-size structure.

> That does mean that an erroneous speculative access could still hit user
> memory other than NULL, but that's also true for speculated pointers
> below TASK_SIZE_MAX when using the more complex sequence.

True, but there are almost certainly easier ways to speculatively
access user addresses than passing a kernel alias of the address
into a system call!

	David



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 14:49     ` David Laight
@ 2021-05-05 15:45       ` Mark Rutland
  0 siblings, 0 replies; 19+ messages in thread
From: Mark Rutland @ 2021-05-05 15:45 UTC (permalink / raw)
  To: David Laight
  Cc: Josh Poimboeuf, Al Viro, x86, linux-kernel, Linus Torvalds,
	Will Deacon, Dan Williams, Andrea Arcangeli, Waiman Long,
	Peter Zijlstra, Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Borislav Petkov

On Wed, May 05, 2021 at 02:49:53PM +0000, David Laight wrote:
> From: Mark Rutland
> > Sent: 05 May 2021 15:26
> ...
> > > +/*
> > > + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> > > + * pointer.  This prevents speculatively dereferencing a user-controlled
> > > + * pointer to kernel space if access_ok() speculatively returns true.  This
> > > + * should be done *after* access_ok(), to avoid affecting error handling
> > > + * behavior.
> > > + */
> > > +#define mask_user_ptr(ptr)						\
> > > +({									\
> > > +	unsigned long _ptr = (__force unsigned long)ptr;		\
> > > +	unsigned long mask;						\
> > > +									\
> > > +	asm volatile("cmp %[max], %[_ptr]\n\t"				\
> > > +		     "sbb %[mask], %[mask]\n\t"				\
> > > +		     : [mask] "=r" (mask)				\
> > > +		     : [_ptr] "r" (_ptr),				\
> > > +		       [max] "r" (TASK_SIZE_MAX)			\
> > > +		     : "cc");						\
> > > +									\
> > > +	mask &= _ptr;							\
> > > +	((typeof(ptr)) mask);						\
> > > +})
> > 
> > On arm64 we needed to have a sequence here because the addr_limit used
> > to be variable, but now that we've removed set_fs() and split the
> > user/kernel access routines we could simplify that to an AND with an
> > immediate mask to force all pointers into the user half of the address
> > space. IIUC x86_64 could do the same, and I think that was roughly what
> > David was suggesting.
> 
> Something like that :-)
> 
> For 64bit you can either unconditionally mask the user address
> (to clear a few high bits) or mask with a calculated value
> if the address is invalid.
> The former is almost certainly better.

Sure; I was thinking of the former as arm64 does the latter today.

> The other thing is that a valid length has to be less than
> the TASK_SIZE_MAX.
> Provided there are 2 zero bits at the top of every user address
> you can check 'addr | size < limit' and know that 'addr + size'
> won't wrap into kernel space.

I see. The size concern is interesting, and I'm not sure whether it
practically matters. If the size crosses the user/kernel gap, then for
this to (potentially) be a problem the CPU must speculate an access past
the gap before it takes the exception for the first access that hits the
gap. With that in mind:

* If the CPU cannot wildly mispredict an iteration of a uaccess loop
  (e.g. issues iterations in-order), then it would need to speculate
  accesses for the entire length of the gap without having raised an
  exception. For arm64 that's at least 2^56 bytes, which even with SVE's
  256-bit vectors is 2^40 accesses. I think it's impractical for a
  CPU to speculate a window this large before taking an exception.

* If the CPU can wildly mispredict an iteration of a uaccess loop (e.g.
  do this non-sequentially and generate offsets wildly), then it can go
  past the validated size boundary anyway, and we'd have to mask the
  pointer immediately prior to the access. Beyond value prediction, I'm
  not sure how this could happen given the way we build those loops.

... so for architectures with large user/kernel gaps I'm not sure that
it's necessary to check the size up-front.

On arm64 we also have a second defence as our uaccess primitives use
"unprivileged load/store" instructions LDTR and STTR, which use the user
permissions even when executed in kernel mode. So on CPUs where
permissions are respected under speculation these cannot access kernel
memory.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
  2021-05-05  8:48   ` David Laight
  2021-05-05 14:25   ` Mark Rutland
@ 2021-05-05 16:55   ` Andy Lutomirski
  2021-05-06  8:36     ` David Laight
  2021-06-02 17:11   ` Sean Christopherson
  3 siblings, 1 reply; 19+ messages in thread
From: Andy Lutomirski @ 2021-05-05 16:55 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Al Viro, X86 ML, LKML, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Andy Lutomirski, Christoph Hellwig, David Laight,
	Mark Rutland, Borislav Petkov

On Tue, May 4, 2021 at 8:55 PM Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> +/*
> + * Sanitize a user pointer such that it becomes NULL if it's not a valid user
> + * pointer.  This prevents speculatively dereferencing a user-controlled
> + * pointer to kernel space if access_ok() speculatively returns true.  This
> + * should be done *after* access_ok(), to avoid affecting error handling
> + * behavior.
> + */
> +#define mask_user_ptr(ptr)                                             \
> +({                                                                     \
> +       unsigned long _ptr = (__force unsigned long)ptr;                \
> +       unsigned long mask;                                             \
> +                                                                       \
> +       asm volatile("cmp %[max], %[_ptr]\n\t"                          \
> +                    "sbb %[mask], %[mask]\n\t"                         \
> +                    : [mask] "=r" (mask)                               \
> +                    : [_ptr] "r" (_ptr),                               \
> +                      [max] "r" (TASK_SIZE_MAX)                        \
> +                    : "cc");                                           \
> +                                                                       \
> +       mask &= _ptr;                                                   \
> +       ((typeof(ptr)) mask);                                           \
> +})

Is there an equally efficient sequence that squishes the pointer value
to something noncanonical or something like -1 instead of 0?  I'm not
sure this matters, but it opens up the possibility of combining the
access_ok check with the masking without any branches at all.

Also, why are you doing mask &= _ptr; mask instead of just
((typeof(ptr)) (_ptr & mask))?  or _ptr &= mask, for that matter?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  8:48   ` David Laight
  2021-05-05 13:19     ` Josh Poimboeuf
@ 2021-05-05 18:32     ` Linus Torvalds
  2021-05-06  7:57       ` David Laight
  1 sibling, 1 reply; 19+ messages in thread
From: Linus Torvalds @ 2021-05-05 18:32 UTC (permalink / raw)
  To: David Laight
  Cc: Josh Poimboeuf, Al Viro, x86, linux-kernel, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Mark Rutland, Borislav Petkov

On Wed, May 5, 2021 at 1:48 AM David Laight <David.Laight@aculab.com> wrote:
>
> This would error requests for address 0 earlier - but I don't
> believe they are ever valid in Linux.
> (Some historic x86 a.out formats did load to address 0.)

Not only loading at address 0 - there are various real reasons why
address 0 might actually be needed.

Anybody who still runs a 32-bit kernel and wants to use vm86 mode, for
example, requires address 0 because that's simply how the hardware
works.

So no. "mask to zero and make zero invalid" is not a proper model.

            Linus

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 18:32     ` Linus Torvalds
@ 2021-05-06  7:57       ` David Laight
  0 siblings, 0 replies; 19+ messages in thread
From: David Laight @ 2021-05-06  7:57 UTC (permalink / raw)
  To: 'Linus Torvalds'
  Cc: Josh Poimboeuf, Al Viro, x86, linux-kernel, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, Mark Rutland, Borislav Petkov

From: Linus Torvalds
> Sent: 05 May 2021 19:32
> 
> On Wed, May 5, 2021 at 1:48 AM David Laight <David.Laight@aculab.com> wrote:
> >
> > This would error requests for address 0 earlier - but I don't
> > believe they are ever valid in Linux.
> > (Some historic x86 a.out formats did load to address 0.)
> 
> Not only loading at address 0 - there are various real reason s why
> address 0 might actually be needed.
> 
> Anybody who still runs a 32-bit kernel and wants to use vm86 mode, for
> example, requires address 0 because that's simply how the hardware
> works.
> 
> So no. "mask to zero and make zero invalid" is not a proper model.

I had my doubts.
But letting userspace map address zero has been a security problem.
It can turn a kernel panic into executing 'user' code with
supervisor permissions.

So I did wonder if it had been banned completely.

	David


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05 16:55   ` Andy Lutomirski
@ 2021-05-06  8:36     ` David Laight
  2021-05-06 12:05       ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: David Laight @ 2021-05-06  8:36 UTC (permalink / raw)
  To: 'Andy Lutomirski', Josh Poimboeuf
  Cc: Al Viro, X86 ML, LKML, Linus Torvalds, Will Deacon, Dan Williams,
	Andrea Arcangeli, Waiman Long, Peter Zijlstra, Thomas Gleixner,
	Andrew Cooper, Christoph Hellwig, Mark Rutland, Borislav Petkov

From: Andy Lutomirski
> Sent: 05 May 2021 17:55
...
> Is there an equally efficient sequence that squishes the pointer value
> to something noncanonical or something like -1 instead of 0?  I'm not
> sure this matters, but it opens up the possibility of combining the
> access_ok check with the masking without any branches at all.

Are you thinking of using:
	uaddr = access_ok(uaddr, size)
and having the output value be one that is guaranteed
to fault when (a little later on) used to access user memory?

As well as the problem of finding a suitable invalid address
in 32bit architectures there can be issues if the code accesses
(uaddr + big_offset) since that could be outside the invalid
address window.

We are back to the fact that if we know the accesses are
sequential (or a single access) then it can usually be
arranged for them to fault without an explicit size check.

This could mean you have:
	if (access_ok_mask(&uaddr, size))
		return -EFAULT;
that never actually returns EFAULT on some architectures
when size is a small compile-time constant.

If you don't need to check the size then you'd need
something like:
	mov uaddr, reg
	add #-TASK_SIZE_MAX, reg	// sets carry for bad addresses
	sbb reg, reg			// -1 for bad addresses
	or  reg, uaddr
That converts addresses at or above TASK_SIZE_MAX to -1.
Non-byte accesses will fault on all x86 cpu.
For x64 (and some other 64bit) you can clear the top few
bits to get an invalid address.

So probably ok for get_user() and copy_from_user() (etc)
but not as a more general check.
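The add/sbb/or sequence above corresponds to this C model (userspace sketch; the TASK_SIZE_MAX value is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TASK_SIZE_MAX ((uintptr_t)1 << 47)	/* illustrative */

/* Addresses at or above the limit become all-ones (an invalid address
 * for any non-byte access); valid addresses pass through unchanged. */
static uintptr_t force_invalid(uintptr_t uaddr)
{
	/* models: add -TASK_SIZE_MAX (carry on bad), sbb (-1 on bad), or */
	uintptr_t bad = (uaddr >= TASK_SIZE_MAX) ? ~(uintptr_t)0 : 0;

	return uaddr | bad;
}
```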

	David.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-06  8:36     ` David Laight
@ 2021-05-06 12:05       ` Christoph Hellwig
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Hellwig @ 2021-05-06 12:05 UTC (permalink / raw)
  To: David Laight
  Cc: 'Andy Lutomirski',
	Josh Poimboeuf, Al Viro, X86 ML, LKML, Linus Torvalds,
	Will Deacon, Dan Williams, Andrea Arcangeli, Waiman Long,
	Peter Zijlstra, Thomas Gleixner, Andrew Cooper,
	Christoph Hellwig, Mark Rutland, Borislav Petkov

On Thu, May 06, 2021 at 08:36:08AM +0000, David Laight wrote:
> 	uaddr = access_ok(uaddr, size)

access_ok as a public API is not interesting.  There are very few
valid use cases for ever calling access_ok outside the usual
uaccess helpers.  So leave access_ok alone; there is no point in
touching all the callers except for removing most of them.  If OTOH
we can micro-optimize get_user and put_user by using a different
variant of access_ok that seems fair game and actually useful.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
                     ` (2 preceding siblings ...)
  2021-05-05 16:55   ` Andy Lutomirski
@ 2021-06-02 17:11   ` Sean Christopherson
  2021-06-02 20:11     ` Josh Poimboeuf
  3 siblings, 1 reply; 19+ messages in thread
From: Sean Christopherson @ 2021-06-02 17:11 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, David Laight, Mark Rutland, Borislav Petkov

On Tue, May 04, 2021, Josh Poimboeuf wrote:
> The x86 uaccess code uses barrier_nospec() in various places to prevent
> speculative dereferencing of user-controlled pointers (which might be
> combined with further gadgets or CPU bugs to leak data).
> 
> There are some issues with the current implementation:
> 
> - The barrier_nospec() in copy_from_user() was inadvertently removed
>   with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
>   raw_copy_{to,from}_user()")

Mostly out of curiosity, wasn't copy_{from,to}_user() flawed even before that
patch?  Non-constant sizes would go straight to copy_user_generic(), and even if
string ops are used and strings are magically not vulnerable, small sizes would
skip to normal loads/stores in _copy_short_string when using
copy_user_enhanced_fast_string().

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation
  2021-06-02 17:11   ` Sean Christopherson
@ 2021-06-02 20:11     ` Josh Poimboeuf
  0 siblings, 0 replies; 19+ messages in thread
From: Josh Poimboeuf @ 2021-06-02 20:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Al Viro, x86, linux-kernel, Linus Torvalds, Will Deacon,
	Dan Williams, Andrea Arcangeli, Waiman Long, Peter Zijlstra,
	Thomas Gleixner, Andrew Cooper, Andy Lutomirski,
	Christoph Hellwig, David Laight, Mark Rutland, Borislav Petkov

On Wed, Jun 02, 2021 at 05:11:57PM +0000, Sean Christopherson wrote:
> On Tue, May 04, 2021, Josh Poimboeuf wrote:
> > The x86 uaccess code uses barrier_nospec() in various places to prevent
> > speculative dereferencing of user-controlled pointers (which might be
> > combined with further gadgets or CPU bugs to leak data).
> > 
> > There are some issues with the current implementation:
> > 
> > - The barrier_nospec() in copy_from_user() was inadvertently removed
> >   with: 4b842e4e25b1 ("x86: get rid of small constant size cases in
> >   raw_copy_{to,from}_user()")
> 
> Mostly out of curiosity, wasn't copy_{from,to}_user() flawed even before that
> patch?  Non-constant sizes would go straight to copy_user_generic(), and even if
> string ops are used and strings are magically not vulnerable, small sizes would
> skip to normal loads/stores in _copy_short_string when using
> copy_user_enhanced_fast_string().

Yes, it appears so.

-- 
Josh


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-06-02 20:11 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-05  3:54 [PATCH v4 0/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
2021-05-05  3:54 ` [PATCH v4 1/4] uaccess: Always inline strn*_user() helper functions Josh Poimboeuf
2021-05-05  3:54 ` [PATCH v4 2/4] uaccess: Fix __user annotations for copy_mc_to_user() Josh Poimboeuf
2021-05-05  3:54 ` [PATCH v4 3/4] x86/uaccess: Use pointer masking to limit uaccess speculation Josh Poimboeuf
2021-05-05  8:48   ` David Laight
2021-05-05 13:19     ` Josh Poimboeuf
2021-05-05 13:51       ` David Laight
2021-05-05 18:32     ` Linus Torvalds
2021-05-06  7:57       ` David Laight
2021-05-05 14:25   ` Mark Rutland
2021-05-05 14:48     ` Josh Poimboeuf
2021-05-05 14:49     ` David Laight
2021-05-05 15:45       ` Mark Rutland
2021-05-05 16:55   ` Andy Lutomirski
2021-05-06  8:36     ` David Laight
2021-05-06 12:05       ` Christoph Hellwig
2021-06-02 17:11   ` Sean Christopherson
2021-06-02 20:11     ` Josh Poimboeuf
2021-05-05  3:54 ` [PATCH v4 4/4] x86/nospec: Remove barrier_nospec() Josh Poimboeuf
