* [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory
@ 2022-01-26 17:33 Janis Schoetterl-Glausch
  2022-01-26 17:33 ` [RFC PATCH 1/2] " Janis Schoetterl-Glausch
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-01-26 17:33 UTC (permalink / raw)
  To: Arnd Bergmann, Andrew Morton, Heiko Carstens
  Cc: Janis Schoetterl-Glausch, Alexander Viro, Kees Cook,
	Christian Borntraeger, linux-kernel

Something like this patch series is required as part of KVM supporting
storage keys on s390.
See https://lore.kernel.org/kvm/20220118095210.1651483-1-scgl@linux.ibm.com/

On s390 each physical page is associated with 4 access control bits.
On access, these are compared with an access key, which is either
provided by the instruction or taken from the CPU state.
Based on that comparison, the access either succeeds or is prevented.
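
To illustrate the check (a deliberately simplified sketch; it ignores the
various override controls, and the names are made up):

static bool key_access_allowed(u8 access_key, u8 storage_key_acc,
			       bool fetch_protection, bool is_store)
{
	/* Access key 0 or a matching key always passes the check. */
	if (access_key == 0 || access_key == storage_key_acc)
		return true;
	/* On a mismatch, fetches still pass unless fetch protection is set. */
	return !is_store && !fetch_protection;
}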

KVM on s390 needs to be able to emulate this behavior, for example during
instruction emulation, when it makes accesses on behalf of the guest.
In order to do that, we need variants of __copy_from/to_user that pass
along an access key to the architecture-specific implementation of
__copy_from/to_user. That is the only difference; the variants make the
same might_fault(), instrument_copy_to_user(), etc. calls as the normal
functions and need to be kept in sync with them.
If these __copy_from/to_user_key functions were maintained in
architecture-specific code, they would be prone to going out of sync
with their non-key counterparts whenever the common code changed.
So, instead, add these variants to include/linux/uaccess.h.
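
A caller in KVM could then look roughly like this (hypothetical sketch;
only __copy_from_user_key is from this series, everything else is made up
for illustration):

static int read_guest_with_key(void *data, const void __user *uaddr,
			       unsigned long len, u8 access_key)
{
	unsigned long rest;

	/* Copy guest memory, checked against the guest's access key. */
	rest = __copy_from_user_key(data, uaddr, len, access_key);
	/* A non-zero remainder means a fault or key protection stopped the copy. */
	return rest ? -EFAULT : 0;
}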

Considerations:
 * The key argument is an unsigned long, in order to make the functions
   less specific to s390, which would only need a u8.
   This could be generalized further, e.g. by having the type be defined
   by the architecture, with the default being a struct without any
   members.
   Also, the functions could be renamed ..._opaque, ..._arg, or similar.
 * Which functions do we provide _key variants for? Just defining
   __copy_from/to_user_key would make it rather specific to our use
   case.
 * Should ...copy_from/to_user_key functions be callable from common
   code? The patch defines the functions to be functionally identical
   to the normal functions if the architecture does not define
   raw_copy_from/to_user_key, so that this would be possible; however,
   it is not required for our use case.

For the minimal functionality we require, see the diff below.

bloat-o-meter reported a 0.03% kernel size increase.

Comments are much appreciated.

Janis Schoetterl-Glausch (2):
  uaccess: Add mechanism for key checked access to user memory
  s390/uaccess: Provide raw_copy_from/to_user_key

 arch/s390/include/asm/uaccess.h |  22 ++++++-
 arch/s390/lib/uaccess.c         |  48 ++++++++------
 include/linux/uaccess.h         | 107 ++++++++++++++++++++++++++++++++
 lib/usercopy.c                  |  33 ++++++++++
 4 files changed, 188 insertions(+), 22 deletions(-)


diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d..b3c58b7605d6 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 	return raw_copy_from_user(to, from, n);
 }
 
+#ifdef raw_copy_from_user_key
+static __always_inline __must_check unsigned long
+__copy_from_user_key(void *to, const void __user *from, unsigned long n,
+			  unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_from_user(to, from, n);
+	check_object_size(to, n, false);
+	return raw_copy_from_user_key(to, from, n, key);
+}
+#endif /* raw_copy_from_user_key */
+
 /**
  * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
  * @to:   Destination address, in user space.
@@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 	return raw_copy_to_user(to, from, n);
 }
 
+#ifdef raw_copy_to_user_key
+static __always_inline __must_check unsigned long
+__copy_to_user_key(void __user *to, const void *from, unsigned long n,
+			unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_to_user(to, from, n);
+	check_object_size(from, n, true);
+	return raw_copy_to_user_key(to, from, n, key);
+}
+#endif /* raw_copy_to_user_key */
+
 #ifdef INLINE_COPY_FROM_USER
 static inline __must_check unsigned long
 _copy_from_user(void *to, const void __user *from, unsigned long n)

base-commit: 0280e3c58f92b2fe0e8fbbdf8d386449168de4a8
-- 
2.32.0



* [RFC PATCH 1/2] uaccess: Add mechanism for key checked access to user memory
  2022-01-26 17:33 [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Janis Schoetterl-Glausch
@ 2022-01-26 17:33 ` Janis Schoetterl-Glausch
  2022-01-26 17:33 ` [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_key Janis Schoetterl-Glausch
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-01-26 17:33 UTC (permalink / raw)
  To: Arnd Bergmann, Andrew Morton, Heiko Carstens
  Cc: Janis Schoetterl-Glausch, Alexander Viro, Kees Cook,
	Christian Borntraeger, linux-kernel

KVM on s390 needs a mechanism to make accesses to guest memory that
honor storage key protection.

On s390 each physical page is associated with 4 access control bits.
On access, these are compared with an access key, which is either
provided by the instruction or taken from the CPU state.
Based on that comparison, the access either succeeds or is prevented.

KVM on s390 needs to be able to emulate this behavior, for example during
instruction emulation, when it makes accesses on behalf of the guest.
Introduce ...copy_{from,to}_user_key functions KVM can use to achieve
this. These differ from their non-key counterparts by taking an
additional key argument and delegating to raw_copy_{from,to}_user_key
instead of raw_copy_{from,to}_user. Otherwise they are the same.
If they were maintained in architecture-specific code, they would be
prone to going out of sync with their non-key counterparts.
To prevent this, add them to include/linux/uaccess.h.
In order to allow use of ...copy_{from,to}_user_key from common code,
the key argument is ignored on architectures that do not provide
raw_copy_{from,to}_user_key, and the functions then become functionally
identical to ...copy_{from,to}_user.
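
For reference, an architecture opts in along these lines (sketch only,
mirroring what patch 2/2 does for s390):

/* arch/<arch>/include/asm/uaccess.h */
#define raw_copy_from_user_key raw_copy_from_user_key
unsigned long __must_check
raw_copy_from_user_key(void *to, const void __user *from, unsigned long n,
		       unsigned long key);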

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 include/linux/uaccess.h | 107 ++++++++++++++++++++++++++++++++++++++++
 lib/usercopy.c          |  33 +++++++++++++
 2 files changed, 140 insertions(+)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d..cba64cd23193 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -93,6 +93,11 @@ static inline void force_uaccess_end(mm_segment_t oldfs)
  * Biarch ones should also provide raw_copy_in_user() - similar to the above,
  * but both source and destination are __user pointers (affected by set_fs()
  * as usual) and both source and destination can trigger faults.
+ *
+ * Architectures can also provide raw_copy_{from,to}_user_key variants that take
+ * an additional key argument that can be used for additional memory protection
+ * checks. If these variants are not provided, ...copy_{from,to}_user_key are
+ * identical to their non key counterparts.
  */
 
 static __always_inline __must_check unsigned long
@@ -201,6 +206,108 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	return n;
 }
 
+/*
+ * ...copy_{from,to}_user_key variants
+ * must be kept in sync with their non key counterparts.
+ */
+#ifndef raw_copy_from_user_key
+static __always_inline unsigned long __must_check
+raw_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+		       unsigned long key)
+{
+	return raw_copy_from_user(to, from, n);
+}
+#endif
+static __always_inline __must_check unsigned long
+__copy_from_user_key(void *to, const void __user *from, unsigned long n,
+		     unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_from_user(to, from, n);
+	check_object_size(to, n, false);
+	return raw_copy_from_user_key(to, from, n, key);
+}
+
+#ifdef INLINE_COPY_FROM_USER_KEY
+static inline __must_check unsigned long
+_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+		    unsigned long key)
+{
+	unsigned long res = n;
+	might_fault();
+	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+		instrument_copy_from_user(to, from, n);
+		res = raw_copy_from_user_key(to, from, n, key);
+	}
+	if (unlikely(res))
+		memset(to + (n - res), 0, res);
+	return res;
+}
+#else
+extern __must_check unsigned long
+_copy_from_user_key(void *, const void __user *, unsigned long, unsigned long);
+#endif
+
+#ifndef raw_copy_to_user_key
+static __always_inline unsigned long __must_check
+raw_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+		     unsigned long key)
+{
+	return raw_copy_to_user(to, from, n);
+}
+#endif
+
+static __always_inline __must_check unsigned long
+__copy_to_user_key(void __user *to, const void *from, unsigned long n,
+		   unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_to_user(to, from, n);
+	check_object_size(from, n, true);
+	return raw_copy_to_user_key(to, from, n, key);
+}
+
+#ifdef INLINE_COPY_TO_USER_KEY
+static inline __must_check unsigned long
+_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+		  unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	if (access_ok(to, n)) {
+		instrument_copy_to_user(to, from, n);
+		n = raw_copy_to_user_key(to, from, n, key);
+	}
+	return n;
+}
+#else
+extern __must_check unsigned long
+_copy_to_user_key(void __user *, const void *, unsigned long, unsigned long);
+#endif
+
+static __always_inline unsigned long __must_check
+copy_from_user_key(void *to, const void __user *from, unsigned long n,
+		   unsigned long key)
+{
+	if (likely(check_copy_size(to, n, false)))
+		n = _copy_from_user_key(to, from, n, key);
+	return n;
+}
+
+static __always_inline unsigned long __must_check
+copy_to_user_key(void __user *to, const void *from, unsigned long n,
+		 unsigned long key)
+{
+	if (likely(check_copy_size(from, n, true)))
+		n = _copy_to_user_key(to, from, n, key);
+	return n;
+}
+
 #ifndef copy_mc_to_kernel
 /*
  * Without arch opt-in this generic copy_mc_to_kernel() will not handle
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516..c13394d0f306 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -37,6 +37,39 @@ unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 EXPORT_SYMBOL(_copy_to_user);
 #endif
 
+#ifndef INLINE_COPY_FROM_USER_KEY
+unsigned long _copy_from_user_key(void *to, const void __user *from,
+				  unsigned long n, unsigned long key)
+{
+	unsigned long res = n;
+	might_fault();
+	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+		instrument_copy_from_user(to, from, n);
+		res = raw_copy_from_user_key(to, from, n, key);
+	}
+	if (unlikely(res))
+		memset(to + (n - res), 0, res);
+	return res;
+}
+EXPORT_SYMBOL(_copy_from_user_key);
+#endif
+
+#ifndef INLINE_COPY_TO_USER_KEY
+unsigned long _copy_to_user_key(void __user *to, const void *from,
+				unsigned long n, unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	if (likely(access_ok(to, n))) {
+		instrument_copy_to_user(to, from, n);
+		n = raw_copy_to_user_key(to, from, n, key);
+	}
+	return n;
+}
+EXPORT_SYMBOL(_copy_to_user_key);
+#endif
+
 /**
  * check_zeroed_user: check if a userspace buffer only contains zero bytes
  * @from: Source address, in userspace.
-- 
2.32.0



* [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_key
  2022-01-26 17:33 [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Janis Schoetterl-Glausch
  2022-01-26 17:33 ` [RFC PATCH 1/2] " Janis Schoetterl-Glausch
@ 2022-01-26 17:33 ` Janis Schoetterl-Glausch
  2022-01-31 13:39 ` [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Christian Borntraeger
  2022-02-03 18:11 ` Janis Schoetterl-Glausch
  3 siblings, 0 replies; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-01-26 17:33 UTC (permalink / raw)
  To: Arnd Bergmann, Andrew Morton, Heiko Carstens
  Cc: Janis Schoetterl-Glausch, Alexander Viro, Kees Cook,
	Christian Borntraeger, linux-kernel

This makes the user access functions that perform storage key checking
available, so that KVM can use them for emulation.
Since the existing uaccess implementation on s390 uses move instructions
that accept an access key, we can implement raw_copy_from/to_user_key by
extending the existing implementation.
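
For orientation (simplified; the diff below has the real code): the MVCOS
path passes the key via the operand-access-control word, while the
MVCP/MVCS paths expect it in bits 56-59 of the R3 register, which appears
to be why the inline assembly passes key << 4:

	union oac spec = {
		.oac2.key = key,	/* access key checked against the storage key */
		.oac2.as = PSW_BITS_AS_SECONDARY,
		.oac2.k = 1,		/* enable key checking for this operand */
		.oac2.a = 1,		/* enable the address-space control */
	};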

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 arch/s390/include/asm/uaccess.h | 22 +++++++++++++--
 arch/s390/lib/uaccess.c         | 48 +++++++++++++++++++--------------
 2 files changed, 48 insertions(+), 22 deletions(-)

diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index 147cb3534ce4..422066d7c5e2 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -33,15 +33,33 @@ static inline int __range_ok(unsigned long addr, unsigned long size)
 
 #define access_ok(addr, size) __access_ok(addr, size)
 
+#define raw_copy_from_user_key raw_copy_from_user_key
 unsigned long __must_check
-raw_copy_from_user(void *to, const void __user *from, unsigned long n);
+raw_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+		       unsigned long key);
 
+#define raw_copy_to_user_key raw_copy_to_user_key
 unsigned long __must_check
-raw_copy_to_user(void __user *to, const void *from, unsigned long n);
+raw_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+		     unsigned long key);
+
+static __always_inline unsigned long __must_check
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return raw_copy_from_user_key(to, from, n, 0);
+}
+
+static __always_inline unsigned long __must_check
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return raw_copy_to_user_key(to, from, n, 0);
+}
 
 #ifndef CONFIG_KASAN
 #define INLINE_COPY_FROM_USER
 #define INLINE_COPY_TO_USER
+#define INLINE_COPY_FROM_USER_KEY
+#define INLINE_COPY_TO_USER_KEY
 #endif
 
 int __put_user_bad(void) __attribute__((noreturn));
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index 8a5d21461889..689a5ab3121a 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -59,11 +59,13 @@ static inline int copy_with_mvcos(void)
 #endif
 
 static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
-						 unsigned long size)
+						 unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac2.key = key,
 		.oac2.as = PSW_BITS_AS_SECONDARY,
+		.oac2.k = 1,
 		.oac2.a = 1,
 	};
 
@@ -94,19 +96,19 @@ static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr
 }
 
 static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
-						unsigned long size)
+						unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcp  0(%0,%2),0(%1),%3\n"
+		"0: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcp  0(%0,%2),0(%1),%3\n"
+		"2: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n"	/* %4 = ptr + 255 */
@@ -115,7 +117,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcp  0(%4,%2),0(%1),%3\n"
+		"4: mvcp  0(%4,%2),0(%1),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -123,24 +125,28 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+unsigned long raw_copy_from_user_key(void *to, const void __user *from,
+				     unsigned long n, unsigned long key)
 {
 	if (copy_with_mvcos())
-		return copy_from_user_mvcos(to, from, n);
-	return copy_from_user_mvcp(to, from, n);
+		return copy_from_user_mvcos(to, from, n, key);
+	return copy_from_user_mvcp(to, from, n, key);
 }
-EXPORT_SYMBOL(raw_copy_from_user);
+EXPORT_SYMBOL(raw_copy_from_user_key);
 
 static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
-					       unsigned long size)
+					       unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac1.key = key,
 		.oac1.as = PSW_BITS_AS_SECONDARY,
+		.oac1.k = 1,
 		.oac1.a = 1,
 	};
 
@@ -171,19 +177,19 @@ static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
 }
 
 static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
-					      unsigned long size)
+					      unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcs  0(%0,%1),0(%2),%3\n"
+		"0: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcs  0(%0,%1),0(%2),%3\n"
+		"2: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n" /* %4 = ptr + 255 */
@@ -192,7 +198,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcs  0(%4,%1),0(%2),%3\n"
+		"4: mvcs  0(%4,%1),0(%2),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -200,17 +206,19 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+unsigned long raw_copy_to_user_key(void __user *to, const void *from,
+				   unsigned long n, unsigned long key)
 {
 	if (copy_with_mvcos())
-		return copy_to_user_mvcos(to, from, n);
-	return copy_to_user_mvcs(to, from, n);
+		return copy_to_user_mvcos(to, from, n, key);
+	return copy_to_user_mvcs(to, from, n, key);
 }
-EXPORT_SYMBOL(raw_copy_to_user);
+EXPORT_SYMBOL(raw_copy_to_user_key);
 
 static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
 {
-- 
2.32.0



* Re: [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory
  2022-01-26 17:33 [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Janis Schoetterl-Glausch
  2022-01-26 17:33 ` [RFC PATCH 1/2] " Janis Schoetterl-Glausch
  2022-01-26 17:33 ` [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_key Janis Schoetterl-Glausch
@ 2022-01-31 13:39 ` Christian Borntraeger
  2022-02-03 18:11 ` Janis Schoetterl-Glausch
  3 siblings, 0 replies; 8+ messages in thread
From: Christian Borntraeger @ 2022-01-31 13:39 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Arnd Bergmann, Andrew Morton, Heiko Carstens
  Cc: Alexander Viro, Kees Cook, linux-kernel

On 26.01.22 at 18:33, Janis Schoetterl-Glausch wrote:
> Something like this patch series is required as part of KVM supporting
> storage keys on s390.
> See https://lore.kernel.org/kvm/20220118095210.1651483-1-scgl@linux.ibm.com/

Just to give some more context. In theory we could confine the alternative
uaccess functions to s390x architecture code; after all, we only have one
place in KVM code where we call them. But this would very likely result in
future changes not being synced. It would very likely still continue to
work, but it might miss security and functionality enhancements. And I
think we want our KVM uaccess to also get the KASAN instrumentation, error
injection and so on. After all, there is a reason why all copy_*user
functions were merged and architectures now only provide raw_*_user
functions.

> 
> On s390 each physical page is associated with 4 access control bits.
> On access, these are compared with an access key, which is either
> provided by the instruction or taken from the CPU state.
> Based on that comparison, the access either succeeds or is prevented.
> 
> KVM on s390 needs to be able emulate this behavior, for example during
> instruction emulation, when it makes accesses on behalf of the guest.
> In order to do that, we need variants of __copy_from/to_user that pass
> along an access key to the architecture specific implementation of
> __copy_from/to_user. That is the only difference, variants do the same
> might_fault(), instrument_copy_to_user(), etc. calls as the normal
> functions do and need to be kept in sync with those.
> If these __copy_from/to_user_key functions were to be maintained
> in architecture specific code they would be prone to going out of sync
> with their non key counterparts if there were code changes.
> So, instead, add these variants to include/linux/uaccess.h.
> 
> Considerations:
>   * The key argument is an unsigned long, in order to make the functions
>     less specific to s390, which would only need an u8.
>     This could also be generalized further, i.e. by having the type be
>     defined by the architecture, with the default being a struct without
>     any members.
>     Also the functions could be renamed ..._opaque, ..._arg, or similar.
>   * Which functions do we provide _key variants for? Just defining
>     __copy_from/to_user_key would make it rather specific to our use
>     case.
>   * Should ...copy_from/to_user_key functions be callable from common
>     code? The patch defines the functions to be functionally identical
>     to the normal functions if the architecture does not define
>     raw_copy_from/to_user_key, so that this would be possible, however it
>     is not required for our use case.
> 
> For the minimal functionality we require see the diff below.
> 
> bloat-o-meter reported a .03% kernel size increase.
> 
> Comments are much appreciated.
> 
> Janis Schoetterl-Glausch (2):
>    uaccess: Add mechanism for key checked access to user memory
>    s390/uaccess: Provide raw_copy_from/to_user_key
> 
>   arch/s390/include/asm/uaccess.h |  22 ++++++-
>   arch/s390/lib/uaccess.c         |  48 ++++++++------
>   include/linux/uaccess.h         | 107 ++++++++++++++++++++++++++++++++
>   lib/usercopy.c                  |  33 ++++++++++
>   4 files changed, 188 insertions(+), 22 deletions(-)
> 
> 
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac0394087f7d..b3c58b7605d6 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
>   	return raw_copy_from_user(to, from, n);
>   }
>   
> +#ifdef raw_copy_from_user_key
> +static __always_inline __must_check unsigned long
> +__copy_from_user_key(void *to, const void __user *from, unsigned long n,
> +			  unsigned long key)
> +{
> +	might_fault();
> +	if (should_fail_usercopy())
> +		return n;
> +	instrument_copy_from_user(to, from, n);
> +	check_object_size(to, n, false);
> +	return raw_copy_from_user_key(to, from, n, key);
> +}
> +#endif /* raw_copy_from_user_key */
> +
>   /**
>    * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
>    * @to:   Destination address, in user space.
> @@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
>   	return raw_copy_to_user(to, from, n);
>   }
>   
> +#ifdef raw_copy_to_user_key
> +static __always_inline __must_check unsigned long
> +__copy_to_user_key(void __user *to, const void *from, unsigned long n,
> +			unsigned long key)
> +{
> +	might_fault();
> +	if (should_fail_usercopy())
> +		return n;
> +	instrument_copy_to_user(to, from, n);
> +	check_object_size(from, n, true);
> +	return raw_copy_to_user_key(to, from, n, key);
> +}
> +#endif /* raw_copy_to_user_key */
> +
>   #ifdef INLINE_COPY_FROM_USER
>   static inline __must_check unsigned long
>   _copy_from_user(void *to, const void __user *from, unsigned long n)
> 
> base-commit: 0280e3c58f92b2fe0e8fbbdf8d386449168de4a8


* Re: [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory
  2022-01-26 17:33 [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Janis Schoetterl-Glausch
                   ` (2 preceding siblings ...)
  2022-01-31 13:39 ` [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory Christian Borntraeger
@ 2022-02-03 18:11 ` Janis Schoetterl-Glausch
  2022-02-03 18:11   ` [RFC PATCH 1/2] uaccess: Add mechanism for arch specific user access with argument Janis Schoetterl-Glausch
  2022-02-03 18:11   ` [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_opaque Janis Schoetterl-Glausch
  3 siblings, 2 replies; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-03 18:11 UTC (permalink / raw)
  To: scgl; +Cc: akpm, arnd, borntraeger, hca, keescook, linux-kernel, viro

> Considerations:
>  * The key argument is an unsigned long, in order to make the functions
>    less specific to s390, which would only need an u8.
>    This could also be generalized further, i.e. by having the type be
>    defined by the architecture, with the default being a struct without
>    any members.
>    Also the functions could be renamed ..._opaque, ..._arg, or similar.
>  * Which functions do we provide _key variants for? Just defining
>    __copy_from/to_user_key would make it rather specific to our use
>    case.
>  * Should ...copy_from/to_user_key functions be callable from common
>    code? The patch defines the functions to be functionally identical
>    to the normal functions if the architecture does not define
>    raw_copy_from/to_user_key, so that this would be possible, however it
>    is not required for our use case.
> 
After thinking about it some more, this variant seems an attractive
compromise between the different dimensions.
It maximises extensibility by having the additional argument and its
semantics be completely architecture-defined.
At the same time it keeps the changes to a minimum, which reduces the
maintenance cost of keeping the functions in sync.
It is also clear how other use cases can be supported when they arise.
Calling the functions from common code could be supported by defining
the opaque argument as an empty struct by default and falling back to
raw_copy_from/to_user. If other variants of copy to/from user with an
additional argument are required, they can be added in the same manner
as is done here for __copy_from/to_user.
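
A minimal sketch of that default (illustrative only, not part of the
patches below):

#ifndef uaccess_opaque
/* Architectures that do not opt in get an empty argument type ... */
struct uaccess_opaque {};

/* ... and the raw variant simply ignores it. */
static __always_inline unsigned long __must_check
raw_copy_from_user_opaque(void *to, const void __user *from, unsigned long n,
			  struct uaccess_opaque opaque)
{
	return raw_copy_from_user(to, from, n);
}
#endif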
> 
> Comments are much appreciated.

Janis Schoetterl-Glausch (2):
  uaccess: Add mechanism for arch specific user access with argument
  s390/uaccess: Provide raw_copy_from/to_user_opaque

 arch/s390/include/asm/uaccess.h | 27 ++++++++++++++--
 arch/s390/lib/uaccess.c         | 56 ++++++++++++++++++++-------------
 include/linux/uaccess.h         | 28 +++++++++++++++++
 3 files changed, 88 insertions(+), 23 deletions(-)

-- 
2.32.0



* [RFC PATCH 1/2] uaccess: Add mechanism for arch specific user access with argument
  2022-02-03 18:11 ` Janis Schoetterl-Glausch
@ 2022-02-03 18:11   ` Janis Schoetterl-Glausch
  2022-02-03 19:20     ` Heiko Carstens
  2022-02-03 18:11   ` [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_opaque Janis Schoetterl-Glausch
  1 sibling, 1 reply; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-03 18:11 UTC (permalink / raw)
  To: scgl; +Cc: akpm, arnd, borntraeger, hca, keescook, linux-kernel, viro

KVM on s390 needs a mechanism to do accesses to guest memory
that honor storage key protection.

On s390 each physical page is associated with 4 access control bits.
On access, these are compared with an access key, which is either
provided by the instruction or taken from the CPU state.
Based on that comparison, the access either succeeds or is prevented.

KVM on s390 needs to be able to emulate this behavior, for example during
instruction emulation. KVM usually accesses the guest via
__copy_from/to_user, but in this case we need to also pass the access key.
Introduce __copy_from/to_user_opaque functions KVM can use to achieve
this by forwarding an architecture-specific argument.
These functions are the same as their non-_opaque counterparts except
for the additional argument; they also reside in include/linux/uaccess.h
so that they will not go out of sync should their counterparts change.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 include/linux/uaccess.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d..cc2c7c6e2b92 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 	return raw_copy_from_user(to, from, n);
 }
 
+#ifdef uaccess_opaque
+static __always_inline __must_check unsigned long
+__copy_from_user_opaque(void *to, const void __user *from, unsigned long n,
+			struct uaccess_opaque opaque)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_from_user(to, from, n);
+	check_object_size(to, n, false);
+	return raw_copy_from_user_opaque(to, from, n, opaque);
+}
+#endif /* uaccess_opaque */
+
 /**
  * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
  * @to:   Destination address, in user space.
@@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 	return raw_copy_to_user(to, from, n);
 }
 
+#ifdef uaccess_opaque
+static __always_inline __must_check unsigned long
+__copy_to_user_opaque(void __user *to, const void *from, unsigned long n,
+		      struct uaccess_opaque opaque)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_to_user(to, from, n);
+	check_object_size(from, n, true);
+	return raw_copy_to_user_opaque(to, from, n, opaque);
+}
+#endif /* uaccess_opaque */
+
 #ifdef INLINE_COPY_FROM_USER
 static inline __must_check unsigned long
 _copy_from_user(void *to, const void __user *from, unsigned long n)
-- 
2.32.0



* [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_opaque
  2022-02-03 18:11 ` Janis Schoetterl-Glausch
  2022-02-03 18:11   ` [RFC PATCH 1/2] uaccess: Add mechanism for arch specific user access with argument Janis Schoetterl-Glausch
@ 2022-02-03 18:11   ` Janis Schoetterl-Glausch
  1 sibling, 0 replies; 8+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-03 18:11 UTC (permalink / raw)
  To: scgl; +Cc: akpm, arnd, borntraeger, hca, keescook, linux-kernel, viro

This enables KVM to perform key-checked guest accesses by passing the
access key via the opaque argument of __copy_from/to_user_opaque.
Since the existing uaccess implementation on s390 uses move instructions
that accept an access key, we can implement raw_copy_from/to_user_opaque
by extending the existing implementation.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 arch/s390/include/asm/uaccess.h | 27 ++++++++++++++--
 arch/s390/lib/uaccess.c         | 56 ++++++++++++++++++++-------------
 2 files changed, 60 insertions(+), 23 deletions(-)

diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index 02b467461163..1a324bc3ae0b 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -33,11 +33,34 @@ static inline int __range_ok(unsigned long addr, unsigned long size)
 
 #define access_ok(addr, size) __access_ok(addr, size)
 
+#define uaccess_opaque uaccess_opaque
+struct uaccess_opaque {
+	u8 key;
+};
+
 unsigned long __must_check
-raw_copy_from_user(void *to, const void __user *from, unsigned long n);
+raw_copy_from_user_opaque(void *to, const void __user *from, unsigned long n,
+			  struct uaccess_opaque opaque);
 
 unsigned long __must_check
-raw_copy_to_user(void __user *to, const void *from, unsigned long n);
+raw_copy_to_user_opaque(void __user *to, const void *from, unsigned long n,
+			struct uaccess_opaque opaque);
+
+static __always_inline unsigned long __must_check
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	struct uaccess_opaque opaque = { .key = 0};
+
+	return raw_copy_from_user_opaque(to, from, n, opaque);
+}
+
+static __always_inline unsigned long __must_check
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	struct uaccess_opaque opaque = { .key = 0};
+
+	return raw_copy_to_user_opaque(to, from, n, opaque);
+}
 
 #ifndef CONFIG_KASAN
 #define INLINE_COPY_FROM_USER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index d3a700385875..6446634c3b75 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -59,11 +59,13 @@ static inline int copy_with_mvcos(void)
 #endif
 
 static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
-						 unsigned long size)
+						 unsigned long size, u8 key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac2.key = key,
 		.oac2.as = PSW_BITS_AS_SECONDARY,
+		.oac2.k = 1,
 		.oac2.a = 1,
 	};
 
@@ -94,19 +96,19 @@ static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr
 }
 
 static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
-						unsigned long size)
+						unsigned long size, u8 key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcp  0(%0,%2),0(%1),%3\n"
+		"0: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcp  0(%0,%2),0(%1),%3\n"
+		"2: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n"	/* %4 = ptr + 255 */
@@ -115,7 +117,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcp  0(%4,%2),0(%1),%3\n"
+		"4: mvcp  0(%4,%2),0(%1),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -123,24 +125,31 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+unsigned long raw_copy_from_user_opaque(void *to, const void __user *from,
+					unsigned long n,
+					struct uaccess_opaque opaque)
 {
+	u8 key = opaque.key;
+
 	if (copy_with_mvcos())
-		return copy_from_user_mvcos(to, from, n);
-	return copy_from_user_mvcp(to, from, n);
+		return copy_from_user_mvcos(to, from, n, key);
+	return copy_from_user_mvcp(to, from, n, key);
 }
-EXPORT_SYMBOL(raw_copy_from_user);
+EXPORT_SYMBOL(raw_copy_from_user_opaque);
 
-static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
-					       unsigned long size)
+static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
+					       unsigned long size, u8 key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac1.key = key,
 		.oac1.as = PSW_BITS_AS_SECONDARY,
+		.oac1.k = 1,
 		.oac1.a = 1,
 	};
 
@@ -171,19 +180,19 @@ static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
 }
 
 static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
-					      unsigned long size)
+					      unsigned long size, u8 key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcs  0(%0,%1),0(%2),%3\n"
+		"0: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcs  0(%0,%1),0(%2),%3\n"
+		"2: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n" /* %4 = ptr + 255 */
@@ -192,7 +201,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcs  0(%4,%1),0(%2),%3\n"
+		"4: mvcs  0(%4,%1),0(%2),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -200,17 +209,22 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+unsigned long raw_copy_to_user_opaque(void __user *to, const void *from,
+				      unsigned long n,
+				      struct uaccess_opaque opaque)
 {
+	u8 key = opaque.key;
+
 	if (copy_with_mvcos())
-		return copy_to_user_mvcos(to, from, n);
-	return copy_to_user_mvcs(to, from, n);
+		return copy_to_user_mvcos(to, from, n, key);
+	return copy_to_user_mvcs(to, from, n, key);
 }
-EXPORT_SYMBOL(raw_copy_to_user);
+EXPORT_SYMBOL(raw_copy_to_user_opaque);
 
 static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
 {
-- 
2.32.0



* Re: [RFC PATCH 1/2] uaccess: Add mechanism for arch specific user access with argument
  2022-02-03 18:11   ` [RFC PATCH 1/2] uaccess: Add mechanism for arch specific user access with argument Janis Schoetterl-Glausch
@ 2022-02-03 19:20     ` Heiko Carstens
  0 siblings, 0 replies; 8+ messages in thread
From: Heiko Carstens @ 2022-02-03 19:20 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: akpm, arnd, borntraeger, keescook, linux-kernel, viro

On Thu, Feb 03, 2022 at 07:11:40PM +0100, Janis Schoetterl-Glausch wrote:
> KVM on s390 needs a mechanism to do accesses to guest memory
> that honor storage key protection.
> 
> On s390 each physical page is associated with 4 access control bits.
> On access these are compared with an access key, which is either
> provided by the instruction or taken from the CPU state.
> Based on that comparison, the access either succeeds or is prevented.
> 
> KVM on s390 needs to be able emulate this behavior, for example during
> instruction emulation. KVM usually accesses the guest via
> __copy_from/to_user, but in this case we need to also pass the access key.
> Introduce __copy_from/to_user_opaque functions KVM can use to achieve
> this by forwarding an architecture specific argument.
> These functions are the same as their non _opaque counterparts, except
> for the additional argument and also reside in include/linux/uaccess.h
> so that they will not go out of sync should their counterparts change.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> ---
>  include/linux/uaccess.h | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac0394087f7d..cc2c7c6e2b92 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
>  	return raw_copy_from_user(to, from, n);
>  }
>  
> +#ifdef uaccess_opaque
> +static __always_inline __must_check unsigned long
> +__copy_from_user_opaque(void *to, const void __user *from, unsigned long n,
> +			struct uaccess_opaque opaque)
> +{
> +	might_fault();
> +	if (should_fail_usercopy())
> +		return n;
> +	instrument_copy_from_user(to, from, n);
> +	check_object_size(to, n, false);
> +	return raw_copy_from_user_opaque(to, from, n, opaque);
> +}
> +#endif /* uaccess_opaque */
> +
>  /**
>   * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
>   * @to:   Destination address, in user space.
> @@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
>  	return raw_copy_to_user(to, from, n);
>  }
>  
> +#ifdef uaccess_opaque
> +static __always_inline __must_check unsigned long
> +__copy_to_user_opaque(void __user *to, const void *from, unsigned long n,
> +		      struct uaccess_opaque opaque)
> +{
> +	might_fault();
> +	if (should_fail_usercopy())
> +		return n;
> +	instrument_copy_to_user(to, from, n);
> +	check_object_size(from, n, true);
> +	return raw_copy_to_user_opaque(to, from, n, opaque);
> +}
> +#endif /* uaccess_opaque */

I don't think this is acceptable for several reasons:

- we really don't want an "opaque" copy_to_user variant with completely
  different semantics for each architecture

- even if this were only for s390, it is anything but obvious to the
  reader what the semantics of "opaque" are

- making a double underscore variant of something without a corresponding
  regular api is really not nice

So I guess we have three options:

- add a "key" variant to common code, where the semantics are clearly that
  "key" is a matching access key required to access a user space page

- have this completely in s390 arch code and accept the burden (and risk)
  of keeping instrumentation, etc. in sync

- add some macros similar to the SYSCALL_DEFINE macros, which allow
  creating architecture specific copy_to/from_user variants with
  additional parameters (a rough sketch of this follows below).
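
A rough sketch of what such a macro could look like (names and details are
made up, purely to illustrate the third option):

#define DEFINE_COPY_FROM_USER_VARIANT(suffix, arg_type)			\
static __always_inline __must_check unsigned long			\
__copy_from_user_##suffix(void *to, const void __user *from,		\
			  unsigned long n, arg_type arg)		\
{									\
	might_fault();							\
	if (should_fail_usercopy())					\
		return n;						\
	instrument_copy_from_user(to, from, n);				\
	check_object_size(to, n, false);				\
	return raw_copy_from_user_##suffix(to, from, n, arg);		\
}

/* s390 could then instantiate: DEFINE_COPY_FROM_USER_VARIANT(key, u8) */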


