linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/10] KVM: s390: Do storage key checking
@ 2022-02-11 18:22 Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions Janis Schoetterl-Glausch
                   ` (10 more replies)
  0 siblings, 11 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Check storage keys when emulating instructions and let user space do
key-checked accesses.
User space can do so via an extension of the MEMOP IOCTL:
* allow optional key checking
* allow MEMOP on the vm fd, so key-checked accesses on absolute memory
  become possible (see the sketch below)
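
As an illustration, a minimal sketch of how user space could issue such a
key-checked access through the extended ioctl. This assumes the flag, op and
capability names introduced by this series (KVM_S390_MEMOP_F_SKEY_PROTECTION,
KVM_S390_MEMOP_ABSOLUTE_READ, KVM_CAP_S390_MEM_OP_EXTENSION); error handling
is omitted:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Sketch: key-checked read of guest absolute memory via the vm fd.
   * Assumes KVM_CAP_S390_MEM_OP_EXTENSION > 0 on the running kernel.
   */
  static int read_guest_abs_with_key(int vm_fd, __u64 gaddr, void *buf,
                                     __u32 size, __u8 access_key)
  {
          struct kvm_s390_mem_op ksmo = {
                  .gaddr = gaddr,
                  .buf = (__u64)(unsigned long)buf,
                  .size = size,
                  .op = KVM_S390_MEMOP_ABSOLUTE_READ,
                  .flags = KVM_S390_MEMOP_F_SKEY_PROTECTION,
                  .key = access_key,  /* 0..15, matched against storage keys */
          };

          return ioctl(vm_fd, KVM_S390_MEM_OP, &ksmo);
  }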

I haven't finished the memop selftest rewrite, but decided to send out a
new version anyway, since the functional patches are (hopefully) final
and the memop selftest patch works. I'll reply to it with the
rewritten version.

v3: https://lore.kernel.org/kvm/20220209170422.1910690-1-scgl@linux.ibm.com/
v2: https://lore.kernel.org/kvm/20220207165930.1608621-1-scgl@linux.ibm.com/

v3 -> v4
 * rebase
 * ignore key in memop if skey flag not specified
 * fix nits in documentation
 * pick up tags

v2 -> v3
 * get rid of reserved bytes check in vm,vcpu memop
 * minor documentation changes
 * moved memop selftest patches to end of series and squashed them,
   currently working on making the test pretty

v1 -> v2
 * rebase
 * storage key variants of _?copy_from/to_user instead of
   __copy_from/to_user_key, with long key arg instead of char
 * refactor protection override checks
 * u8 instead of char for key argument in s390 KVM code
 * add comments
 * pass ar (access register) to trans_exec in access_guest_with_key
 * check reserved/unused fields (backwards compatible)
 * move key arg of MEMOP out of flags
 * rename new MEMOP capability to KVM_CAP_S390_MEM_OP_EXTENSION
 * minor changes

Janis Schoetterl-Glausch (10):
  s390/uaccess: Add copy_from/to_user_key functions
  KVM: s390: Honor storage keys when accessing guest memory
  KVM: s390: handle_tprot: Honor storage keys
  KVM: s390: selftests: Test TEST PROTECTION emulation
  KVM: s390: Add optional storage key checking to MEMOP IOCTL
  KVM: s390: Add vm IOCTL for key checked guest absolute memory access
  KVM: s390: Rename existing vcpu memop functions
  KVM: s390: Add capability for storage key extension of MEM_OP IOCTL
  KVM: s390: Update api documentation for memop ioctl
  KVM: s390: selftests: Test memops with storage keys

 Documentation/virt/kvm/api.rst            | 112 ++++-
 arch/s390/include/asm/ctl_reg.h           |   2 +
 arch/s390/include/asm/page.h              |   2 +
 arch/s390/include/asm/uaccess.h           |  22 +
 arch/s390/kvm/gaccess.c                   | 250 +++++++++-
 arch/s390/kvm/gaccess.h                   |  84 +++-
 arch/s390/kvm/intercept.c                 |  12 +-
 arch/s390/kvm/kvm-s390.c                  | 132 ++++-
 arch/s390/kvm/priv.c                      |  66 +--
 arch/s390/lib/uaccess.c                   |  81 +++-
 include/uapi/linux/kvm.h                  |  11 +-
 tools/testing/selftests/kvm/.gitignore    |   1 +
 tools/testing/selftests/kvm/Makefile      |   1 +
 tools/testing/selftests/kvm/s390x/memop.c | 558 +++++++++++++++++++---
 tools/testing/selftests/kvm/s390x/tprot.c | 227 +++++++++
 15 files changed, 1375 insertions(+), 186 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/s390x/tprot.c

Range-diff against v3:
 1:  0049c4412978 =  1:  313eb689b715 s390/uaccess: Add copy_from/to_user_key functions
 2:  296096b9a7b9 =  2:  192fe30b1863 KVM: s390: Honor storage keys when accessing guest memory
 3:  a5976cb3a147 =  3:  19bd017ae5a4 KVM: s390: handle_tprot: Honor storage keys
 4:  5f5e056e66df =  4:  d20fad8d501b KVM: s390: selftests: Test TEST PROTECTION emulation
 5:  64fa17a83b26 !  5:  bdee09b4a15e KVM: s390: Add optional storage key checking to MEMOP IOCTL
    @@ Commit message
         CPU would, or pass another key if necessary.
     
         Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
    -    Acked-by: Janosch Frank <frankja@linux.ibm.com>
         Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
    +    Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
    +    Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
     
      ## arch/s390/kvm/kvm-s390.c ##
    -@@
    - #include <linux/sched/signal.h>
    - #include <linux/string.h>
    - #include <linux/pgtable.h>
    -+#include <linux/bitfield.h>
    - 
    - #include <asm/asm-offsets.h>
    - #include <asm/lowcore.h>
     @@ arch/s390/kvm/kvm-s390.c: static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
      	return r;
      }
    @@ arch/s390/kvm/kvm-s390.c: static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcp
     +	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
     +		if (access_key_invalid(mop->key))
     +			return -EINVAL;
    ++	} else {
    ++		mop->key = 0;
     +	}
      	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
      		tmpbuf = vmalloc(mop->size);
 6:  57e3ad332677 !  6:  e207a2f9af8a KVM: s390: Add vm IOCTL for key checked guest absolute memory access
    @@ Commit message
         accesses and so are not applied as they are when using the vcpu memop.
     
         Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
    -    Acked-by: Janosch Frank <frankja@linux.ibm.com>
    +    Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
     
      ## arch/s390/kvm/gaccess.c ##
     @@ arch/s390/kvm/gaccess.c: static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
    @@ arch/s390/kvm/kvm-s390.c: static bool access_key_invalid(u8 access_key)
     +	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
     +		if (access_key_invalid(mop->key))
     +			return -EINVAL;
    ++	} else {
    ++		mop->key = 0;
     +	}
     +	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
     +		tmpbuf = vmalloc(mop->size);
 7:  1615f5ab6e30 =  7:  52adbceebe41 KVM: s390: Rename existing vcpu memop functions
 8:  a8420e0f1b7f =  8:  43280a2db282 KVM: s390: Add capability for storage key extension of MEM_OP IOCTL
 9:  c59952ee362b !  9:  9389cd2f4d23 KVM: s390: Update api documentation for memop ioctl
    @@ Commit message
         as well as the existing SIDA operations.
     
         Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
    +    Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
     
      ## Documentation/virt/kvm/api.rst ##
     @@ Documentation/virt/kvm/api.rst: The fields in each entry are defined as follows:
    @@ Documentation/virt/kvm/api.rst: Parameters are specified via the following struc
     +the access. "ar" designates the access register number to be used; the valid
     +range is 0..15.
     +Logical accesses are permitted for the VCPU ioctl only.
    -+Logical accesses are permitted for non secure guests only.
    ++Logical accesses are permitted for non-protected guests only.
     +
     +Supported flags:
     +  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
    @@ Documentation/virt/kvm/api.rst: Parameters are specified via the following struc
     +  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
     +
     +The KVM_S390_MEMOP_F_CHECK_ONLY flag can be set to check whether the
    -+corresponding memory access would cause an access exception, without touching
    -+the data in memory at the destination.
    ++corresponding memory access would cause an access exception; however,
    ++no actual access to the data in memory at the destination is performed.
     +In this case, "buf" is unused and can be NULL.
     +
     +In case an access exception occurred during the access (or would occur
    @@ Documentation/virt/kvm/api.rst: Parameters are specified via the following struc
     +Absolute accesses are permitted for the VM ioctl if KVM_CAP_S390_MEM_OP_EXTENSION
     +is > 0.
     +Currently absolute accesses are not permitted for VCPU ioctls.
    -+Absolute accesses are permitted for non secure guests only.
    ++Absolute accesses are permitted for non-protected guests only.
     +
     +Supported flags:
     +  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
    @@ Documentation/virt/kvm/api.rst: Parameters are specified via the following struc
     +^^^^^^^^^^^^^^^^
     +
     +Access the secure instruction data area which contains memory operands necessary
    -+for instruction emulation for secure guests.
    ++for instruction emulation for protected guests.
     +SIDA accesses are available if the KVM_CAP_S390_PROTECTED capability is available.
     +SIDA accesses are permitted for the VCPU ioctl only.
    -+SIDA accesses are permitted for secure guests only.
    ++SIDA accesses are permitted for protected guests only.
      
     -The "reserved" field is meant for future extensions. It is not used by
     -KVM with the currently defined set of flags.
10:  68752e1eca95 = 10:  af33593d63a4 KVM: s390: selftests: Test memops with storage keys

base-commit: f1baf68e1383f6ed93eb9cff2866d46562607a43
-- 
2.32.0



* [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-14 15:30   ` Claudio Imbrenda
  2022-02-11 18:22 ` [PATCH v4 02/10] KVM: s390: Honor storage keys when accessing guest memory Janis Schoetterl-Glausch
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Add copy_from/to_user_key functions, which perform storage key checking.
These functions can be used by KVM for emulating instructions that need
to be key checked.
These functions differ from their non _key counterparts in
include/linux/uaccess.h only in the additional key argument and must be
kept in sync with those.

Since the existing uaccess implementation on s390 makes use of move
instructions that support supplying an additional access key,
we can implement raw_copy_from/to_user_key by enhancing the
existing implementation.
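
As a usage sketch (a hypothetical caller, not part of this patch): kernel
code that holds a guest memory area mapped at a user space address could
perform a key-checked read as follows. Note that, like copy_from_user(),
copy_from_user_key() returns the number of bytes that could NOT be copied,
so a key protection mismatch shows up as a nonzero remainder:

  /* Hypothetical helper, kernel context assumed. */
  static int read_guest_checked(void *dst, const void __user *src,
                                unsigned long len, u8 access_key)
  {
          /* nonzero -> fault or key protection violation */
          if (copy_from_user_key(dst, src, len, access_key))
                  return -EFAULT;
          return 0;
  }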

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/include/asm/uaccess.h | 22 +++++++++
 arch/s390/lib/uaccess.c         | 81 +++++++++++++++++++++++++--------
 2 files changed, 85 insertions(+), 18 deletions(-)

diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index d74e26b48604..ba1bcb91af95 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -44,6 +44,28 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n);
 #define INLINE_COPY_TO_USER
 #endif
 
+unsigned long __must_check
+_copy_from_user_key(void *to, const void __user *from, unsigned long n, unsigned long key);
+
+static __always_inline unsigned long __must_check
+copy_from_user_key(void *to, const void __user *from, unsigned long n, unsigned long key)
+{
+	if (likely(check_copy_size(to, n, false)))
+		n = _copy_from_user_key(to, from, n, key);
+	return n;
+}
+
+unsigned long __must_check
+_copy_to_user_key(void __user *to, const void *from, unsigned long n, unsigned long key);
+
+static __always_inline unsigned long __must_check
+copy_to_user_key(void __user *to, const void *from, unsigned long n, unsigned long key)
+{
+	if (likely(check_copy_size(from, n, true)))
+		n = _copy_to_user_key(to, from, n, key);
+	return n;
+}
+
 int __put_user_bad(void) __attribute__((noreturn));
 int __get_user_bad(void) __attribute__((noreturn));
 
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index 8a5d21461889..b709239feb5d 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -59,11 +59,13 @@ static inline int copy_with_mvcos(void)
 #endif
 
 static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
-						 unsigned long size)
+						 unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac2.key = key,
 		.oac2.as = PSW_BITS_AS_SECONDARY,
+		.oac2.k = 1,
 		.oac2.a = 1,
 	};
 
@@ -94,19 +96,19 @@ static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr
 }
 
 static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
-						unsigned long size)
+						unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcp  0(%0,%2),0(%1),%3\n"
+		"0: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcp  0(%0,%2),0(%1),%3\n"
+		"2: mvcp  0(%0,%2),0(%1),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n"	/* %4 = ptr + 255 */
@@ -115,7 +117,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcp  0(%4,%2),0(%1),%3\n"
+		"4: mvcp  0(%4,%2),0(%1),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -123,24 +125,49 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+static unsigned long raw_copy_from_user_key(void *to, const void __user *from,
+					    unsigned long n, unsigned long key)
 {
 	if (copy_with_mvcos())
-		return copy_from_user_mvcos(to, from, n);
-	return copy_from_user_mvcp(to, from, n);
+		return copy_from_user_mvcos(to, from, n, key);
+	return copy_from_user_mvcp(to, from, n, key);
+}
+
+unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	return raw_copy_from_user_key(to, from, n, 0);
 }
 EXPORT_SYMBOL(raw_copy_from_user);
 
+unsigned long _copy_from_user_key(void *to, const void __user *from,
+				  unsigned long n, unsigned long key)
+{
+	unsigned long res = n;
+
+	might_fault();
+	if (!should_fail_usercopy()) {
+		instrument_copy_from_user(to, from, n);
+		res = raw_copy_from_user_key(to, from, n, key);
+	}
+	if (unlikely(res))
+		memset(to + (n - res), 0, res);
+	return res;
+}
+EXPORT_SYMBOL(_copy_from_user_key);
+
 static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
-					       unsigned long size)
+					       unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 	union oac spec = {
+		.oac1.key = key,
 		.oac1.as = PSW_BITS_AS_SECONDARY,
+		.oac1.k = 1,
 		.oac1.a = 1,
 	};
 
@@ -171,19 +198,19 @@ static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
 }
 
 static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
-					      unsigned long size)
+					      unsigned long size, unsigned long key)
 {
 	unsigned long tmp1, tmp2;
 
 	tmp1 = -256UL;
 	asm volatile(
 		"   sacf  0\n"
-		"0: mvcs  0(%0,%1),0(%2),%3\n"
+		"0: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"7: jz    5f\n"
 		"1: algr  %0,%3\n"
 		"   la    %1,256(%1)\n"
 		"   la    %2,256(%2)\n"
-		"2: mvcs  0(%0,%1),0(%2),%3\n"
+		"2: mvcs  0(%0,%1),0(%2),%[key]\n"
 		"8: jnz   1b\n"
 		"   j     5f\n"
 		"3: la    %4,255(%1)\n" /* %4 = ptr + 255 */
@@ -192,7 +219,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		"   slgr  %4,%1\n"
 		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
 		"   jnh   6f\n"
-		"4: mvcs  0(%4,%1),0(%2),%3\n"
+		"4: mvcs  0(%4,%1),0(%2),%[key]\n"
 		"9: slgr  %0,%4\n"
 		"   j     6f\n"
 		"5: slgr  %0,%0\n"
@@ -200,18 +227,36 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
 		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
 		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
-		: : "cc", "memory");
+		: [key] "d" (key << 4)
+		: "cc", "memory");
 	return size;
 }
 
-unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+static unsigned long raw_copy_to_user_key(void __user *to, const void *from,
+					  unsigned long n, unsigned long key)
 {
 	if (copy_with_mvcos())
-		return copy_to_user_mvcos(to, from, n);
-	return copy_to_user_mvcs(to, from, n);
+		return copy_to_user_mvcos(to, from, n, key);
+	return copy_to_user_mvcs(to, from, n, key);
+}
+
+unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return raw_copy_to_user_key(to, from, n, 0);
 }
 EXPORT_SYMBOL(raw_copy_to_user);
 
+unsigned long _copy_to_user_key(void __user *to, const void *from,
+				unsigned long n, unsigned long key)
+{
+	might_fault();
+	if (should_fail_usercopy())
+		return n;
+	instrument_copy_to_user(to, from, n);
+	return raw_copy_to_user_key(to, from, n, key);
+}
+EXPORT_SYMBOL(_copy_to_user_key);
+
 static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
 {
 	unsigned long tmp1, tmp2;
-- 
2.32.0



* [PATCH v4 02/10] KVM: s390: Honor storage keys when accessing guest memory
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-14 15:52   ` Claudio Imbrenda
  2022-02-11 18:22 ` [PATCH v4 03/10] KVM: s390: handle_tprot: Honor storage keys Janis Schoetterl-Glausch
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Storage key checking had not been implemented for instructions emulated
by KVM. Implement it by enhancing the functions used for guest access,
in particular those making use of access_guest, which has been renamed
to access_guest_with_key.
Accesses via access_guest_real should not be key checked.

For actual accesses, key checking is done by
copy_from/to_user_key (which internally uses MVCOS/MVCP/MVCS).
In cases where accessibility is checked without an actual access,
this is performed by getting the storage key and checking if the access
key matches. In both cases, if applicable, the storage and fetch
protection overrides are honored.
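
For reference, a condensed sketch of the matching rule the new code applies
(simplified; the storage and fetch protection overrides that the patch also
honors are omitted here, and key_allows() is just an illustrative name):

  /* Storage key byte as returned by get_guest_storage_key():
   * access-control bits in the high nibble, fetch-protection bit 0x08.
   */
  static bool key_allows(u8 storage_key, u8 access_key, bool is_fetch)
  {
          u8 acc = storage_key >> 4;

          if (access_key == 0 || access_key == acc)
                  return true;   /* key 0 matches anything, else exact match */
          if (is_fetch && !(storage_key & 0x08))
                  return true;   /* fetch with fetch protection disabled */
          return false;          /* -> PGM_PROTECTION */
  }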

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
---
 arch/s390/include/asm/ctl_reg.h |   2 +
 arch/s390/include/asm/page.h    |   2 +
 arch/s390/kvm/gaccess.c         | 187 ++++++++++++++++++++++++++++++--
 arch/s390/kvm/gaccess.h         |  77 +++++++++++--
 arch/s390/kvm/intercept.c       |  12 +-
 arch/s390/kvm/kvm-s390.c        |   4 +-
 6 files changed, 253 insertions(+), 31 deletions(-)

diff --git a/arch/s390/include/asm/ctl_reg.h b/arch/s390/include/asm/ctl_reg.h
index 04dc65f8901d..c800199a376b 100644
--- a/arch/s390/include/asm/ctl_reg.h
+++ b/arch/s390/include/asm/ctl_reg.h
@@ -12,6 +12,8 @@
 
 #define CR0_CLOCK_COMPARATOR_SIGN	BIT(63 - 10)
 #define CR0_LOW_ADDRESS_PROTECTION	BIT(63 - 35)
+#define CR0_FETCH_PROTECTION_OVERRIDE	BIT(63 - 38)
+#define CR0_STORAGE_PROTECTION_OVERRIDE	BIT(63 - 39)
 #define CR0_EMERGENCY_SIGNAL_SUBMASK	BIT(63 - 49)
 #define CR0_EXTERNAL_CALL_SUBMASK	BIT(63 - 50)
 #define CR0_CLOCK_COMPARATOR_SUBMASK	BIT(63 - 52)
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index d98d17a36c7b..cfc4d6fb2385 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -20,6 +20,8 @@
 #define PAGE_SIZE	_PAGE_SIZE
 #define PAGE_MASK	_PAGE_MASK
 #define PAGE_DEFAULT_ACC	0
+/* storage-protection override */
+#define PAGE_SPO_ACC		9
 #define PAGE_DEFAULT_KEY	(PAGE_DEFAULT_ACC << 4)
 
 #define HPAGE_SHIFT	20
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 4460808c3b9a..7fca0cff4c12 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -10,6 +10,7 @@
 #include <linux/mm_types.h>
 #include <linux/err.h>
 #include <linux/pgtable.h>
+#include <linux/bitfield.h>
 
 #include <asm/gmap.h>
 #include "kvm-s390.h"
@@ -794,6 +795,79 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
 	return 1;
 }
 
+static bool fetch_prot_override_applicable(struct kvm_vcpu *vcpu, enum gacc_mode mode,
+					   union asce asce)
+{
+	psw_t *psw = &vcpu->arch.sie_block->gpsw;
+	unsigned long override;
+
+	if (mode == GACC_FETCH || mode == GACC_IFETCH) {
+		/* check if fetch protection override enabled */
+		override = vcpu->arch.sie_block->gcr[0];
+		override &= CR0_FETCH_PROTECTION_OVERRIDE;
+		/* not applicable if subject to DAT && private space */
+		override = override && !(psw_bits(*psw).dat && asce.p);
+		return override;
+	}
+	return false;
+}
+
+static bool fetch_prot_override_applies(unsigned long ga, unsigned int len)
+{
+	return ga < 2048 && ga + len <= 2048;
+}
+
+static bool storage_prot_override_applicable(struct kvm_vcpu *vcpu)
+{
+	/* check if storage protection override enabled */
+	return vcpu->arch.sie_block->gcr[0] & CR0_STORAGE_PROTECTION_OVERRIDE;
+}
+
+static bool storage_prot_override_applies(u8 access_control)
+{
+	/* matches special storage protection override key (9) -> allow */
+	return access_control == PAGE_SPO_ACC;
+}
+
+static int vcpu_check_access_key(struct kvm_vcpu *vcpu, u8 access_key,
+				 enum gacc_mode mode, union asce asce, gpa_t gpa,
+				 unsigned long ga, unsigned int len)
+{
+	u8 storage_key, access_control;
+	unsigned long hva;
+	int r;
+
+	/* access key 0 matches any storage key -> allow */
+	if (access_key == 0)
+		return 0;
+	/*
+	 * caller needs to ensure that gfn is accessible, so we can
+	 * assume that this cannot fail
+	 */
+	hva = gfn_to_hva(vcpu->kvm, gpa_to_gfn(gpa));
+	mmap_read_lock(current->mm);
+	r = get_guest_storage_key(current->mm, hva, &storage_key);
+	mmap_read_unlock(current->mm);
+	if (r)
+		return r;
+	access_control = FIELD_GET(_PAGE_ACC_BITS, storage_key);
+	/* access key matches storage key -> allow */
+	if (access_control == access_key)
+		return 0;
+	if (mode == GACC_FETCH || mode == GACC_IFETCH) {
+		/* it is a fetch and fetch protection is off -> allow */
+		if (!(storage_key & _PAGE_FP_BIT))
+			return 0;
+		if (fetch_prot_override_applicable(vcpu, mode, asce) &&
+		    fetch_prot_override_applies(ga, len))
+			return 0;
+	}
+	if (storage_prot_override_applicable(vcpu) &&
+	    storage_prot_override_applies(access_control))
+		return 0;
+	return PGM_PROTECTION;
+}
+
 /**
  * guest_range_to_gpas() - Calculate guest physical addresses of page fragments
  * covering a logical range
@@ -804,6 +878,7 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
  * @len: length of range in bytes
  * @asce: address-space-control element to use for translation
  * @mode: access mode
+ * @access_key: access key to match the range's storage keys against
  *
  * Translate a logical range to a series of guest absolute addresses,
  * such that the concatenation of page fragments starting at each gpa make up
@@ -830,7 +905,8 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
  */
 static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 			       unsigned long *gpas, unsigned long len,
-			       const union asce asce, enum gacc_mode mode)
+			       const union asce asce, enum gacc_mode mode,
+			       u8 access_key)
 {
 	psw_t *psw = &vcpu->arch.sie_block->gpsw;
 	unsigned int offset = offset_in_page(ga);
@@ -857,6 +933,10 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 		}
 		if (rc)
 			return trans_exc(vcpu, rc, ga, ar, mode, prot);
+		rc = vcpu_check_access_key(vcpu, access_key, mode, asce, gpa, ga,
+					   fragment_len);
+		if (rc)
+			return trans_exc(vcpu, rc, ga, ar, mode, PROT_TYPE_KEYC);
 		if (gpas)
 			*gpas++ = gpa;
 		offset = 0;
@@ -880,16 +960,54 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
 	return rc;
 }
 
-int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
-		 unsigned long len, enum gacc_mode mode)
+static int
+access_guest_page_with_key(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
+			   void *data, unsigned int len, u8 access_key)
+{
+	struct kvm_memory_slot *slot;
+	bool writable;
+	gfn_t gfn;
+	hva_t hva;
+	int rc;
+
+	gfn = gpa >> PAGE_SHIFT;
+	slot = gfn_to_memslot(kvm, gfn);
+	hva = gfn_to_hva_memslot_prot(slot, gfn, &writable);
+
+	if (kvm_is_error_hva(hva))
+		return PGM_ADDRESSING;
+	/*
+	 * Check if it's a read-only memslot, even though that can't occur
+	 * (they're unsupported). Don't try to actually handle that case.
+	 */
+	if (!writable && mode == GACC_STORE)
+		return -EOPNOTSUPP;
+	hva += offset_in_page(gpa);
+	if (mode == GACC_STORE)
+		rc = copy_to_user_key((void __user *)hva, data, len, access_key);
+	else
+		rc = copy_from_user_key(data, (void __user *)hva, len, access_key);
+	if (rc)
+		return PGM_PROTECTION;
+	if (mode == GACC_STORE)
+		mark_page_dirty_in_slot(kvm, slot, gfn);
+	return 0;
+}
+
+int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+			  void *data, unsigned long len, enum gacc_mode mode,
+			  u8 access_key)
 {
 	psw_t *psw = &vcpu->arch.sie_block->gpsw;
 	unsigned long nr_pages, idx;
 	unsigned long gpa_array[2];
 	unsigned int fragment_len;
 	unsigned long *gpas;
+	enum prot_type prot;
 	int need_ipte_lock;
 	union asce asce;
+	bool try_storage_prot_override;
+	bool try_fetch_prot_override;
 	int rc;
 
 	if (!len)
@@ -904,16 +1022,47 @@ int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
 		gpas = vmalloc(array_size(nr_pages, sizeof(unsigned long)));
 	if (!gpas)
 		return -ENOMEM;
+	try_fetch_prot_override = fetch_prot_override_applicable(vcpu, mode, asce);
+	try_storage_prot_override = storage_prot_override_applicable(vcpu);
 	need_ipte_lock = psw_bits(*psw).dat && !asce.r;
 	if (need_ipte_lock)
 		ipte_lock(vcpu);
-	rc = guest_range_to_gpas(vcpu, ga, ar, gpas, len, asce, mode);
-	for (idx = 0; idx < nr_pages && !rc; idx++) {
+	/*
+	 * Since we do the access further down ultimately via a move instruction
+	 * that does key checking and returns an error in case of a protection
+	 * violation, we don't need to do the check during address translation.
+	 * Skip it by passing access key 0, which matches any storage key,
+	 * obviating the need for any further checks. As a result the check is
+	 * handled entirely in hardware on access; we only need to take care to
+	 * forego key protection checking if fetch protection override applies,
+	 * or to retry with the special key 9 in case of storage protection override.
+	 */
+	rc = guest_range_to_gpas(vcpu, ga, ar, gpas, len, asce, mode, 0);
+	if (rc)
+		goto out_unlock;
+	for (idx = 0; idx < nr_pages; idx++) {
 		fragment_len = min(PAGE_SIZE - offset_in_page(gpas[idx]), len);
-		rc = access_guest_page(vcpu->kvm, mode, gpas[idx], data, fragment_len);
+		if (try_fetch_prot_override && fetch_prot_override_applies(ga, fragment_len)) {
+			rc = access_guest_page(vcpu->kvm, mode, gpas[idx],
+					       data, fragment_len);
+		} else {
+			rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
+							data, fragment_len, access_key);
+		}
+		if (rc == PGM_PROTECTION && try_storage_prot_override)
+			rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
+							data, fragment_len, PAGE_SPO_ACC);
+		if (rc == PGM_PROTECTION)
+			prot = PROT_TYPE_KEYC;
+		if (rc)
+			break;
 		len -= fragment_len;
 		data += fragment_len;
+		ga = kvm_s390_logical_to_effective(vcpu, ga + fragment_len);
 	}
+	if (rc > 0)
+		rc = trans_exc(vcpu, rc, ga, ar, mode, prot);
+out_unlock:
 	if (need_ipte_lock)
 		ipte_unlock(vcpu);
 	if (nr_pages > ARRAY_SIZE(gpa_array))
@@ -940,12 +1089,13 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 }
 
 /**
- * guest_translate_address - translate guest logical into guest absolute address
+ * guest_translate_address_with_key - translate guest logical into guest absolute address
  * @vcpu: virtual cpu
  * @gva: Guest virtual address
  * @ar: Access register
  * @gpa: Guest physical address
  * @mode: Translation access mode
+ * @access_key: access key to match the storage key with
  *
  * Parameter semantics are the same as the ones from guest_translate.
  * The memory contents at the guest address are not changed.
@@ -953,8 +1103,9 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
  * Note: The IPTE lock is not taken during this function, so the caller
  * has to take care of this.
  */
-int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-			    unsigned long *gpa, enum gacc_mode mode)
+int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
+				     unsigned long *gpa, enum gacc_mode mode,
+				     u8 access_key)
 {
 	union asce asce;
 	int rc;
@@ -963,7 +1114,17 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 	rc = get_vcpu_asce(vcpu, &asce, gva, ar, mode);
 	if (rc)
 		return rc;
-	return guest_range_to_gpas(vcpu, gva, ar, gpa, 1, asce, mode);
+	return guest_range_to_gpas(vcpu, gva, ar, gpa, 1, asce, mode,
+				   access_key);
+}
+
+int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
+			    unsigned long *gpa, enum gacc_mode mode)
+{
+	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
+
+	return guest_translate_address_with_key(vcpu, gva, ar, gpa, mode,
+						access_key);
 }
 
 /**
@@ -973,9 +1134,10 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
  * @ar: Access register
  * @length: Length of test range
  * @mode: Translation access mode
+ * @access_key: access key to match the storage keys with
  */
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-		    unsigned long length, enum gacc_mode mode)
+		    unsigned long length, enum gacc_mode mode, u8 access_key)
 {
 	union asce asce;
 	int rc = 0;
@@ -984,7 +1146,8 @@ int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 	if (rc)
 		return rc;
 	ipte_lock(vcpu);
-	rc = guest_range_to_gpas(vcpu, gva, ar, NULL, length, asce, mode);
+	rc = guest_range_to_gpas(vcpu, gva, ar, NULL, length, asce, mode,
+				 access_key);
 	ipte_unlock(vcpu);
 
 	return rc;
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index 7c72a5e3449f..e5b2f56e7962 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -186,24 +186,31 @@ enum gacc_mode {
 	GACC_IFETCH,
 };
 
+int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
+				     unsigned long *gpa, enum gacc_mode mode,
+				     u8 access_key);
+
 int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
 			    u8 ar, unsigned long *gpa, enum gacc_mode mode);
+
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-		    unsigned long length, enum gacc_mode mode);
+		    unsigned long length, enum gacc_mode mode, u8 access_key);
 
-int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
-		 unsigned long len, enum gacc_mode mode);
+int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+			  void *data, unsigned long len, enum gacc_mode mode,
+			  u8 access_key);
 
 int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 		      void *data, unsigned long len, enum gacc_mode mode);
 
 /**
- * write_guest - copy data from kernel space to guest space
+ * write_guest_with_key - copy data from kernel space to guest space
  * @vcpu: virtual cpu
  * @ga: guest address
  * @ar: access register
  * @data: source address in kernel space
  * @len: number of bytes to copy
+ * @access_key: access key the storage key needs to match
  *
  * Copy @len bytes from @data (kernel space) to @ga (guest address).
  * In order to copy data to guest space the PSW of the vcpu is inspected:
@@ -214,8 +221,8 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
  * The addressing mode of the PSW is also inspected, so that address wrap
  * around is taken into account for 24-, 31- and 64-bit addressing mode,
  * if the to be copied data crosses page boundaries in guest address space.
- * In addition also low address and DAT protection are inspected before
- * copying any data (key protection is currently not implemented).
+ * In addition low address, DAT and key protection checks are performed before
+ * copying any data.
  *
  * This function modifies the 'struct kvm_s390_pgm_info pgm' member of @vcpu.
  * In case of an access exception (e.g. protection exception) pgm will contain
@@ -243,10 +250,53 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
  *	 if data has been changed in guest space in case of an exception.
  */
 static inline __must_check
+int write_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+			 void *data, unsigned long len, u8 access_key)
+{
+	return access_guest_with_key(vcpu, ga, ar, data, len, GACC_STORE,
+				     access_key);
+}
+
+/**
+ * write_guest - copy data from kernel space to guest space
+ * @vcpu: virtual cpu
+ * @ga: guest address
+ * @ar: access register
+ * @data: source address in kernel space
+ * @len: number of bytes to copy
+ *
+ * The behaviour of write_guest is identical to write_guest_with_key, except
+ * that the PSW access key is used instead of an explicit argument.
+ */
+static inline __must_check
 int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
 		unsigned long len)
 {
-	return access_guest(vcpu, ga, ar, data, len, GACC_STORE);
+	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
+
+	return write_guest_with_key(vcpu, ga, ar, data, len, access_key);
+}
+
+/**
+ * read_guest_with_key - copy data from guest space to kernel space
+ * @vcpu: virtual cpu
+ * @ga: guest address
+ * @ar: access register
+ * @data: destination address in kernel space
+ * @len: number of bytes to copy
+ * @access_key: access key the storage key needs to match
+ *
+ * Copy @len bytes from @ga (guest address) to @data (kernel space).
+ *
+ * The behaviour of read_guest_with_key is identical to write_guest_with_key,
+ * except that data will be copied from guest space to kernel space.
+ */
+static inline __must_check
+int read_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
+			void *data, unsigned long len, u8 access_key)
+{
+	return access_guest_with_key(vcpu, ga, ar, data, len, GACC_FETCH,
+				     access_key);
 }
 
 /**
@@ -259,14 +309,16 @@ int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
  *
  * Copy @len bytes from @ga (guest address) to @data (kernel space).
  *
- * The behaviour of read_guest is identical to write_guest, except that
- * data will be copied from guest space to kernel space.
+ * The behaviour of read_guest is identical to read_guest_with_key, except
+ * that the PSW access key is used instead of an explicit argument.
  */
 static inline __must_check
 int read_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
 	       unsigned long len)
 {
-	return access_guest(vcpu, ga, ar, data, len, GACC_FETCH);
+	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
+
+	return read_guest_with_key(vcpu, ga, ar, data, len, access_key);
 }
 
 /**
@@ -287,7 +339,10 @@ static inline __must_check
 int read_guest_instr(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
 		     unsigned long len)
 {
-	return access_guest(vcpu, ga, 0, data, len, GACC_IFETCH);
+	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
+
+	return access_guest_with_key(vcpu, ga, 0, data, len, GACC_IFETCH,
+				     access_key);
 }
 
 /**
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index d07ff646d844..8bd42a20d924 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -331,18 +331,18 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
 
 	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
 
-	/* Make sure that the source is paged-in */
-	rc = guest_translate_address(vcpu, vcpu->run->s.regs.gprs[reg2],
-				     reg2, &srcaddr, GACC_FETCH);
+	/* Ensure that the source is paged-in, no actual access -> no key checking */
+	rc = guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg2],
+					      reg2, &srcaddr, GACC_FETCH, 0);
 	if (rc)
 		return kvm_s390_inject_prog_cond(vcpu, rc);
 	rc = kvm_arch_fault_in_page(vcpu, srcaddr, 0);
 	if (rc != 0)
 		return rc;
 
-	/* Make sure that the destination is paged-in */
-	rc = guest_translate_address(vcpu, vcpu->run->s.regs.gprs[reg1],
-				     reg1, &dstaddr, GACC_STORE);
+	/* Ensure that the destination is paged-in, no actual access -> no key checking */
+	rc = guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg1],
+					      reg1, &dstaddr, GACC_STORE, 0);
 	if (rc)
 		return kvm_s390_inject_prog_cond(vcpu, rc);
 	rc = kvm_arch_fault_in_page(vcpu, dstaddr, 1);
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 2296b1ff1e02..fdbd6c1dc709 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4713,7 +4713,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 	case KVM_S390_MEMOP_LOGICAL_READ:
 		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
 			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
-					    mop->size, GACC_FETCH);
+					    mop->size, GACC_FETCH, 0);
 			break;
 		}
 		r = read_guest(vcpu, mop->gaddr, mop->ar, tmpbuf, mop->size);
@@ -4725,7 +4725,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 	case KVM_S390_MEMOP_LOGICAL_WRITE:
 		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
 			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
-					    mop->size, GACC_STORE);
+					    mop->size, GACC_STORE, 0);
 			break;
 		}
 		if (copy_from_user(tmpbuf, uaddr, mop->size)) {
-- 
2.32.0



* [PATCH v4 03/10] KVM: s390: handle_tprot: Honor storage keys
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 02/10] KVM: s390: Honor storage keys when accessing guest memory Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 04/10] KVM: s390: selftests: Test TEST PROTECTION emulation Janis Schoetterl-Glausch
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Use the access key operand to check for key protection when
translating guest addresses.
Since the translation code checks for access exceptions / error hvas,
we can remove the check here and simplify the control flow.
Keep checking if the memory is read-only even if such memslots are
currently not supported.

handle_tprot was the last user of guest_translate_address,
so remove it.
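
For context, these are the condition codes TEST PROTECTION must yield (as
architecturally defined; the rewritten handle_tprot maps its translation
results onto exactly this set):

  /* TPROT condition codes:
   *   cc 0 - fetching and storing permitted
   *   cc 1 - fetching permitted, storing not permitted
   *   cc 2 - neither fetching nor storing permitted
   *   cc 3 - translation not available
   */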

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/gaccess.c |  9 ------
 arch/s390/kvm/gaccess.h |  3 --
 arch/s390/kvm/priv.c    | 66 ++++++++++++++++++++++-------------------
 3 files changed, 35 insertions(+), 43 deletions(-)

diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 7fca0cff4c12..37838f637707 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -1118,15 +1118,6 @@ int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u
 				   access_key);
 }
 
-int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
-			    unsigned long *gpa, enum gacc_mode mode)
-{
-	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
-
-	return guest_translate_address_with_key(vcpu, gva, ar, gpa, mode,
-						access_key);
-}
-
 /**
  * check_gva_range - test a range of guest virtual addresses for accessibility
  * @vcpu: virtual cpu
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index e5b2f56e7962..c5f2e7311b17 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -190,9 +190,6 @@ int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u
 				     unsigned long *gpa, enum gacc_mode mode,
 				     u8 access_key);
 
-int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
-			    u8 ar, unsigned long *gpa, enum gacc_mode mode);
-
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 		    unsigned long length, enum gacc_mode mode, u8 access_key);
 
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 417154b314a6..30b24c42ef99 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -1443,10 +1443,11 @@ int kvm_s390_handle_eb(struct kvm_vcpu *vcpu)
 
 static int handle_tprot(struct kvm_vcpu *vcpu)
 {
-	u64 address1, address2;
-	unsigned long hva, gpa;
-	int ret = 0, cc = 0;
+	u64 address, operand2;
+	unsigned long gpa;
+	u8 access_key;
 	bool writable;
+	int ret, cc;
 	u8 ar;
 
 	vcpu->stat.instruction_tprot++;
@@ -1454,43 +1455,46 @@ static int handle_tprot(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
 		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
 
-	kvm_s390_get_base_disp_sse(vcpu, &address1, &address2, &ar, NULL);
+	kvm_s390_get_base_disp_sse(vcpu, &address, &operand2, &ar, NULL);
+	access_key = (operand2 & 0xf0) >> 4;
 
-	/* we only handle the Linux memory detection case:
-	 * access key == 0
-	 * everything else goes to userspace. */
-	if (address2 & 0xf0)
-		return -EOPNOTSUPP;
 	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_DAT)
 		ipte_lock(vcpu);
-	ret = guest_translate_address(vcpu, address1, ar, &gpa, GACC_STORE);
-	if (ret == PGM_PROTECTION) {
+
+	ret = guest_translate_address_with_key(vcpu, address, ar, &gpa,
+					       GACC_STORE, access_key);
+	if (ret == 0) {
+		gfn_to_hva_prot(vcpu->kvm, gpa_to_gfn(gpa), &writable);
+	} else if (ret == PGM_PROTECTION) {
+		writable = false;
 		/* Write protected? Try again with read-only... */
-		cc = 1;
-		ret = guest_translate_address(vcpu, address1, ar, &gpa,
-					      GACC_FETCH);
+		ret = guest_translate_address_with_key(vcpu, address, ar, &gpa,
+						       GACC_FETCH, access_key);
 	}
-	if (ret) {
-		if (ret == PGM_ADDRESSING || ret == PGM_TRANSLATION_SPEC) {
-			ret = kvm_s390_inject_program_int(vcpu, ret);
-		} else if (ret > 0) {
-			/* Translation not available */
-			kvm_s390_set_psw_cc(vcpu, 3);
+	if (ret >= 0) {
+		cc = -1;
+
+		/* Fetching permitted; storing permitted */
+		if (ret == 0 && writable)
+			cc = 0;
+		/* Fetching permitted; storing not permitted */
+		else if (ret == 0 && !writable)
+			cc = 1;
+		/* Fetching not permitted; storing not permitted */
+		else if (ret == PGM_PROTECTION)
+			cc = 2;
+		/* Translation not available */
+		else if (ret != PGM_ADDRESSING && ret != PGM_TRANSLATION_SPEC)
+			cc = 3;
+
+		if (cc != -1) {
+			kvm_s390_set_psw_cc(vcpu, cc);
 			ret = 0;
+		} else {
+			ret = kvm_s390_inject_program_int(vcpu, ret);
 		}
-		goto out_unlock;
 	}
 
-	hva = gfn_to_hva_prot(vcpu->kvm, gpa_to_gfn(gpa), &writable);
-	if (kvm_is_error_hva(hva)) {
-		ret = kvm_s390_inject_program_int(vcpu, PGM_ADDRESSING);
-	} else {
-		if (!writable)
-			cc = 1;		/* Write not permitted ==> read-only */
-		kvm_s390_set_psw_cc(vcpu, cc);
-		/* Note: CC2 only occurs for storage keys (not supported yet) */
-	}
-out_unlock:
 	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_DAT)
 		ipte_unlock(vcpu);
 	return ret;
-- 
2.32.0



* [PATCH v4 04/10] KVM: s390: selftests: Test TEST PROTECTION emulation
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (2 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 03/10] KVM: s390: handle_tprot: Honor storage keys Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 05/10] KVM: s390: Add optional storage key checking to MEMOP IOCTL Janis Schoetterl-Glausch
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Test the emulation of TEST PROTECTION in the presence of storage keys.
Emulation only occurs under certain conditions, one of which is the host
page being protected.
Trigger this by protecting the test pages via mprotect.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 tools/testing/selftests/kvm/.gitignore    |   1 +
 tools/testing/selftests/kvm/Makefile      |   1 +
 tools/testing/selftests/kvm/s390x/tprot.c | 227 ++++++++++++++++++++++
 3 files changed, 229 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/s390x/tprot.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index dce7de7755e6..7903580a48ac 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -8,6 +8,7 @@
 /s390x/memop
 /s390x/resets
 /s390x/sync_regs_test
+/s390x/tprot
 /x86_64/amx_test
 /x86_64/cpuid_test
 /x86_64/cr4_cpuid_sync_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0e4926bc9a58..086f490e808d 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -121,6 +121,7 @@ TEST_GEN_PROGS_aarch64 += kvm_binary_stats_test
 TEST_GEN_PROGS_s390x = s390x/memop
 TEST_GEN_PROGS_s390x += s390x/resets
 TEST_GEN_PROGS_s390x += s390x/sync_regs_test
+TEST_GEN_PROGS_s390x += s390x/tprot
 TEST_GEN_PROGS_s390x += demand_paging_test
 TEST_GEN_PROGS_s390x += dirty_log_test
 TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/s390x/tprot.c b/tools/testing/selftests/kvm/s390x/tprot.c
new file mode 100644
index 000000000000..c097b9db495e
--- /dev/null
+++ b/tools/testing/selftests/kvm/s390x/tprot.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Test TEST PROTECTION emulation.
+ *
+ * Copyright IBM Corp. 2021
+ */
+
+#include <sys/mman.h>
+#include "test_util.h"
+#include "kvm_util.h"
+
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
+#define CR0_FETCH_PROTECTION_OVERRIDE	(1UL << (63 - 38))
+#define CR0_STORAGE_PROTECTION_OVERRIDE	(1UL << (63 - 39))
+
+#define VCPU_ID 1
+
+static __aligned(PAGE_SIZE) uint8_t pages[2][PAGE_SIZE];
+static uint8_t *const page_store_prot = pages[0];
+static uint8_t *const page_fetch_prot = pages[1];
+
+/* A nonzero return value indicates that the address is not mapped */
+static int set_storage_key(void *addr, uint8_t key)
+{
+	int not_mapped = 0;
+
+	asm volatile (
+		       "lra	%[addr], 0(0,%[addr])\n"
+		"	jz	0f\n"
+		"	llill	%[not_mapped],1\n"
+		"	j	1f\n"
+		"0:	sske	%[key], %[addr]\n"
+		"1:"
+		: [addr] "+&a" (addr), [not_mapped] "+r" (not_mapped)
+		: [key] "r" (key)
+		: "cc"
+	);
+	return -not_mapped;
+}
+
+enum permission {
+	READ_WRITE = 0,
+	READ = 1,
+	RW_PROTECTED = 2,
+	TRANSL_UNAVAIL = 3,
+};
+
+static enum permission test_protection(void *addr, uint8_t key)
+{
+	uint64_t mask;
+
+	asm volatile (
+		       "tprot	%[addr], 0(%[key])\n"
+		"	ipm	%[mask]\n"
+		: [mask] "=r" (mask)
+		: [addr] "Q" (*(char *)addr),
+		  [key] "a" (key)
+		: "cc"
+	);
+
+	return (enum permission)(mask >> 28);
+}
+
+enum stage {
+	STAGE_END,
+	STAGE_INIT_SIMPLE,
+	TEST_SIMPLE,
+	STAGE_INIT_FETCH_PROT_OVERRIDE,
+	TEST_FETCH_PROT_OVERRIDE,
+	TEST_STORAGE_PROT_OVERRIDE,
+};
+
+struct test {
+	enum stage stage;
+	void *addr;
+	uint8_t key;
+	enum permission expected;
+} tests[] = {
+	/*
+	 * We perform each test in the array by executing TEST PROTECTION on
+	 * the specified addr with the specified key and checking if the returned
+	 * permissions match the expected value.
+	 * Both guest and host cooperate to set up the required test conditions.
+	 * A central condition is that the page targeted by addr has to be DAT
+	 * protected in the host mappings, in order for KVM to emulate the
+	 * TEST PROTECTION instruction.
+	 * Since the page tables are shared, the host uses mprotect to achieve
+	 * this.
+	 *
+	 * Tests resulting in RW_PROTECTED/TRANSL_UNAVAIL will be interpreted
+	 * by SIE, not KVM, but there is no harm in testing them also.
+	 * See Enhanced Suppression-on-Protection Facilities in the
+	 * Interpretive-Execution Mode.
+	 */
+	/*
+	 * guest: set storage key of page_store_prot to 1
+	 *        storage key of page_fetch_prot to 9 and enable
+	 *        protection for it
+	 * STAGE_INIT_SIMPLE
+	 * host: write protect both via mprotect
+	 */
+	/* access key 0 matches any storage key -> RW */
+	{ TEST_SIMPLE, page_store_prot, 0x00, READ_WRITE },
+	/* access key matches storage key -> RW */
+	{ TEST_SIMPLE, page_store_prot, 0x10, READ_WRITE },
+	/* mismatched keys, but no fetch protection -> RO */
+	{ TEST_SIMPLE, page_store_prot, 0x20, READ },
+	/* access key 0 matches any storage key -> RW */
+	{ TEST_SIMPLE, page_fetch_prot, 0x00, READ_WRITE },
+	/* access key matches storage key -> RW */
+	{ TEST_SIMPLE, page_fetch_prot, 0x90, READ_WRITE },
+	/* mismatched keys, fetch protection -> inaccessible */
+	{ TEST_SIMPLE, page_fetch_prot, 0x10, RW_PROTECTED },
+	/* page 0 not mapped yet -> translation not available */
+	{ TEST_SIMPLE, (void *)0x00, 0x10, TRANSL_UNAVAIL },
+	/*
+	 * host: try to map page 0
+	 * guest: set storage key of page 0 to 9 and enable fetch protection
+	 * STAGE_INIT_FETCH_PROT_OVERRIDE
+	 * host: write protect page 0
+	 *       enable fetch protection override
+	 */
+	/* mismatched keys, fetch protection, but override applies -> RO */
+	{ TEST_FETCH_PROT_OVERRIDE, (void *)0x00, 0x10, READ },
+	/* mismatched keys, fetch protection, override applies to 0-2048 only -> inaccessible */
+	{ TEST_FETCH_PROT_OVERRIDE, (void *)2049, 0x10, RW_PROTECTED },
+	/*
+	 * host: enable storage protection override
+	 */
+	/* mismatched keys, but override applies (storage key 9) -> RW */
+	{ TEST_STORAGE_PROT_OVERRIDE, page_fetch_prot, 0x10, READ_WRITE },
+	/* mismatched keys, no fetch protection, override doesn't apply -> RO */
+	{ TEST_STORAGE_PROT_OVERRIDE, page_store_prot, 0x20, READ },
+	/* mismatched keys, but override applies (storage key 9) -> RW */
+	{ TEST_STORAGE_PROT_OVERRIDE, (void *)2049, 0x10, READ_WRITE },
+	/* end marker */
+	{ STAGE_END, 0, 0, 0 },
+};
+
+static enum stage perform_next_stage(int *i, bool mapped_0)
+{
+	enum stage stage = tests[*i].stage;
+	enum permission result;
+	bool skip;
+
+	for (; tests[*i].stage == stage; (*i)++) {
+		/*
+		 * Some fetch protection override tests require that page 0
+		 * be mapped; however, when the host tries to map that page via
+		 * vm_vaddr_alloc, it may happen that some other page gets mapped
+		 * instead.
+		 * In order to skip these tests, we detect this inside the guest.
+		 */
+		skip = tests[*i].addr < (void *)4096 &&
+		       tests[*i].expected != TRANSL_UNAVAIL &&
+		       !mapped_0;
+		if (!skip) {
+			result = test_protection(tests[*i].addr, tests[*i].key);
+			GUEST_ASSERT_2(result == tests[*i].expected, *i, result);
+		}
+	}
+	return stage;
+}
+
+static void guest_code(void)
+{
+	bool mapped_0;
+	int i = 0;
+
+	GUEST_ASSERT_EQ(set_storage_key(page_store_prot, 0x10), 0);
+	GUEST_ASSERT_EQ(set_storage_key(page_fetch_prot, 0x98), 0);
+	GUEST_SYNC(STAGE_INIT_SIMPLE);
+	GUEST_SYNC(perform_next_stage(&i, false));
+
+	/* Fetch-protection override */
+	mapped_0 = !set_storage_key((void *)0, 0x98);
+	GUEST_SYNC(STAGE_INIT_FETCH_PROT_OVERRIDE);
+	GUEST_SYNC(perform_next_stage(&i, mapped_0));
+
+	/* Storage-protection override */
+	GUEST_SYNC(perform_next_stage(&i, mapped_0));
+}
+
+#define HOST_SYNC(vmp, stage)							\
+({										\
+	struct kvm_vm *__vm = (vmp);						\
+	struct ucall uc;							\
+	int __stage = (stage);							\
+										\
+	vcpu_run(__vm, VCPU_ID);						\
+	get_ucall(__vm, VCPU_ID, &uc);						\
+	if (uc.cmd == UCALL_ABORT) {						\
+		TEST_FAIL("line %lu: %s, hints: %lu, %lu", uc.args[1],		\
+			  (const char *)uc.args[0], uc.args[2], uc.args[3]);	\
+	}									\
+	ASSERT_EQ(uc.cmd, UCALL_SYNC);						\
+	ASSERT_EQ(uc.args[1], __stage);						\
+})
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	vm_vaddr_t guest_0_page;
+
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+	run = vcpu_state(vm, VCPU_ID);
+
+	HOST_SYNC(vm, STAGE_INIT_SIMPLE);
+	mprotect(addr_gva2hva(vm, (vm_vaddr_t)pages), PAGE_SIZE * 2, PROT_READ);
+	HOST_SYNC(vm, TEST_SIMPLE);
+
+	guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
+	if (guest_0_page != 0)
+		print_skip("Did not allocate page at 0 for fetch protection override tests");
+	HOST_SYNC(vm, STAGE_INIT_FETCH_PROT_OVERRIDE);
+	if (guest_0_page == 0)
+		mprotect(addr_gva2hva(vm, (vm_vaddr_t)0), PAGE_SIZE, PROT_READ);
+	run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+	run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(vm, TEST_FETCH_PROT_OVERRIDE);
+
+	run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+	run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(vm, TEST_STORAGE_PROT_OVERRIDE);
+}
-- 
2.32.0



* [PATCH v4 05/10] KVM: s390: Add optional storage key checking to MEMOP IOCTL
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (3 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 04/10] KVM: s390: selftests: Test TEST PROTECTION emulation Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access Janis Schoetterl-Glausch
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

User space needs a mechanism to perform key-checked accesses when
emulating instructions.

The key can be passed as an additional argument.
Having an additional argument is flexible, as user space can
pass the guest PSW's key, in order to make an access the same way the
CPU would, or pass another key if necessary.
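
For example (a sketch, not part of the patch): to make an access exactly as
the guest would, user space can take the key from the guest PSW, whose key
field occupies bits 8-11 of the PSW mask, and pass it in the new "key" field.
Here "run" is assumed to be the vcpu's mmap'ed struct kvm_run and "ksmo" a
struct kvm_s390_mem_op with gaddr/buf/size/op already filled in:

  /* PSW key = bits 8-11 of the PSW mask */
  __u8 psw_key = (run->psw_mask >> 52) & 0x0f;

  ksmo.flags |= KVM_S390_MEMOP_F_SKEY_PROTECTION;
  ksmo.key = psw_key;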

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 31 +++++++++++++++++++++----------
 include/uapi/linux/kvm.h |  6 +++++-
 2 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index fdbd6c1dc709..c31b40abfa23 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2359,6 +2359,11 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 	return r;
 }
 
+static bool access_key_invalid(u8 access_key)
+{
+	return access_key > 0xf;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
 		       unsigned int ioctl, unsigned long arg)
 {
@@ -4692,17 +4697,21 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 	void *tmpbuf = NULL;
 	int r = 0;
 	const u64 supported_flags = KVM_S390_MEMOP_F_INJECT_EXCEPTION
-				    | KVM_S390_MEMOP_F_CHECK_ONLY;
+				    | KVM_S390_MEMOP_F_CHECK_ONLY
+				    | KVM_S390_MEMOP_F_SKEY_PROTECTION;
 
 	if (mop->flags & ~supported_flags || mop->ar >= NUM_ACRS || !mop->size)
 		return -EINVAL;
-
 	if (mop->size > MEM_OP_MAX_SIZE)
 		return -E2BIG;
-
 	if (kvm_s390_pv_cpu_is_protected(vcpu))
 		return -EINVAL;
-
+	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
+		if (access_key_invalid(mop->key))
+			return -EINVAL;
+	} else {
+		mop->key = 0;
+	}
 	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
 		tmpbuf = vmalloc(mop->size);
 		if (!tmpbuf)
@@ -4712,11 +4721,12 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 	switch (mop->op) {
 	case KVM_S390_MEMOP_LOGICAL_READ:
 		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
-					    mop->size, GACC_FETCH, 0);
+			r = check_gva_range(vcpu, mop->gaddr, mop->ar, mop->size,
+					    GACC_FETCH, mop->key);
 			break;
 		}
-		r = read_guest(vcpu, mop->gaddr, mop->ar, tmpbuf, mop->size);
+		r = read_guest_with_key(vcpu, mop->gaddr, mop->ar, tmpbuf,
+					mop->size, mop->key);
 		if (r == 0) {
 			if (copy_to_user(uaddr, tmpbuf, mop->size))
 				r = -EFAULT;
@@ -4724,15 +4734,16 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 		break;
 	case KVM_S390_MEMOP_LOGICAL_WRITE:
 		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
-			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
-					    mop->size, GACC_STORE, 0);
+			r = check_gva_range(vcpu, mop->gaddr, mop->ar, mop->size,
+					    GACC_STORE, mop->key);
 			break;
 		}
 		if (copy_from_user(tmpbuf, uaddr, mop->size)) {
 			r = -EFAULT;
 			break;
 		}
-		r = write_guest(vcpu, mop->gaddr, mop->ar, tmpbuf, mop->size);
+		r = write_guest_with_key(vcpu, mop->gaddr, mop->ar, tmpbuf,
+					 mop->size, mop->key);
 		break;
 	}
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5191b57e1562..4566f429db2c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -562,7 +562,10 @@ struct kvm_s390_mem_op {
 	__u32 op;		/* type of operation */
 	__u64 buf;		/* buffer in userspace */
 	union {
-		__u8 ar;	/* the access register number */
+		struct {
+			__u8 ar;	/* the access register number */
+			__u8 key;	/* access key, ignored if flag unset */
+		};
 		__u32 sida_offset; /* offset into the sida */
 		__u8 reserved[32]; /* should be set to 0 */
 	};
@@ -575,6 +578,7 @@ struct kvm_s390_mem_op {
 /* flags for kvm_s390_mem_op->flags */
 #define KVM_S390_MEMOP_F_CHECK_ONLY		(1ULL << 0)
 #define KVM_S390_MEMOP_F_INJECT_EXCEPTION	(1ULL << 1)
+#define KVM_S390_MEMOP_F_SKEY_PROTECTION	(1ULL << 2)
 
 /* for KVM_INTERRUPT */
 struct kvm_interrupt {
-- 
2.32.0



* [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (4 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 05/10] KVM: s390: Add optional storage key checking to MEMOP IOCTL Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-14 17:58   ` Claudio Imbrenda
  2022-02-21 16:32   ` [PATCH] KVM: s390: Add missing vm MEM_OP size check Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 07/10] KVM: s390: Rename existing vcpu memop functions Janis Schoetterl-Glausch
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Channel I/O honors storage keys and is performed on absolute memory.
For I/O emulation user space therefore needs to be able to do key
checked accesses.
The vm IOCTL supports read/write accesses, as well as checking
if an access would succeed.
Unlike an approach relying on KVM_S390_GET_SKEYS for key checking,
the vm IOCTL performs the check in lockstep with the read or write,
by ultimately mapping the access to move instructions that support
key protection checking with a supplied key.
Fetch and storage protection override are not applicable to absolute
accesses and so are not applied as they are when using the vcpu memop.
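
As a sketch (not part of this patch; vm_fd and the variables are
placeholders), emulating a key checked channel I/O store might look
like this:

	struct kvm_s390_mem_op ksmo = {
		.gaddr = absolute_addr,	/* guest absolute address */
		.size  = len,
		.op    = KVM_S390_MEMOP_ABSOLUTE_WRITE,
		.buf   = (__u64)(uintptr_t)data,
		.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION,
		.key   = subchannel_key,	/* key of the I/O operation */
	};

	/* issued on the vm fd; returns PGM_PROTECTION (4) on key mismatch */
	rc = ioctl(vm_fd, KVM_S390_MEM_OP, &ksmo);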

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
---
 arch/s390/kvm/gaccess.c  | 72 +++++++++++++++++++++++++++++++++++
 arch/s390/kvm/gaccess.h  |  6 +++
 arch/s390/kvm/kvm-s390.c | 81 ++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h |  2 +
 4 files changed, 161 insertions(+)

diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 37838f637707..d53a183c2005 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -795,6 +795,35 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
 	return 1;
 }
 
+static int vm_check_access_key(struct kvm *kvm, u8 access_key,
+			       enum gacc_mode mode, gpa_t gpa)
+{
+	u8 storage_key, access_control;
+	bool fetch_protected;
+	unsigned long hva;
+	int r;
+
+	if (access_key == 0)
+		return 0;
+
+	hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
+	if (kvm_is_error_hva(hva))
+		return PGM_ADDRESSING;
+
+	mmap_read_lock(current->mm);
+	r = get_guest_storage_key(current->mm, hva, &storage_key);
+	mmap_read_unlock(current->mm);
+	if (r)
+		return r;
+	access_control = FIELD_GET(_PAGE_ACC_BITS, storage_key);
+	if (access_control == access_key)
+		return 0;
+	fetch_protected = storage_key & _PAGE_FP_BIT;
+	if ((mode == GACC_FETCH || mode == GACC_IFETCH) && !fetch_protected)
+		return 0;
+	return PGM_PROTECTION;
+}
+
 static bool fetch_prot_override_applicable(struct kvm_vcpu *vcpu, enum gacc_mode mode,
 					   union asce asce)
 {
@@ -994,6 +1023,26 @@ access_guest_page_with_key(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
 	return 0;
 }
 
+int access_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, void *data,
+			      unsigned long len, enum gacc_mode mode, u8 access_key)
+{
+	int offset = offset_in_page(gpa);
+	int fragment_len;
+	int rc;
+
+	while (min(PAGE_SIZE - offset, len) > 0) {
+		fragment_len = min(PAGE_SIZE - offset, len);
+		rc = access_guest_page_with_key(kvm, mode, gpa, data, fragment_len, access_key);
+		if (rc)
+			return rc;
+		offset = 0;
+		len -= fragment_len;
+		data += fragment_len;
+		gpa += fragment_len;
+	}
+	return 0;
+}
+
 int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 			  void *data, unsigned long len, enum gacc_mode mode,
 			  u8 access_key)
@@ -1144,6 +1193,29 @@ int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 	return rc;
 }
 
+/**
+ * check_gpa_range - test a range of guest physical addresses for accessibility
+ * @kvm: virtual machine instance
+ * @gpa: guest physical address
+ * @length: length of test range
+ * @mode: access mode to test, relevant for storage keys
+ * @access_key: access key to match the storage keys with
+ */
+int check_gpa_range(struct kvm *kvm, unsigned long gpa, unsigned long length,
+		    enum gacc_mode mode, u8 access_key)
+{
+	unsigned int fragment_len;
+	int rc = 0;
+
+	while (length && !rc) {
+		fragment_len = min(PAGE_SIZE - offset_in_page(gpa), length);
+		rc = vm_check_access_key(kvm, access_key, mode, gpa);
+		length -= fragment_len;
+		gpa += fragment_len;
+	}
+	return rc;
+}
+
 /**
  * kvm_s390_check_low_addr_prot_real - check for low-address protection
  * @vcpu: virtual cpu
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index c5f2e7311b17..1124ff282012 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -193,6 +193,12 @@ int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 		    unsigned long length, enum gacc_mode mode, u8 access_key);
 
+int check_gpa_range(struct kvm *kvm, unsigned long gpa, unsigned long length,
+		    enum gacc_mode mode, u8 access_key);
+
+int access_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, void *data,
+			      unsigned long len, enum gacc_mode mode, u8 access_key);
+
 int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
 			  void *data, unsigned long len, enum gacc_mode mode,
 			  u8 access_key);
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c31b40abfa23..36bc73b5f5de 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2364,6 +2364,78 @@ static bool access_key_invalid(u8 access_key)
 	return access_key > 0xf;
 }
 
+static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
+{
+	void __user *uaddr = (void __user *)mop->buf;
+	u64 supported_flags;
+	void *tmpbuf = NULL;
+	int r, srcu_idx;
+
+	supported_flags = KVM_S390_MEMOP_F_SKEY_PROTECTION
+			  | KVM_S390_MEMOP_F_CHECK_ONLY;
+	if (mop->flags & ~supported_flags)
+		return -EINVAL;
+	if (mop->size > MEM_OP_MAX_SIZE)
+		return -E2BIG;
+	if (kvm_s390_pv_is_protected(kvm))
+		return -EINVAL;
+	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
+		if (access_key_invalid(mop->key))
+			return -EINVAL;
+	} else {
+		mop->key = 0;
+	}
+	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
+		tmpbuf = vmalloc(mop->size);
+		if (!tmpbuf)
+			return -ENOMEM;
+	}
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+
+	if (kvm_is_error_gpa(kvm, mop->gaddr)) {
+		r = PGM_ADDRESSING;
+		goto out_unlock;
+	}
+
+	switch (mop->op) {
+	case KVM_S390_MEMOP_ABSOLUTE_READ: {
+		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
+			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_FETCH, mop->key);
+		} else {
+			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
+						      mop->size, GACC_FETCH, mop->key);
+			if (r == 0) {
+				if (copy_to_user(uaddr, tmpbuf, mop->size))
+					r = -EFAULT;
+			}
+		}
+		break;
+	}
+	case KVM_S390_MEMOP_ABSOLUTE_WRITE: {
+		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
+			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_STORE, mop->key);
+		} else {
+			if (copy_from_user(tmpbuf, uaddr, mop->size)) {
+				r = -EFAULT;
+				break;
+			}
+			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
+						      mop->size, GACC_STORE, mop->key);
+		}
+		break;
+	}
+	default:
+		r = -EINVAL;
+	}
+
+out_unlock:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	vfree(tmpbuf);
+	return r;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
 		       unsigned int ioctl, unsigned long arg)
 {
@@ -2488,6 +2560,15 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		}
 		break;
 	}
+	case KVM_S390_MEM_OP: {
+		struct kvm_s390_mem_op mem_op;
+
+		if (copy_from_user(&mem_op, argp, sizeof(mem_op)) == 0)
+			r = kvm_s390_vm_mem_op(kvm, &mem_op);
+		else
+			r = -EFAULT;
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4566f429db2c..4bc7623def87 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -575,6 +575,8 @@ struct kvm_s390_mem_op {
 #define KVM_S390_MEMOP_LOGICAL_WRITE	1
 #define KVM_S390_MEMOP_SIDA_READ	2
 #define KVM_S390_MEMOP_SIDA_WRITE	3
+#define KVM_S390_MEMOP_ABSOLUTE_READ	4
+#define KVM_S390_MEMOP_ABSOLUTE_WRITE	5
 /* flags for kvm_s390_mem_op->flags */
 #define KVM_S390_MEMOP_F_CHECK_ONLY		(1ULL << 0)
 #define KVM_S390_MEMOP_F_INJECT_EXCEPTION	(1ULL << 1)
-- 
2.32.0



* [PATCH v4 07/10] KVM: s390: Rename existing vcpu memop functions
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (5 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL Janis Schoetterl-Glausch
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Make the naming consistent, now that we also have a vm ioctl.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 36bc73b5f5de..773bccdd446c 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4741,8 +4741,8 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
-				   struct kvm_s390_mem_op *mop)
+static long kvm_s390_vcpu_sida_op(struct kvm_vcpu *vcpu,
+				  struct kvm_s390_mem_op *mop)
 {
 	void __user *uaddr = (void __user *)mop->buf;
 	int r = 0;
@@ -4771,8 +4771,9 @@ static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
 	}
 	return r;
 }
-static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
-				  struct kvm_s390_mem_op *mop)
+
+static long kvm_s390_vcpu_mem_op(struct kvm_vcpu *vcpu,
+				 struct kvm_s390_mem_op *mop)
 {
 	void __user *uaddr = (void __user *)mop->buf;
 	void *tmpbuf = NULL;
@@ -4835,8 +4836,8 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static long kvm_s390_guest_memsida_op(struct kvm_vcpu *vcpu,
-				      struct kvm_s390_mem_op *mop)
+static long kvm_s390_vcpu_memsida_op(struct kvm_vcpu *vcpu,
+				     struct kvm_s390_mem_op *mop)
 {
 	int r, srcu_idx;
 
@@ -4845,12 +4846,12 @@ static long kvm_s390_guest_memsida_op(struct kvm_vcpu *vcpu,
 	switch (mop->op) {
 	case KVM_S390_MEMOP_LOGICAL_READ:
 	case KVM_S390_MEMOP_LOGICAL_WRITE:
-		r = kvm_s390_guest_mem_op(vcpu, mop);
+		r = kvm_s390_vcpu_mem_op(vcpu, mop);
 		break;
 	case KVM_S390_MEMOP_SIDA_READ:
 	case KVM_S390_MEMOP_SIDA_WRITE:
 		/* we are locked against sida going away by the vcpu->mutex */
-		r = kvm_s390_guest_sida_op(vcpu, mop);
+		r = kvm_s390_vcpu_sida_op(vcpu, mop);
 		break;
 	default:
 		r = -EINVAL;
@@ -5013,7 +5014,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		struct kvm_s390_mem_op mem_op;
 
 		if (copy_from_user(&mem_op, argp, sizeof(mem_op)) == 0)
-			r = kvm_s390_guest_memsida_op(vcpu, &mem_op);
+			r = kvm_s390_vcpu_memsida_op(vcpu, &mem_op);
 		else
 			r = -EFAULT;
 		break;
-- 
2.32.0



* [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (6 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 07/10] KVM: s390: Rename existing vcpu memop functions Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-14 14:29   ` Christian Borntraeger
  2022-02-14 17:47   ` Claudio Imbrenda
  2022-02-11 18:22 ` [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl Janis Schoetterl-Glausch
                   ` (2 subsequent siblings)
  10 siblings, 2 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Availability of the KVM_CAP_S390_MEM_OP_EXTENSION capability signals that:
* The vcpu MEM_OP IOCTL supports storage key checking.
* The vm MEM_OP IOCTL exists.
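
User space might probe for it like this (a sketch; kvm_fd being an
open /dev/kvm file descriptor):

	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_MEM_OP_EXTENSION) > 0) {
		/* storage key checking is available for the vcpu MEM_OP,
		 * and KVM_S390_MEM_OP can be issued on the vm fd
		 */
	}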

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 1 +
 include/uapi/linux/kvm.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 773bccdd446c..c2c26c2aad64 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -564,6 +564,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_S390_VCPU_RESETS:
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_S390_DIAG318:
+	case KVM_CAP_S390_MEM_OP_EXTENSION:
 		r = 1;
 		break;
 	case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 4bc7623def87..08756eeea065 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1140,6 +1140,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_GPA_BITS 207
 #define KVM_CAP_XSAVE2 208
 #define KVM_CAP_SYS_ATTRIBUTES 209
+#define KVM_CAP_S390_MEM_OP_EXTENSION 210
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.32.0



* [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (7 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-14 17:52   ` Claudio Imbrenda
  2022-02-21 14:36   ` [PATCH] KVM: s390: Clarify key argument for MEM_OP in api docs Janis Schoetterl-Glausch
  2022-02-11 18:22 ` [PATCH v4 10/10] KVM: s390: selftests: Test memops with storage keys Janis Schoetterl-Glausch
  2022-02-14 14:52 ` [PATCH v4 00/10] KVM: s390: Do " Christian Borntraeger
  10 siblings, 2 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Document all currently existing operations and flags, and explain
under which circumstances they are available. Document the recently
introduced absolute operations and the storage key protection flag,
as well as the existing SIDA operations.
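
As an illustration of the documented semantics (a sketch; the fd and
variables are placeholders), checking whether a key checked store
would succeed, without modifying guest memory, could look like this:

	struct kvm_s390_mem_op ksmo = {
		.gaddr = gaddr,
		.size  = len,
		.op    = KVM_S390_MEMOP_ABSOLUTE_WRITE,
		.buf   = 0,	/* unused with CHECK_ONLY, may be NULL */
		.flags = KVM_S390_MEMOP_F_CHECK_ONLY
			 | KVM_S390_MEMOP_F_SKEY_PROTECTION,
		.key   = access_key,
	};

	rc = ioctl(vm_fd, KVM_S390_MEM_OP, &ksmo);	/* > 0: exception type */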

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
---
 Documentation/virt/kvm/api.rst | 112 ++++++++++++++++++++++++++-------
 include/uapi/linux/kvm.h       |   2 +-
 2 files changed, 91 insertions(+), 23 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a4267104db50..48f23bb80d7f 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -3683,15 +3683,17 @@ The fields in each entry are defined as follows:
 4.89 KVM_S390_MEM_OP
 --------------------
 
-:Capability: KVM_CAP_S390_MEM_OP
+:Capability: KVM_CAP_S390_MEM_OP, KVM_CAP_S390_PROTECTED, KVM_CAP_S390_MEM_OP_EXTENSION
 :Architectures: s390
-:Type: vcpu ioctl
+:Type: vm ioctl, vcpu ioctl
 :Parameters: struct kvm_s390_mem_op (in)
 :Returns: = 0 on success,
           < 0 on generic error (e.g. -EFAULT or -ENOMEM),
           > 0 if an exception occurred while walking the page tables
 
-Read or write data from/to the logical (virtual) memory of a VCPU.
+Read or write data from/to the VM's memory.
+The KVM_CAP_S390_MEM_OP_EXTENSION capability specifies what functionality is
+supported.
 
 Parameters are specified via the following structure::
 
@@ -3701,33 +3703,99 @@ Parameters are specified via the following structure::
 	__u32 size;		/* amount of bytes */
 	__u32 op;		/* type of operation */
 	__u64 buf;		/* buffer in userspace */
-	__u8 ar;		/* the access register number */
-	__u8 reserved[31];	/* should be set to 0 */
+	union {
+		struct {
+			__u8 ar;	/* the access register number */
+			__u8 key;	/* access key, ignored if flag unset */
+		};
+		__u32 sida_offset; /* offset into the sida */
+		__u8 reserved[32]; /* ignored */
+	};
   };
 
-The type of operation is specified in the "op" field. It is either
-KVM_S390_MEMOP_LOGICAL_READ for reading from logical memory space or
-KVM_S390_MEMOP_LOGICAL_WRITE for writing to logical memory space. The
-KVM_S390_MEMOP_F_CHECK_ONLY flag can be set in the "flags" field to check
-whether the corresponding memory access would create an access exception
-(without touching the data in the memory at the destination). In case an
-access exception occurred while walking the MMU tables of the guest, the
-ioctl returns a positive error number to indicate the type of exception.
-This exception is also raised directly at the corresponding VCPU if the
-flag KVM_S390_MEMOP_F_INJECT_EXCEPTION is set in the "flags" field.
-
 The start address of the memory region has to be specified in the "gaddr"
 field, and the length of the region in the "size" field (which must not
 be 0). The maximum value for "size" can be obtained by checking the
 KVM_CAP_S390_MEM_OP capability. "buf" is the buffer supplied by the
 userspace application where the read data should be written to for
-KVM_S390_MEMOP_LOGICAL_READ, or where the data that should be written is
-stored for a KVM_S390_MEMOP_LOGICAL_WRITE. When KVM_S390_MEMOP_F_CHECK_ONLY
-is specified, "buf" is unused and can be NULL. "ar" designates the access
-register number to be used; the valid range is 0..15.
+a read access, or where the data that should be written is stored for
+a write access.  The "reserved" field is meant for future extensions.
+Reserved and unused values are ignored. Future extensions that add members must
+introduce new flags.
+
+The type of operation is specified in the "op" field. Flags modifying
+their behavior can be set in the "flags" field. Undefined flag bits must
+be set to 0.
+
+Possible operations are:
+  * ``KVM_S390_MEMOP_LOGICAL_READ``
+  * ``KVM_S390_MEMOP_LOGICAL_WRITE``
+  * ``KVM_S390_MEMOP_ABSOLUTE_READ``
+  * ``KVM_S390_MEMOP_ABSOLUTE_WRITE``
+  * ``KVM_S390_MEMOP_SIDA_READ``
+  * ``KVM_S390_MEMOP_SIDA_WRITE``
+
+Logical read/write:
+^^^^^^^^^^^^^^^^^^^
+
+Access logical memory, i.e. translate the given guest address to an absolute
+address given the state of the VCPU and use the absolute address as target of
+the access. "ar" designates the access register number to be used; the valid
+range is 0..15.
+Logical accesses are permitted for the VCPU ioctl only.
+Logical accesses are permitted for non-protected guests only.
+
+Supported flags:
+  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
+  * ``KVM_S390_MEMOP_F_INJECT_EXCEPTION``
+  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
+
+The KVM_S390_MEMOP_F_CHECK_ONLY flag can be set to check whether the
+corresponding memory access would cause an access exception; however,
+no actual access to the data in memory at the destination is performed.
+In this case, "buf" is unused and can be NULL.
+
+In case an access exception occurred during the access (or would occur
+in case of KVM_S390_MEMOP_F_CHECK_ONLY), the ioctl returns a positive
+error number indicating the type of exception. This exception is also
+raised directly at the corresponding VCPU if the flag
+KVM_S390_MEMOP_F_INJECT_EXCEPTION is set.
+
+If the KVM_S390_MEMOP_F_SKEY_PROTECTION flag is set, storage key
+protection is also in effect and may cause exceptions if accesses are
+prohibited given the access key passed in "key".
+KVM_S390_MEMOP_F_SKEY_PROTECTION is available if KVM_CAP_S390_MEM_OP_EXTENSION
+is > 0.
+
+Absolute read/write:
+^^^^^^^^^^^^^^^^^^^^
+
+Access absolute memory. This operation is intended to be used with the
+KVM_S390_MEMOP_F_SKEY_PROTECTION flag, to allow accessing memory and performing
+the checks required for storage key protection as one operation (as opposed to
+user space getting the storage keys, performing the checks, and accessing
+memory thereafter, which could lead to a delay between check and access).
+Absolute accesses are permitted for the VM ioctl if KVM_CAP_S390_MEM_OP_EXTENSION
+is > 0.
+Currently absolute accesses are not permitted for VCPU ioctls.
+Absolute accesses are permitted for non-protected guests only.
+
+Supported flags:
+  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
+  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
+
+The semantics of the flags are as for logical accesses.
+
+SIDA read/write:
+^^^^^^^^^^^^^^^^
+
+Access the secure instruction data area which contains memory operands necessary
+for instruction emulation for protected guests.
+SIDA accesses are available if the KVM_CAP_S390_PROTECTED capability is available.
+SIDA accesses are permitted for the VCPU ioctl only.
+SIDA accesses are permitted for protected guests only.
 
-The "reserved" field is meant for future extensions. It is not used by
-KVM with the currently defined set of flags.
+No flags are supported.
 
 4.90 KVM_S390_GET_SKEYS
 -----------------------
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 08756eeea065..dbc550bbd9fa 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -567,7 +567,7 @@ struct kvm_s390_mem_op {
 			__u8 key;	/* access key, ignored if flag unset */
 		};
 		__u32 sida_offset; /* offset into the sida */
-		__u8 reserved[32]; /* should be set to 0 */
+		__u8 reserved[32]; /* ignored */
 	};
 };
 /* types for kvm_s390_mem_op->op */
-- 
2.32.0



* [PATCH v4 10/10] KVM: s390: selftests: Test memops with storage keys
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (8 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl Janis Schoetterl-Glausch
@ 2022-02-11 18:22 ` Janis Schoetterl-Glausch
  2022-02-17 14:53   ` Janis Schoetterl-Glausch
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
  2022-02-14 14:52 ` [PATCH v4 00/10] KVM: s390: Do " Christian Borntraeger
  10 siblings, 2 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-11 18:22 UTC (permalink / raw)
  To: Christian Borntraeger, Heiko Carstens, Janosch Frank
  Cc: Janis Schoetterl-Glausch, Alexander Gordeev, Claudio Imbrenda,
	David Hildenbrand, Jonathan Corbet, kvm, linux-doc, linux-kernel,
	linux-s390, Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Test vm and vcpu memops with storage keys, both successful accesses
and various exception conditions.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 558 +++++++++++++++++++---
 1 file changed, 495 insertions(+), 63 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index 9f49ead380ab..ac08fd5aa746 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -13,28 +13,304 @@
 #include "test_util.h"
 #include "kvm_util.h"
 
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1 << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+#define CR0_FETCH_PROTECTION_OVERRIDE	(1UL << (63 - 38))
+#define CR0_STORAGE_PROTECTION_OVERRIDE	(1UL << (63 - 39))
+
 #define VCPU_ID 1
 
+const uint64_t last_page_addr = UINT64_MAX - PAGE_SIZE + 1;
+
 static uint8_t mem1[65536];
 static uint8_t mem2[65536];
 
+static void set_storage_key_range(void *addr, size_t len, u8 key)
+{
+	uintptr_t _addr, abs, i;
+
+	_addr = (uintptr_t)addr;
+	for (i = _addr & PAGE_MASK; i < _addr + len; i += PAGE_SIZE) {
+		abs = i;
+		asm volatile (
+			       "lra	%[abs], 0(0,%[abs])\n"
+			"	sske	%[key], %[abs]\n"
+			: [abs] "+&a" (abs)
+			: [key] "r" (key)
+			: "cc"
+		);
+	}
+}
+
 static void guest_code(void)
+{
+	/* Set storage key */
+	set_storage_key_range(mem1, sizeof(mem1), 0x90);
+	set_storage_key_range(mem2, sizeof(mem2), 0x90);
+	GUEST_SYNC(0);
+
+	/* Write, read back, without keys */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(10);
+
+	/* Write, read back, key 0 */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(20);
+
+	/* Write, read back, matching key, 1 page */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(30);
+
+	/* Write, read back, matching key, all pages */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(40);
+
+	/* Set fetch protection */
+	set_storage_key_range(0, 1, 0x18);
+	GUEST_SYNC(50);
+
+	/* Enable fetch protection override */
+	GUEST_SYNC(60);
+
+	/* Enable storage protection override, set fetch protection */
+	set_storage_key_range(mem1, sizeof(mem1), 0x98);
+	set_storage_key_range(mem2, sizeof(mem2), 0x98);
+	GUEST_SYNC(70);
+
+	/* Write, read back, mismatching key,
+	 * storage protection override, all pages
+	 */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(80);
+
+	/* VM memop, write, read back, matching key */
+	memcpy(mem2, mem1, sizeof(mem2));
+	GUEST_SYNC(90);
+
+	/* VM memop, write, read back, key 0 */
+	memcpy(mem2, mem1, sizeof(mem2));
+	/* VM memop, fail to read from 0 absolute/virtual, mismatching key,
+	 * fetch protection override does not apply to VM memops
+	 */
+	asm volatile ("sske %1,%0\n"
+		: : "r"(0), "r"(0x18) : "cc"
+	);
+	GUEST_SYNC(100);
+
+	/* Enable AR mode */
+	GUEST_SYNC(110);
+
+	/* Disable AR mode */
+	GUEST_SYNC(120);
+}
+
+static void reroll_mem1(void)
 {
 	int i;
 
-	for (;;) {
-		for (i = 0; i < sizeof(mem2); i++)
-			mem2[i] = mem1[i];
-		GUEST_SYNC(0);
-	}
+	for (i = 0; i < sizeof(mem1); i++)
+		mem1[i] = rand();
+}
+
+static int _vcpu_read_guest(struct kvm_vm *vm, void *host_addr,
+			    uintptr_t guest_addr, size_t len)
+{
+	struct kvm_s390_mem_op ksmo = {
+		.gaddr = guest_addr,
+		.flags = 0,
+		.size = len,
+		.op = KVM_S390_MEMOP_LOGICAL_READ,
+		.buf = (uintptr_t)host_addr,
+		.ar = 0,
+	};
+
+	return _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vcpu_read_guest(struct kvm_vm *vm, void *host_addr,
+			    uintptr_t guest_addr, size_t len)
+{
+	int rv;
+
+	rv = _vcpu_read_guest(vm, host_addr, guest_addr, len);
+	TEST_ASSERT(rv == 0, "vcpu memop read failed: reason = %d\n", rv);
+}
+
+static int _vcpu_read_guest_key(struct kvm_vm *vm, void *host_addr,
+				uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	struct kvm_s390_mem_op ksmo = {0};
+
+	ksmo.gaddr = guest_addr;
+	ksmo.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	ksmo.size = len;
+	ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
+	ksmo.buf = (uintptr_t)host_addr;
+	ksmo.ar = 0;
+	ksmo.key = access_key;
+
+	return _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vcpu_read_guest_key(struct kvm_vm *vm, void *host_addr,
+				uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	int rv;
+
+	rv = _vcpu_read_guest_key(vm, host_addr, guest_addr, len, access_key);
+	TEST_ASSERT(rv == 0, "vcpu memop read failed: reason = %d\n", rv);
+}
+
+static int _vcpu_write_guest(struct kvm_vm *vm, uintptr_t guest_addr,
+			     void *host_addr, size_t len)
+{
+	struct kvm_s390_mem_op ksmo = {
+		.gaddr = guest_addr,
+		.flags = 0,
+		.size = len,
+		.op = KVM_S390_MEMOP_LOGICAL_WRITE,
+		.buf = (uintptr_t)host_addr,
+		.ar = 0,
+	};
+	return _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vcpu_write_guest(struct kvm_vm *vm, uintptr_t guest_addr,
+			     void *host_addr, size_t len)
+{
+	int rv;
+
+	rv = _vcpu_write_guest(vm, guest_addr, host_addr, len);
+	TEST_ASSERT(rv == 0, "vcpu memop write failed: reason = %d\n", rv);
+}
+
+static int _vcpu_write_guest_key(struct kvm_vm *vm, uintptr_t guest_addr,
+				 void *host_addr, size_t len, u8 access_key)
+{
+	struct kvm_s390_mem_op ksmo = {0};
+
+	ksmo.gaddr = guest_addr;
+	ksmo.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	ksmo.size = len;
+	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
+	ksmo.buf = (uintptr_t)host_addr;
+	ksmo.ar = 0;
+	ksmo.key = access_key;
+
+	return _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vcpu_write_guest_key(struct kvm_vm *vm, uintptr_t guest_addr,
+				 void *host_addr, size_t len, u8 access_key)
+{
+	int rv;
+
+	rv = _vcpu_write_guest_key(vm, guest_addr, host_addr, len, access_key);
+	TEST_ASSERT(rv == 0, "vcpu memop write failed: reason = %d\n", rv);
+}
+
+static int _vm_read_guest_key(struct kvm_vm *vm, void *host_addr,
+			      uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	struct kvm_s390_mem_op ksmo = {0};
+
+	ksmo.gaddr = guest_addr;
+	ksmo.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	ksmo.size = len;
+	ksmo.op = KVM_S390_MEMOP_ABSOLUTE_READ;
+	ksmo.buf = (uintptr_t)host_addr;
+	ksmo.key = access_key;
+
+	return _vm_ioctl(vm, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vm_read_guest_key(struct kvm_vm *vm, void *host_addr,
+			      uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	int rv;
+
+	rv = _vm_read_guest_key(vm, host_addr, guest_addr, len, access_key);
+	TEST_ASSERT(rv == 0, "vm memop read failed: reason = %d\n", rv);
+}
+
+static int _vm_write_guest_key(struct kvm_vm *vm, uintptr_t guest_addr,
+			       void *host_addr, size_t len, u8 access_key)
+{
+	struct kvm_s390_mem_op ksmo = {0};
+
+	ksmo.gaddr = guest_addr;
+	ksmo.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	ksmo.size = len;
+	ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
+	ksmo.buf = (uintptr_t)host_addr;
+	ksmo.key = access_key;
+
+	return _vm_ioctl(vm, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vm_write_guest_key(struct kvm_vm *vm, uintptr_t guest_addr,
+			       void *host_addr, size_t len, u8 access_key)
+{
+	int rv;
+
+	rv = _vm_write_guest_key(vm, guest_addr, host_addr, len, access_key);
+	TEST_ASSERT(rv == 0, "vm memop write failed: reason = %d\n", rv);
 }
 
+enum access_mode {
+	ACCESS_READ,
+	ACCESS_WRITE
+};
+
+static int _vm_check_guest_key(struct kvm_vm *vm, enum access_mode mode,
+			       uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	struct kvm_s390_mem_op ksmo = {0};
+
+	ksmo.gaddr = guest_addr;
+	ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY | KVM_S390_MEMOP_F_SKEY_PROTECTION;
+	ksmo.size = len;
+	if (mode == ACCESS_READ)
+		ksmo.op = KVM_S390_MEMOP_ABSOLUTE_READ;
+	else
+		ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
+	ksmo.key = access_key;
+
+	return _vm_ioctl(vm, KVM_S390_MEM_OP, &ksmo);
+}
+
+static void vm_check_guest_key(struct kvm_vm *vm, enum access_mode mode,
+			       uintptr_t guest_addr, size_t len, u8 access_key)
+{
+	int rv;
+
+	rv = _vm_check_guest_key(vm, mode, guest_addr, len, access_key);
+	TEST_ASSERT(rv == 0, "vm memop write failed: reason = %d\n", rv);
+}
+
+#define HOST_SYNC(vmp, stage)						\
+({									\
+	struct kvm_vm *__vm = (vmp);					\
+	struct ucall uc;						\
+	int __stage = (stage);						\
+									\
+	vcpu_run(__vm, VCPU_ID);					\
+	get_ucall(__vm, VCPU_ID, &uc);					\
+	ASSERT_EQ(uc.cmd, UCALL_SYNC);					\
+	ASSERT_EQ(uc.args[1], __stage);					\
+})
+
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
 	struct kvm_run *run;
 	struct kvm_s390_mem_op ksmo;
-	int rv, i, maxsize;
+	bool has_skey_ext;
+	vm_vaddr_t guest_mem1;
+	vm_vaddr_t guest_mem2;
+	vm_paddr_t guest_mem1_abs;
+	int rv, maxsize;
 
 	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
 
@@ -45,67 +321,225 @@ int main(int argc, char *argv[])
 	}
 	if (maxsize > sizeof(mem1))
 		maxsize = sizeof(mem1);
+	has_skey_ext = kvm_check_cap(KVM_CAP_S390_MEM_OP_EXTENSION);
+	if (!has_skey_ext)
+		print_skip("Storage key extension not supported");
 
 	/* Create VM */
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	run = vcpu_state(vm, VCPU_ID);
+	guest_mem1 = (uintptr_t)mem1;
+	guest_mem2 = (uintptr_t)mem2;
+	guest_mem1_abs = addr_gva2gpa(vm, guest_mem1);
 
-	for (i = 0; i < sizeof(mem1); i++)
-		mem1[i] = i * i + i;
+	/* Set storage key */
+	HOST_SYNC(vm, 0);
 
-	/* Set the first array */
-	ksmo.gaddr = addr_gva2gpa(vm, (uintptr_t)mem1);
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	/* Write, read back, without keys */
+	reroll_mem1();
+	vcpu_write_guest(vm, guest_mem1, mem1, maxsize);
+	HOST_SYNC(vm, 10); // Copy in vm
+	memset(mem2, 0xaa, sizeof(mem2));
+	vcpu_read_guest(vm, mem2, guest_mem2, maxsize);
+	TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+		    "Memory contents do not match!");
 
-	/* Let the guest code copy the first array to the second */
-	vcpu_run(vm, VCPU_ID);
-	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
-		    "Unexpected exit reason: %u (%s)\n",
-		    run->exit_reason,
-		    exit_reason_str(run->exit_reason));
+	if (has_skey_ext) {
+		vm_vaddr_t guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
+		vm_vaddr_t guest_last_page = vm_vaddr_alloc(vm, PAGE_SIZE, last_page_addr);
+		vm_paddr_t guest_mem2_abs = addr_gva2gpa(vm, guest_mem2);
 
-	memset(mem2, 0xaa, sizeof(mem2));
+		/* Write, read back, key 0 */
+		reroll_mem1();
+		vcpu_write_guest_key(vm, guest_mem1, mem1, maxsize, 0);
+		HOST_SYNC(vm, 20); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vcpu_read_guest_key(vm, mem2, guest_mem2, maxsize, 0);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
 
-	/* Get the second array */
-	ksmo.gaddr = (uintptr_t)mem2;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
-	ksmo.buf = (uintptr_t)mem2;
-	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+		/* Write, read back, matching key, 1 page */
+		reroll_mem1();
+		vcpu_write_guest_key(vm, guest_mem1, mem1, PAGE_SIZE, 9);
+		HOST_SYNC(vm, 30); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vcpu_read_guest_key(vm, mem2, guest_mem2, PAGE_SIZE, 9);
+		TEST_ASSERT(!memcmp(mem1, mem2, PAGE_SIZE),
+			    "Memory contents do not match!");
 
-	TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
-		    "Memory contents do not match!");
+		/* Write, read back, matching key, all pages */
+		reroll_mem1();
+		vcpu_write_guest_key(vm, guest_mem1, mem1, maxsize, 9);
+		HOST_SYNC(vm, 40); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vcpu_read_guest_key(vm, mem2, guest_mem2, maxsize, 9);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
 
-	/* Check error conditions - first bad size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = -1;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+		/* Fail to write, read back old value, mismatching key */
+		rv = _vcpu_write_guest_key(vm, guest_mem1, mem1, maxsize, 2);
+		TEST_ASSERT(rv == 4, "Store should result in protection exception");
+		memset(mem2, 0xaa, sizeof(mem2));
+		vcpu_read_guest_key(vm, mem2, guest_mem2, maxsize, 2);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
+
+		/* Set fetch protection */
+		HOST_SYNC(vm, 50);
+
+		/* Write without key, read back, matching key, fetch protection */
+		reroll_mem1();
+		vcpu_write_guest(vm, guest_0_page, mem1, PAGE_SIZE);
+		memset(mem2, 0xaa, sizeof(mem2));
+		/* Let's not copy in the guest, in case guest_0_page != 0 */
+		vcpu_read_guest_key(vm, mem2, guest_0_page, PAGE_SIZE, 1);
+		TEST_ASSERT(!memcmp(mem1, mem2, PAGE_SIZE),
+			    "Memory contents do not match!");
+
+		/* Fail to read, mismatching key, fetch protection */
+		rv = _vcpu_read_guest_key(vm, mem2, guest_0_page, PAGE_SIZE, 2);
+		TEST_ASSERT(rv == 4, "Fetch should result in protection exception");
+
+		/* Enable fetch protection override */
+		run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+		run->kvm_dirty_regs = KVM_SYNC_CRS;
+		HOST_SYNC(vm, 60);
+
+		if (guest_0_page != 0)
+			print_skip("Did not allocate page at 0 for fetch protection override test");
+
+		/* Write without key, read back, mismatching key,
+		 * fetch protection override, 1 page
+		 */
+		if (guest_0_page == 0) {
+			reroll_mem1();
+			vcpu_write_guest(vm, guest_0_page, mem1, PAGE_SIZE);
+			memset(mem2, 0xaa, sizeof(mem2));
+			/* Let's not copy in the guest, in case guest_0_page != 0 */
+			vcpu_read_guest_key(vm, mem2, guest_0_page, 2048, 2);
+			TEST_ASSERT(!memcmp(mem1, mem2, 2048),
+				    "Memory contents do not match!");
+		}
+
+		/* Fail to read, mismatching key,
+		 * fetch protection override address exceeded, 1 page
+		 */
+		if (guest_0_page == 0) {
+			rv = _vcpu_read_guest_key(vm, mem2, 0, 2048 + 1, 2);
+			TEST_ASSERT(rv == 4,
+				    "Fetch should result in protection exception");
+		}
+
+		if (guest_last_page != last_page_addr)
+			print_skip("Did not allocate last page for fetch protection override test");
+
+		/* Write without key, read back, mismatching key,
+		 * fetch protection override, 2 pages, last page not fetch protected
+		 */
+		reroll_mem1();
+		vcpu_write_guest(vm, guest_last_page, mem1, PAGE_SIZE);
+		vcpu_write_guest(vm, guest_0_page, mem1 + PAGE_SIZE, PAGE_SIZE);
+		if (guest_0_page == 0 && guest_last_page == last_page_addr) {
+			memset(mem2, 0xaa, sizeof(mem2));
+			/* Let's not copy in the guest, in case guest_0_page != 0 */
+			vcpu_read_guest_key(vm, mem2, last_page_addr,
+					    PAGE_SIZE + 2048, 2);
+			TEST_ASSERT(!memcmp(mem1, mem2, PAGE_SIZE + 2048),
+				    "Memory contents do not match!");
+		}
+
+		/* Fail to read, mismatching key, fetch protection override address
+		 * exceeded, 2 pages, last page not fetch protected
+		 */
+		if (guest_0_page == 0 && guest_last_page == last_page_addr) {
+			rv = _vcpu_read_guest_key(vm, mem2, last_page_addr,
+						  PAGE_SIZE + 2048 + 1, 2);
+			TEST_ASSERT(rv == 4,
+				    "Fetch should result in protection exception");
+		}
+
+		/* Enable storage protection override, set fetch protection */
+		run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+		run->kvm_dirty_regs = KVM_SYNC_CRS;
+		HOST_SYNC(vm, 70);
+
+		/* Write, read back, mismatching key,
+		 * storage protection override, all pages
+		 */
+		reroll_mem1();
+		vcpu_write_guest_key(vm, guest_mem1, mem1, maxsize, 2);
+		HOST_SYNC(vm, 80); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vcpu_read_guest_key(vm, mem2, guest_mem2, maxsize, 2);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
+
+		/* VM memop, write, read back, matching key */
+		reroll_mem1();
+		vm_write_guest_key(vm, guest_mem1_abs, mem1, maxsize, 9);
+		HOST_SYNC(vm, 90); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vm_read_guest_key(vm, mem2, guest_mem2_abs, maxsize, 9);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
+		vm_check_guest_key(vm, ACCESS_WRITE, guest_mem1_abs, maxsize, 9);
+		vm_check_guest_key(vm, ACCESS_READ, guest_mem2_abs, maxsize, 9);
+
+		/* VM memop, write, read back, key 0 */
+		reroll_mem1();
+		vm_write_guest_key(vm, guest_mem1_abs, mem1, maxsize, 0);
+		HOST_SYNC(vm, 100); // Copy in vm
+		memset(mem2, 0xaa, sizeof(mem2));
+		vm_read_guest_key(vm, mem2, guest_mem2_abs, maxsize, 0);
+		TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+			    "Memory contents do not match!");
+		rv = _vm_check_guest_key(vm, ACCESS_READ, guest_mem1_abs, maxsize, 9);
+		TEST_ASSERT(rv == 0, "Check should succeed");
+		vm_check_guest_key(vm, ACCESS_WRITE, guest_mem1_abs, maxsize, 0);
+		vm_check_guest_key(vm, ACCESS_READ, guest_mem2_abs, maxsize, 0);
+
+		/* VM memop, fail to write, fail to read, mismatching key,
+		 * storage protection override does not apply to VM memops
+		 */
+		rv = _vm_write_guest_key(vm, guest_mem1_abs, mem1, maxsize, 2);
+		TEST_ASSERT(rv == 4, "Store should result in protection exception");
+		rv = _vm_read_guest_key(vm, mem2, guest_mem2_abs, maxsize, 2);
+		TEST_ASSERT(rv == 4, "Fetch should result in protection exception");
+		rv = _vm_check_guest_key(vm, ACCESS_WRITE, guest_mem1_abs, maxsize, 2);
+		TEST_ASSERT(rv == 4, "Check should indicate protection exception");
+		rv = _vm_check_guest_key(vm, ACCESS_READ, guest_mem2_abs, maxsize, 2);
+		TEST_ASSERT(rv == 4, "Check should indicate protection exception");
+
+		/* VM memop, fail to read from 0 absolute/virtual, mismatching key,
+		 * fetch protection override does not apply to VM memops
+		 */
+		rv = _vm_read_guest_key(vm, mem2, 0, 2048, 2);
+		TEST_ASSERT(rv != 0, "Fetch should result in exception");
+		rv = _vm_read_guest_key(vm, mem2, addr_gva2gpa(vm, 0), 2048, 2);
+		TEST_ASSERT(rv == 4, "Fetch should result in protection exception");
+	} else {
+		struct ucall uc;
+
+		do {
+			vcpu_run(vm, VCPU_ID);
+			get_ucall(vm, VCPU_ID, &uc);
+			ASSERT_EQ(uc.cmd, UCALL_SYNC);
+		} while (uc.args[1] < 100);
+	}
+
+	/* Check error conditions */
+
+	/* Bad size: */
+	rv = _vcpu_write_guest(vm, (uintptr_t)mem1, mem1, -1);
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = 0;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_write_guest(vm, (uintptr_t)mem1, mem1, 0);
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
-	ksmo.gaddr = (uintptr_t)mem1;
+	ksmo.gaddr = guest_mem1;
 	ksmo.flags = -1;
 	ksmo.size = maxsize;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
@@ -115,7 +549,7 @@ int main(int argc, char *argv[])
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
 	/* Bad operation: */
-	ksmo.gaddr = (uintptr_t)mem1;
+	ksmo.gaddr = guest_mem1;
 	ksmo.flags = 0;
 	ksmo.size = maxsize;
 	ksmo.op = -1;
@@ -135,21 +569,17 @@ int main(int argc, char *argv[])
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = 0;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_write_guest(vm, guest_mem1, 0, maxsize);
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
-	/* Bad access register: */
+	/* Enable AR mode */
 	run->psw_mask &= ~(3UL << (63 - 17));
-	run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
-	vcpu_run(vm, VCPU_ID);              /* To sync new state to SIE block */
-	ksmo.gaddr = (uintptr_t)mem1;
+	run->psw_mask |= 1UL << (63 - 17);
+	HOST_SYNC(vm, 110);
+
+	/* Bad access register: */
+	ksmo.gaddr = guest_mem1;
 	ksmo.flags = 0;
 	ksmo.size = maxsize;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
@@ -157,8 +587,10 @@ int main(int argc, char *argv[])
 	ksmo.ar = 17;
 	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15");
-	run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
-	vcpu_run(vm, VCPU_ID);                  /* Run to sync new state */
+
+	/* Disable AR mode */
+	run->psw_mask &= ~(3UL << (63 - 17));
+	HOST_SYNC(vm, 120);
 
 	kvm_vm_free(vm);
 
-- 
2.32.0



* Re: [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL
  2022-02-11 18:22 ` [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL Janis Schoetterl-Glausch
@ 2022-02-14 14:29   ` Christian Borntraeger
  2022-02-14 17:47   ` Claudio Imbrenda
  1 sibling, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2022-02-14 14:29 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Heiko Carstens, Janosch Frank
  Cc: Alexander Gordeev, Claudio Imbrenda, David Hildenbrand,
	Jonathan Corbet, kvm, linux-doc, linux-kernel, linux-s390,
	Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Am 11.02.22 um 19:22 schrieb Janis Schoetterl-Glausch:
> Availability of the KVM_CAP_S390_MEM_OP_EXTENSION capability signals that:
> * The vcpu MEM_OP IOCTL supports storage key checking.
> * The vm MEM_OP IOCTL exists.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Reviewed-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>

> ---
>   arch/s390/kvm/kvm-s390.c | 1 +
>   include/uapi/linux/kvm.h | 1 +
>   2 files changed, 2 insertions(+)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 773bccdd446c..c2c26c2aad64 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -564,6 +564,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>   	case KVM_CAP_S390_VCPU_RESETS:
>   	case KVM_CAP_SET_GUEST_DEBUG:
>   	case KVM_CAP_S390_DIAG318:
> +	case KVM_CAP_S390_MEM_OP_EXTENSION:
>   		r = 1;
>   		break;
>   	case KVM_CAP_SET_GUEST_DEBUG2:
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 4bc7623def87..08756eeea065 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1140,6 +1140,7 @@ struct kvm_ppc_resize_hpt {
>   #define KVM_CAP_VM_GPA_BITS 207
>   #define KVM_CAP_XSAVE2 208
>   #define KVM_CAP_SYS_ATTRIBUTES 209
> +#define KVM_CAP_S390_MEM_OP_EXTENSION 210
>   
>   #ifdef KVM_CAP_IRQ_ROUTING
>   


* Re: [PATCH v4 00/10] KVM: s390: Do storage key checking
  2022-02-11 18:22 [PATCH v4 00/10] KVM: s390: Do storage key checking Janis Schoetterl-Glausch
                   ` (9 preceding siblings ...)
  2022-02-11 18:22 ` [PATCH v4 10/10] KVM: s390: selftests: Test memops with storage keys Janis Schoetterl-Glausch
@ 2022-02-14 14:52 ` Christian Borntraeger
  10 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2022-02-14 14:52 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Heiko Carstens, Janosch Frank
  Cc: Alexander Gordeev, Claudio Imbrenda, David Hildenbrand,
	Jonathan Corbet, kvm, linux-doc, linux-kernel, linux-s390,
	Paolo Bonzini, Sven Schnelle, Vasily Gorbik

Am 11.02.22 um 19:22 schrieb Janis Schoetterl-Glausch:
> Check keys when emulating instructions and let user space do key checked
> accesses.
> User space can do so via an extension of the MEMOP IOCTL:
> * allow optional key checking
> * allow MEMOP on vm fd, so key checked accesses on absolute memory
>    become possible
> 
> I haven't finished the memop selftest rewrite, but decided to send out a
> new version anyway, since the functional patches are (hopefully) final
> and the memop selftest patch works. I'll reply to it with the
> rewritten version.
> 
> v3: https://lore.kernel.org/kvm/20220209170422.1910690-1-scgl@linux.ibm.com/
> v2: https://lore.kernel.org/kvm/20220207165930.1608621-1-scgl@linux.ibm.com/
> 
> v3 -> v4
>   * rebase
>   * ignore key in memop if skey flag not specified
>   * fix nits in documentation
>   * pick up tags

I queued patches 1-9 for CI runners (and will also queue for next).


* Re: [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions
  2022-02-11 18:22 ` [PATCH v4 01/10] s390/uaccess: Add copy_from/to_user_key functions Janis Schoetterl-Glausch
@ 2022-02-14 15:30   ` Claudio Imbrenda
  0 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2022-02-14 15:30 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: Christian Borntraeger, Heiko Carstens, Janosch Frank,
	Alexander Gordeev, David Hildenbrand, Jonathan Corbet, kvm,
	linux-doc, linux-kernel, linux-s390, Paolo Bonzini,
	Sven Schnelle, Vasily Gorbik

On Fri, 11 Feb 2022 19:22:06 +0100
Janis Schoetterl-Glausch <scgl@linux.ibm.com> wrote:

> Add copy_from/to_user_key functions, which perform storage key checking.
> These functions can be used by KVM for emulating instructions that need
> to be key checked.
> These functions differ from their non _key counterparts in
> include/linux/uaccess.h only in the additional key argument and must be
> kept in sync with those.
> 
> Since the existing uaccess implementation on s390 makes use of move
> instructions that support having an additional access key supplied,
> we can implement raw_copy_from/to_user_key by enhancing the
> existing implementation.
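
(As an aside: usage mirrors the non-key variants, e.g. a minimal
sketch, with kbuf/uaddr/len being placeholders:

	unsigned long rest;

	/* returns the number of bytes that could not be copied */
	rest = copy_from_user_key(kbuf, uaddr, len, access_key);
	if (rest)
		return -EFAULT;

where access_key is the storage access key the move instructions
check against.)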
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Acked-by: Heiko Carstens <hca@linux.ibm.com>
> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
> Acked-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/include/asm/uaccess.h | 22 +++++++++
>  arch/s390/lib/uaccess.c         | 81 +++++++++++++++++++++++++--------
>  2 files changed, 85 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
> index d74e26b48604..ba1bcb91af95 100644
> --- a/arch/s390/include/asm/uaccess.h
> +++ b/arch/s390/include/asm/uaccess.h
> @@ -44,6 +44,28 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n);
>  #define INLINE_COPY_TO_USER
>  #endif
>  
> +unsigned long __must_check
> +_copy_from_user_key(void *to, const void __user *from, unsigned long n, unsigned long key);
> +
> +static __always_inline unsigned long __must_check
> +copy_from_user_key(void *to, const void __user *from, unsigned long n, unsigned long key)
> +{
> +	if (likely(check_copy_size(to, n, false)))
> +		n = _copy_from_user_key(to, from, n, key);
> +	return n;
> +}
> +
> +unsigned long __must_check
> +_copy_to_user_key(void __user *to, const void *from, unsigned long n, unsigned long key);
> +
> +static __always_inline unsigned long __must_check
> +copy_to_user_key(void __user *to, const void *from, unsigned long n, unsigned long key)
> +{
> +	if (likely(check_copy_size(from, n, true)))
> +		n = _copy_to_user_key(to, from, n, key);
> +	return n;
> +}
> +
>  int __put_user_bad(void) __attribute__((noreturn));
>  int __get_user_bad(void) __attribute__((noreturn));
>  
> diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
> index 8a5d21461889..b709239feb5d 100644
> --- a/arch/s390/lib/uaccess.c
> +++ b/arch/s390/lib/uaccess.c
> @@ -59,11 +59,13 @@ static inline int copy_with_mvcos(void)
>  #endif
>  
>  static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
> -						 unsigned long size)
> +						 unsigned long size, unsigned long key)
>  {
>  	unsigned long tmp1, tmp2;
>  	union oac spec = {
> +		.oac2.key = key,
>  		.oac2.as = PSW_BITS_AS_SECONDARY,
> +		.oac2.k = 1,
>  		.oac2.a = 1,
>  	};
>  
> @@ -94,19 +96,19 @@ static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr
>  }
>  
>  static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
> -						unsigned long size)
> +						unsigned long size, unsigned long key)
>  {
>  	unsigned long tmp1, tmp2;
>  
>  	tmp1 = -256UL;
>  	asm volatile(
>  		"   sacf  0\n"
> -		"0: mvcp  0(%0,%2),0(%1),%3\n"
> +		"0: mvcp  0(%0,%2),0(%1),%[key]\n"
>  		"7: jz    5f\n"
>  		"1: algr  %0,%3\n"
>  		"   la    %1,256(%1)\n"
>  		"   la    %2,256(%2)\n"
> -		"2: mvcp  0(%0,%2),0(%1),%3\n"
> +		"2: mvcp  0(%0,%2),0(%1),%[key]\n"
>  		"8: jnz   1b\n"
>  		"   j     5f\n"
>  		"3: la    %4,255(%1)\n"	/* %4 = ptr + 255 */
> @@ -115,7 +117,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
>  		"   slgr  %4,%1\n"
>  		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
>  		"   jnh   6f\n"
> -		"4: mvcp  0(%4,%2),0(%1),%3\n"
> +		"4: mvcp  0(%4,%2),0(%1),%[key]\n"
>  		"9: slgr  %0,%4\n"
>  		"   j     6f\n"
>  		"5: slgr  %0,%0\n"
> @@ -123,24 +125,49 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
>  		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
>  		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
>  		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
> -		: : "cc", "memory");
> +		: [key] "d" (key << 4)
> +		: "cc", "memory");
>  	return size;
>  }
>  
> -unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
> +static unsigned long raw_copy_from_user_key(void *to, const void __user *from,
> +					    unsigned long n, unsigned long key)
>  {
>  	if (copy_with_mvcos())
> -		return copy_from_user_mvcos(to, from, n);
> -	return copy_from_user_mvcp(to, from, n);
> +		return copy_from_user_mvcos(to, from, n, key);
> +	return copy_from_user_mvcp(to, from, n, key);
> +}
> +
> +unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
> +{
> +	return raw_copy_from_user_key(to, from, n, 0);
>  }
>  EXPORT_SYMBOL(raw_copy_from_user);
>  
> +unsigned long _copy_from_user_key(void *to, const void __user *from,
> +				  unsigned long n, unsigned long key)
> +{
> +	unsigned long res = n;
> +
> +	might_fault();
> +	if (!should_fail_usercopy()) {
> +		instrument_copy_from_user(to, from, n);
> +		res = raw_copy_from_user_key(to, from, n, key);
> +	}
> +	if (unlikely(res))
> +		memset(to + (n - res), 0, res);
> +	return res;
> +}
> +EXPORT_SYMBOL(_copy_from_user_key);
> +
>  static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
> -					       unsigned long size)
> +					       unsigned long size, unsigned long key)
>  {
>  	unsigned long tmp1, tmp2;
>  	union oac spec = {
> +		.oac1.key = key,
>  		.oac1.as = PSW_BITS_AS_SECONDARY,
> +		.oac1.k = 1,
>  		.oac1.a = 1,
>  	};
>  
> @@ -171,19 +198,19 @@ static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
>  }
>  
>  static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
> -					      unsigned long size)
> +					      unsigned long size, unsigned long key)
>  {
>  	unsigned long tmp1, tmp2;
>  
>  	tmp1 = -256UL;
>  	asm volatile(
>  		"   sacf  0\n"
> -		"0: mvcs  0(%0,%1),0(%2),%3\n"
> +		"0: mvcs  0(%0,%1),0(%2),%[key]\n"
>  		"7: jz    5f\n"
>  		"1: algr  %0,%3\n"
>  		"   la    %1,256(%1)\n"
>  		"   la    %2,256(%2)\n"
> -		"2: mvcs  0(%0,%1),0(%2),%3\n"
> +		"2: mvcs  0(%0,%1),0(%2),%[key]\n"
>  		"8: jnz   1b\n"
>  		"   j     5f\n"
>  		"3: la    %4,255(%1)\n" /* %4 = ptr + 255 */
> @@ -192,7 +219,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
>  		"   slgr  %4,%1\n"
>  		"   clgr  %0,%4\n"	/* copy crosses next page boundary? */
>  		"   jnh   6f\n"
> -		"4: mvcs  0(%4,%1),0(%2),%3\n"
> +		"4: mvcs  0(%4,%1),0(%2),%[key]\n"
>  		"9: slgr  %0,%4\n"
>  		"   j     6f\n"
>  		"5: slgr  %0,%0\n"
> @@ -200,18 +227,36 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
>  		EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
>  		EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
>  		: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
> -		: : "cc", "memory");
> +		: [key] "d" (key << 4)
> +		: "cc", "memory");
>  	return size;
>  }
>  
> -unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
> +static unsigned long raw_copy_to_user_key(void __user *to, const void *from,
> +					  unsigned long n, unsigned long key)
>  {
>  	if (copy_with_mvcos())
> -		return copy_to_user_mvcos(to, from, n);
> -	return copy_to_user_mvcs(to, from, n);
> +		return copy_to_user_mvcos(to, from, n, key);
> +	return copy_to_user_mvcs(to, from, n, key);
> +}
> +
> +unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
> +{
> +	return raw_copy_to_user_key(to, from, n, 0);
>  }
>  EXPORT_SYMBOL(raw_copy_to_user);
>  
> +unsigned long _copy_to_user_key(void __user *to, const void *from,
> +				unsigned long n, unsigned long key)
> +{
> +	might_fault();
> +	if (should_fail_usercopy())
> +		return n;
> +	instrument_copy_to_user(to, from, n);
> +	return raw_copy_to_user_key(to, from, n, key);
> +}
> +EXPORT_SYMBOL(_copy_to_user_key);
> +
>  static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
>  {
>  	unsigned long tmp1, tmp2;


^ permalink raw reply	[flat|nested] 39+ messages in thread
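
For context, a minimal sketch (not part of the series) of how a kernel-side
caller might use the new helper; the function name and variables are invented
for illustration. As with copy_from_user(), the return value is the number of
bytes that could not be copied, and with the _key variants a key protection
violation also surfaces as an incomplete copy:

  /*
   * Hypothetical caller: fetch "len" bytes from user space, checked
   * against the storage keys with the 4-bit access key "access_key"
   * (key 0 matches any storage key).
   */
  static int fetch_checked(void *buf, const void __user *uptr,
                           unsigned long len, u8 access_key)
  {
          unsigned long uncopied;

          uncopied = copy_from_user_key(buf, uptr, len, access_key);
          if (uncopied)
                  return -EFAULT; /* fault or key protection violation */
          return 0;
  }

The key << 4 in the MVCP/MVCS variants reflects that those instructions expect
the access key in bits 4-7 of the low byte of the register operand, whereas
MVCOS takes it via the key/k fields of the OAC.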

* Re: [PATCH v4 02/10] KVM: s390: Honor storage keys when accessing guest memory
  2022-02-11 18:22 ` [PATCH v4 02/10] KVM: s390: Honor storage keys when accessing guest memory Janis Schoetterl-Glausch
@ 2022-02-14 15:52   ` Claudio Imbrenda
  0 siblings, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2022-02-14 15:52 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: Christian Borntraeger, Heiko Carstens, Janosch Frank,
	Alexander Gordeev, David Hildenbrand, Jonathan Corbet, kvm,
	linux-doc, linux-kernel, linux-s390, Paolo Bonzini,
	Sven Schnelle, Vasily Gorbik

On Fri, 11 Feb 2022 19:22:07 +0100
Janis Schoetterl-Glausch <scgl@linux.ibm.com> wrote:

> Storage key checking had not been implemented for instructions emulated
> by KVM. Implement it by enhancing the functions used for guest access,
> in particular those making use of access_guest, which has been renamed
> to access_guest_with_key.
> Accesses via access_guest_real should not be key checked.
> 
> For actual accesses, key checking is done by
> copy_from/to_user_key (which internally uses MVCOS/MVCP/MVCS).
> In cases where accessibility is checked without an actual access,
> this is performed by getting the storage key and checking if the access
> key matches. In both cases, if applicable, storage and fetch protection
> override are honored.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/include/asm/ctl_reg.h |   2 +
>  arch/s390/include/asm/page.h    |   2 +
>  arch/s390/kvm/gaccess.c         | 187 ++++++++++++++++++++++++++++++--
>  arch/s390/kvm/gaccess.h         |  77 +++++++++++--
>  arch/s390/kvm/intercept.c       |  12 +-
>  arch/s390/kvm/kvm-s390.c        |   4 +-
>  6 files changed, 253 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/s390/include/asm/ctl_reg.h b/arch/s390/include/asm/ctl_reg.h
> index 04dc65f8901d..c800199a376b 100644
> --- a/arch/s390/include/asm/ctl_reg.h
> +++ b/arch/s390/include/asm/ctl_reg.h
> @@ -12,6 +12,8 @@
>  
>  #define CR0_CLOCK_COMPARATOR_SIGN	BIT(63 - 10)
>  #define CR0_LOW_ADDRESS_PROTECTION	BIT(63 - 35)
> +#define CR0_FETCH_PROTECTION_OVERRIDE	BIT(63 - 38)
> +#define CR0_STORAGE_PROTECTION_OVERRIDE	BIT(63 - 39)
>  #define CR0_EMERGENCY_SIGNAL_SUBMASK	BIT(63 - 49)
>  #define CR0_EXTERNAL_CALL_SUBMASK	BIT(63 - 50)
>  #define CR0_CLOCK_COMPARATOR_SUBMASK	BIT(63 - 52)
> diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
> index d98d17a36c7b..cfc4d6fb2385 100644
> --- a/arch/s390/include/asm/page.h
> +++ b/arch/s390/include/asm/page.h
> @@ -20,6 +20,8 @@
>  #define PAGE_SIZE	_PAGE_SIZE
>  #define PAGE_MASK	_PAGE_MASK
>  #define PAGE_DEFAULT_ACC	0
> +/* storage-protection override */
> +#define PAGE_SPO_ACC		9
>  #define PAGE_DEFAULT_KEY	(PAGE_DEFAULT_ACC << 4)
>  
>  #define HPAGE_SHIFT	20
> diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
> index 4460808c3b9a..7fca0cff4c12 100644
> --- a/arch/s390/kvm/gaccess.c
> +++ b/arch/s390/kvm/gaccess.c
> @@ -10,6 +10,7 @@
>  #include <linux/mm_types.h>
>  #include <linux/err.h>
>  #include <linux/pgtable.h>
> +#include <linux/bitfield.h>
>  
>  #include <asm/gmap.h>
>  #include "kvm-s390.h"
> @@ -794,6 +795,79 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
>  	return 1;
>  }
>  
> +static bool fetch_prot_override_applicable(struct kvm_vcpu *vcpu, enum gacc_mode mode,
> +					   union asce asce)
> +{
> +	psw_t *psw = &vcpu->arch.sie_block->gpsw;
> +	unsigned long override;
> +
> +	if (mode == GACC_FETCH || mode == GACC_IFETCH) {
> +		/* check if fetch protection override enabled */
> +		override = vcpu->arch.sie_block->gcr[0];
> +		override &= CR0_FETCH_PROTECTION_OVERRIDE;
> +		/* not applicable if subject to DAT && private space */
> +		override = override && !(psw_bits(*psw).dat && asce.p);
> +		return override;
> +	}
> +	return false;
> +}
> +
> +static bool fetch_prot_override_applies(unsigned long ga, unsigned int len)
> +{
> +	return ga < 2048 && ga + len <= 2048;
> +}
> +
> +static bool storage_prot_override_applicable(struct kvm_vcpu *vcpu)
> +{
> +	/* check if storage protection override enabled */
> +	return vcpu->arch.sie_block->gcr[0] & CR0_STORAGE_PROTECTION_OVERRIDE;
> +}
> +
> +static bool storage_prot_override_applies(u8 access_control)
> +{
> +	/* matches special storage protection override key (9) -> allow */
> +	return access_control == PAGE_SPO_ACC;
> +}
> +
> +static int vcpu_check_access_key(struct kvm_vcpu *vcpu, u8 access_key,
> +				 enum gacc_mode mode, union asce asce, gpa_t gpa,
> +				 unsigned long ga, unsigned int len)
> +{
> +	u8 storage_key, access_control;
> +	unsigned long hva;
> +	int r;
> +
> +	/* access key 0 matches any storage key -> allow */
> +	if (access_key == 0)
> +		return 0;
> +	/*
> +	 * caller needs to ensure that gfn is accessible, so we can
> +	 * assume that this cannot fail
> +	 */
> +	hva = gfn_to_hva(vcpu->kvm, gpa_to_gfn(gpa));
> +	mmap_read_lock(current->mm);
> +	r = get_guest_storage_key(current->mm, hva, &storage_key);
> +	mmap_read_unlock(current->mm);
> +	if (r)
> +		return r;
> +	access_control = FIELD_GET(_PAGE_ACC_BITS, storage_key);
> +	/* access key matches storage key -> allow */
> +	if (access_control == access_key)
> +		return 0;
> +	if (mode == GACC_FETCH || mode == GACC_IFETCH) {
> +		/* it is a fetch and fetch protection is off -> allow */
> +		if (!(storage_key & _PAGE_FP_BIT))
> +			return 0;
> +		if (fetch_prot_override_applicable(vcpu, mode, asce) &&
> +		    fetch_prot_override_applies(ga, len))
> +			return 0;
> +	}
> +	if (storage_prot_override_applicable(vcpu) &&
> +	    storage_prot_override_applies(access_control))
> +		return 0;
> +	return PGM_PROTECTION;
> +}
> +
>  /**
>   * guest_range_to_gpas() - Calculate guest physical addresses of page fragments
>   * covering a logical range
> @@ -804,6 +878,7 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
>   * @len: length of range in bytes
>   * @asce: address-space-control element to use for translation
>   * @mode: access mode
> + * @access_key: access key to match the range's storage keys against
>   *
>   * Translate a logical range to a series of guest absolute addresses,
>   * such that the concatenation of page fragments starting at each gpa make up
> @@ -830,7 +905,8 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
>   */
>  static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  			       unsigned long *gpas, unsigned long len,
> -			       const union asce asce, enum gacc_mode mode)
> +			       const union asce asce, enum gacc_mode mode,
> +			       u8 access_key)
>  {
>  	psw_t *psw = &vcpu->arch.sie_block->gpsw;
>  	unsigned int offset = offset_in_page(ga);
> @@ -857,6 +933,10 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  		}
>  		if (rc)
>  			return trans_exc(vcpu, rc, ga, ar, mode, prot);
> +		rc = vcpu_check_access_key(vcpu, access_key, mode, asce, gpa, ga,
> +					   fragment_len);
> +		if (rc)
> +			return trans_exc(vcpu, rc, ga, ar, mode, PROT_TYPE_KEYC);
>  		if (gpas)
>  			*gpas++ = gpa;
>  		offset = 0;
> @@ -880,16 +960,54 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
>  	return rc;
>  }
>  
> -int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
> -		 unsigned long len, enum gacc_mode mode)
> +static int
> +access_guest_page_with_key(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
> +			   void *data, unsigned int len, u8 access_key)
> +{
> +	struct kvm_memory_slot *slot;
> +	bool writable;
> +	gfn_t gfn;
> +	hva_t hva;
> +	int rc;
> +
> +	gfn = gpa >> PAGE_SHIFT;
> +	slot = gfn_to_memslot(kvm, gfn);
> +	hva = gfn_to_hva_memslot_prot(slot, gfn, &writable);
> +
> +	if (kvm_is_error_hva(hva))
> +		return PGM_ADDRESSING;
> +	/*
> +	 * Check if it's a ro memslot, even though that can't occur (they're unsupported).
> +	 * Don't try to actually handle that case.
> +	 */
> +	if (!writable && mode == GACC_STORE)
> +		return -EOPNOTSUPP;
> +	hva += offset_in_page(gpa);
> +	if (mode == GACC_STORE)
> +		rc = copy_to_user_key((void __user *)hva, data, len, access_key);
> +	else
> +		rc = copy_from_user_key(data, (void __user *)hva, len, access_key);
> +	if (rc)
> +		return PGM_PROTECTION;
> +	if (mode == GACC_STORE)
> +		mark_page_dirty_in_slot(kvm, slot, gfn);
> +	return 0;
> +}
> +
> +int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
> +			  void *data, unsigned long len, enum gacc_mode mode,
> +			  u8 access_key)
>  {
>  	psw_t *psw = &vcpu->arch.sie_block->gpsw;
>  	unsigned long nr_pages, idx;
>  	unsigned long gpa_array[2];
>  	unsigned int fragment_len;
>  	unsigned long *gpas;
> +	enum prot_type prot;
>  	int need_ipte_lock;
>  	union asce asce;
> +	bool try_storage_prot_override;
> +	bool try_fetch_prot_override;
>  	int rc;
>  
>  	if (!len)
> @@ -904,16 +1022,47 @@ int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>  		gpas = vmalloc(array_size(nr_pages, sizeof(unsigned long)));
>  	if (!gpas)
>  		return -ENOMEM;
> +	try_fetch_prot_override = fetch_prot_override_applicable(vcpu, mode, asce);
> +	try_storage_prot_override = storage_prot_override_applicable(vcpu);
>  	need_ipte_lock = psw_bits(*psw).dat && !asce.r;
>  	if (need_ipte_lock)
>  		ipte_lock(vcpu);
> -	rc = guest_range_to_gpas(vcpu, ga, ar, gpas, len, asce, mode);
> -	for (idx = 0; idx < nr_pages && !rc; idx++) {
> +	/*
> +	 * Since we do the access further down ultimately via a move instruction
> +	 * that does key checking and returns an error in case of a protection
> +	 * violation, we don't need to do the check during address translation.
> +	 * Skip it by passing access key 0, which matches any storage key,
> +	 * obviating the need for any further checks. As a result the check is
> +	 * handled entirely in hardware on access; we only need to take care to
> +	 * forgo key protection checking if fetch protection override applies, or
> +	 * to retry with the special key 9 in case of storage protection override.
> +	 */
> +	rc = guest_range_to_gpas(vcpu, ga, ar, gpas, len, asce, mode, 0);
> +	if (rc)
> +		goto out_unlock;
> +	for (idx = 0; idx < nr_pages; idx++) {
>  		fragment_len = min(PAGE_SIZE - offset_in_page(gpas[idx]), len);
> -		rc = access_guest_page(vcpu->kvm, mode, gpas[idx], data, fragment_len);
> +		if (try_fetch_prot_override && fetch_prot_override_applies(ga, fragment_len)) {
> +			rc = access_guest_page(vcpu->kvm, mode, gpas[idx],
> +					       data, fragment_len);
> +		} else {
> +			rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
> +							data, fragment_len, access_key);
> +		}
> +		if (rc == PGM_PROTECTION && try_storage_prot_override)
> +			rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
> +							data, fragment_len, PAGE_SPO_ACC);
> +		if (rc == PGM_PROTECTION)
> +			prot = PROT_TYPE_KEYC;
> +		if (rc)
> +			break;
>  		len -= fragment_len;
>  		data += fragment_len;
> +		ga = kvm_s390_logical_to_effective(vcpu, ga + fragment_len);
>  	}
> +	if (rc > 0)
> +		rc = trans_exc(vcpu, rc, ga, ar, mode, prot);
> +out_unlock:
>  	if (need_ipte_lock)
>  		ipte_unlock(vcpu);
>  	if (nr_pages > ARRAY_SIZE(gpa_array))
> @@ -940,12 +1089,13 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>  }
>  
>  /**
> - * guest_translate_address - translate guest logical into guest absolute address
> + * guest_translate_address_with_key - translate guest logical into guest absolute address
>   * @vcpu: virtual cpu
>   * @gva: Guest virtual address
>   * @ar: Access register
>   * @gpa: Guest physical address
>   * @mode: Translation access mode
> + * @access_key: access key to match the storage key with
>   *
>   * Parameter semantics are the same as the ones from guest_translate.
>   * The memory contents at the guest address are not changed.
> @@ -953,8 +1103,9 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>   * Note: The IPTE lock is not taken during this function, so the caller
>   * has to take care of this.
>   */
> -int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> -			    unsigned long *gpa, enum gacc_mode mode)
> +int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> +				     unsigned long *gpa, enum gacc_mode mode,
> +				     u8 access_key)
>  {
>  	union asce asce;
>  	int rc;
> @@ -963,7 +1114,17 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>  	rc = get_vcpu_asce(vcpu, &asce, gva, ar, mode);
>  	if (rc)
>  		return rc;
> -	return guest_range_to_gpas(vcpu, gva, ar, gpa, 1, asce, mode);
> +	return guest_range_to_gpas(vcpu, gva, ar, gpa, 1, asce, mode,
> +				   access_key);
> +}
> +
> +int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> +			    unsigned long *gpa, enum gacc_mode mode)
> +{
> +	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
> +
> +	return guest_translate_address_with_key(vcpu, gva, ar, gpa, mode,
> +						access_key);
>  }
>  
>  /**
> @@ -973,9 +1134,10 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>   * @ar: Access register
>   * @length: Length of test range
>   * @mode: Translation access mode
> + * @access_key: access key to match the storage keys with
>   */
>  int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> -		    unsigned long length, enum gacc_mode mode)
> +		    unsigned long length, enum gacc_mode mode, u8 access_key)
>  {
>  	union asce asce;
>  	int rc = 0;
> @@ -984,7 +1146,8 @@ int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>  	if (rc)
>  		return rc;
>  	ipte_lock(vcpu);
> -	rc = guest_range_to_gpas(vcpu, gva, ar, NULL, length, asce, mode);
> +	rc = guest_range_to_gpas(vcpu, gva, ar, NULL, length, asce, mode,
> +				 access_key);
>  	ipte_unlock(vcpu);
>  
>  	return rc;
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index 7c72a5e3449f..e5b2f56e7962 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -186,24 +186,31 @@ enum gacc_mode {
>  	GACC_IFETCH,
>  };
>  
> +int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> +				     unsigned long *gpa, enum gacc_mode mode,
> +				     u8 access_key);
> +
>  int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
>  			    u8 ar, unsigned long *gpa, enum gacc_mode mode);
> +
>  int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
> -		    unsigned long length, enum gacc_mode mode);
> +		    unsigned long length, enum gacc_mode mode, u8 access_key);
>  
> -int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
> -		 unsigned long len, enum gacc_mode mode);
> +int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
> +			  void *data, unsigned long len, enum gacc_mode mode,
> +			  u8 access_key);
>  
>  int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>  		      void *data, unsigned long len, enum gacc_mode mode);
>  
>  /**
> - * write_guest - copy data from kernel space to guest space
> + * write_guest_with_key - copy data from kernel space to guest space
>   * @vcpu: virtual cpu
>   * @ga: guest address
>   * @ar: access register
>   * @data: source address in kernel space
>   * @len: number of bytes to copy
> + * @access_key: access key the storage key needs to match
>   *
>   * Copy @len bytes from @data (kernel space) to @ga (guest address).
>   * In order to copy data to guest space the PSW of the vcpu is inspected:
> @@ -214,8 +221,8 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>   * The addressing mode of the PSW is also inspected, so that address wrap
>   * around is taken into account for 24-, 31- and 64-bit addressing mode,
>   * if the to be copied data crosses page boundaries in guest address space.
> - * In addition also low address and DAT protection are inspected before
> - * copying any data (key protection is currently not implemented).
> + * In addition low address, DAT and key protection checks are performed before
> + * copying any data.
>   *
>   * This function modifies the 'struct kvm_s390_pgm_info pgm' member of @vcpu.
>   * In case of an access exception (e.g. protection exception) pgm will contain
> @@ -243,10 +250,53 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>   *	 if data has been changed in guest space in case of an exception.
>   */
>  static inline __must_check
> +int write_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
> +			 void *data, unsigned long len, u8 access_key)
> +{
> +	return access_guest_with_key(vcpu, ga, ar, data, len, GACC_STORE,
> +				     access_key);
> +}
> +
> +/**
> + * write_guest - copy data from kernel space to guest space
> + * @vcpu: virtual cpu
> + * @ga: guest address
> + * @ar: access register
> + * @data: source address in kernel space
> + * @len: number of bytes to copy
> + *
> + * The behaviour of write_guest is identical to write_guest_with_key, except
> + * that the PSW access key is used instead of an explicit argument.
> + */
> +static inline __must_check
>  int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>  		unsigned long len)
>  {
> -	return access_guest(vcpu, ga, ar, data, len, GACC_STORE);
> +	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
> +
> +	return write_guest_with_key(vcpu, ga, ar, data, len, access_key);
> +}
> +
> +/**
> + * read_guest_with_key - copy data from guest space to kernel space
> + * @vcpu: virtual cpu
> + * @ga: guest address
> + * @ar: access register
> + * @data: destination address in kernel space
> + * @len: number of bytes to copy
> + * @access_key: access key the storage key needs to match
> + *
> + * Copy @len bytes from @ga (guest address) to @data (kernel space).
> + *
> + * The behaviour of read_guest_with_key is identical to write_guest_with_key,
> + * except that data will be copied from guest space to kernel space.
> + */
> +static inline __must_check
> +int read_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
> +			void *data, unsigned long len, u8 access_key)
> +{
> +	return access_guest_with_key(vcpu, ga, ar, data, len, GACC_FETCH,
> +				     access_key);
>  }
>  
>  /**
> @@ -259,14 +309,16 @@ int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>   *
>   * Copy @len bytes from @ga (guest address) to @data (kernel space).
>   *
> - * The behaviour of read_guest is identical to write_guest, except that
> - * data will be copied from guest space to kernel space.
> + * The behaviour of read_guest is identical to read_guest_with_key, except
> + * that the PSW access key is used instead of an explicit argument.
>   */
>  static inline __must_check
>  int read_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>  	       unsigned long len)
>  {
> -	return access_guest(vcpu, ga, ar, data, len, GACC_FETCH);
> +	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
> +
> +	return read_guest_with_key(vcpu, ga, ar, data, len, access_key);
>  }
>  
>  /**
> @@ -287,7 +339,10 @@ static inline __must_check
>  int read_guest_instr(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
>  		     unsigned long len)
>  {
> -	return access_guest(vcpu, ga, 0, data, len, GACC_IFETCH);
> +	u8 access_key = psw_bits(vcpu->arch.sie_block->gpsw).key;
> +
> +	return access_guest_with_key(vcpu, ga, 0, data, len, GACC_IFETCH,
> +				     access_key);
>  }
>  
>  /**
> diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
> index d07ff646d844..8bd42a20d924 100644
> --- a/arch/s390/kvm/intercept.c
> +++ b/arch/s390/kvm/intercept.c
> @@ -331,18 +331,18 @@ static int handle_mvpg_pei(struct kvm_vcpu *vcpu)
>  
>  	kvm_s390_get_regs_rre(vcpu, &reg1, &reg2);
>  
> -	/* Make sure that the source is paged-in */
> -	rc = guest_translate_address(vcpu, vcpu->run->s.regs.gprs[reg2],
> -				     reg2, &srcaddr, GACC_FETCH);
> +	/* Ensure that the source is paged-in, no actual access -> no key checking */
> +	rc = guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg2],
> +					      reg2, &srcaddr, GACC_FETCH, 0);
>  	if (rc)
>  		return kvm_s390_inject_prog_cond(vcpu, rc);
>  	rc = kvm_arch_fault_in_page(vcpu, srcaddr, 0);
>  	if (rc != 0)
>  		return rc;
>  
> -	/* Make sure that the destination is paged-in */
> -	rc = guest_translate_address(vcpu, vcpu->run->s.regs.gprs[reg1],
> -				     reg1, &dstaddr, GACC_STORE);
> +	/* Ensure that the destination is paged-in, no actual access -> no key checking */
> +	rc = guest_translate_address_with_key(vcpu, vcpu->run->s.regs.gprs[reg1],
> +					      reg1, &dstaddr, GACC_STORE, 0);
>  	if (rc)
>  		return kvm_s390_inject_prog_cond(vcpu, rc);
>  	rc = kvm_arch_fault_in_page(vcpu, dstaddr, 1);
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 2296b1ff1e02..fdbd6c1dc709 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -4713,7 +4713,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
>  	case KVM_S390_MEMOP_LOGICAL_READ:
>  		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
>  			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
> -					    mop->size, GACC_FETCH);
> +					    mop->size, GACC_FETCH, 0);
>  			break;
>  		}
>  		r = read_guest(vcpu, mop->gaddr, mop->ar, tmpbuf, mop->size);
> @@ -4725,7 +4725,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
>  	case KVM_S390_MEMOP_LOGICAL_WRITE:
>  		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
>  			r = check_gva_range(vcpu, mop->gaddr, mop->ar,
> -					    mop->size, GACC_STORE);
> +					    mop->size, GACC_STORE, 0);
>  			break;
>  		}
>  		if (copy_from_user(tmpbuf, uaddr, mop->size)) {


^ permalink raw reply	[flat|nested] 39+ messages in thread
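
The matching rules implemented by vcpu_check_access_key() can be restated as a
small self-contained sketch (not code from the series; the overrides are left
out and all names are invented):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Does access key "access_key" permit an access to a page whose
   * storage key has access-control bits "acc" and fetch-protection
   * bit "fp"? Fetch/storage protection override are omitted here.
   */
  static bool key_permits_access(uint8_t access_key, uint8_t acc,
                                 bool fp, bool is_fetch)
  {
          if (access_key == 0)    /* key 0 matches any storage key */
                  return true;
          if (access_key == acc)  /* matching keys are always allowed */
                  return true;
          if (is_fetch && !fp)    /* fetches are allowed unless fetch protected */
                  return true;
          return false;           /* otherwise: protection exception */
  }

On top of this, the patch allows two override cases: fetch protection override
for effective addresses below 2048 (when enabled in CR0 and applicable), and
storage protection override for pages with access control 9.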

* Re: [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL
  2022-02-11 18:22 ` [PATCH v4 08/10] KVM: s390: Add capability for storage key extension of MEM_OP IOCTL Janis Schoetterl-Glausch
  2022-02-14 14:29   ` Christian Borntraeger
@ 2022-02-14 17:47   ` Claudio Imbrenda
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2022-02-14 17:47 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: Christian Borntraeger, Heiko Carstens, Janosch Frank,
	Alexander Gordeev, David Hildenbrand, Jonathan Corbet, kvm,
	linux-doc, linux-kernel, linux-s390, Paolo Bonzini,
	Sven Schnelle, Vasily Gorbik

On Fri, 11 Feb 2022 19:22:13 +0100
Janis Schoetterl-Glausch <scgl@linux.ibm.com> wrote:

> Availability of the KVM_CAP_S390_MEM_OP_EXTENSION capability signals that:
> * The vcpu MEM_OP IOCTL supports storage key checking.
> * The vm MEM_OP IOCTL exists.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Reviewed-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/kvm-s390.c | 1 +
>  include/uapi/linux/kvm.h | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 773bccdd446c..c2c26c2aad64 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -564,6 +564,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_S390_VCPU_RESETS:
>  	case KVM_CAP_SET_GUEST_DEBUG:
>  	case KVM_CAP_S390_DIAG318:
> +	case KVM_CAP_S390_MEM_OP_EXTENSION:
>  		r = 1;
>  		break;
>  	case KVM_CAP_SET_GUEST_DEBUG2:
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 4bc7623def87..08756eeea065 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1140,6 +1140,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_VM_GPA_BITS 207
>  #define KVM_CAP_XSAVE2 208
>  #define KVM_CAP_SYS_ATTRIBUTES 209
> +#define KVM_CAP_S390_MEM_OP_EXTENSION 210
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  


^ permalink raw reply	[flat|nested] 39+ messages in thread
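
Before using the new flag or the vm ioctl, user space would typically probe the
capability first; a hedged sketch using the standard KVM_CHECK_EXTENSION
mechanism (the fd variable name is invented):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Nonzero iff storage key checking and the vm MEM_OP are available. */
  static int have_mem_op_extension(int vm_fd)
  {
          return ioctl(vm_fd, KVM_CHECK_EXTENSION,
                       KVM_CAP_S390_MEM_OP_EXTENSION) > 0;
  }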

* Re: [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl
  2022-02-11 18:22 ` [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl Janis Schoetterl-Glausch
@ 2022-02-14 17:52   ` Claudio Imbrenda
  2022-02-21 14:36   ` [PATCH] KVM: s390: Clarify key argument for MEM_OP in api docs Janis Schoetterl-Glausch
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2022-02-14 17:52 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: Christian Borntraeger, Heiko Carstens, Janosch Frank,
	Alexander Gordeev, David Hildenbrand, Jonathan Corbet, kvm,
	linux-doc, linux-kernel, linux-s390, Paolo Bonzini,
	Sven Schnelle, Vasily Gorbik

On Fri, 11 Feb 2022 19:22:14 +0100
Janis Schoetterl-Glausch <scgl@linux.ibm.com> wrote:

> Document all currently existing operations and flags, and explain under
> which circumstances they are available. Document the recently
> introduced absolute operations and the storage key protection flag,
> as well as the existing SIDA operations.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Reviewed-by: Janosch Frank <frankja@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  Documentation/virt/kvm/api.rst | 112 ++++++++++++++++++++++++++-------
>  include/uapi/linux/kvm.h       |   2 +-
>  2 files changed, 91 insertions(+), 23 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index a4267104db50..48f23bb80d7f 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -3683,15 +3683,17 @@ The fields in each entry are defined as follows:
>  4.89 KVM_S390_MEM_OP
>  --------------------
>  
> -:Capability: KVM_CAP_S390_MEM_OP
> +:Capability: KVM_CAP_S390_MEM_OP, KVM_CAP_S390_PROTECTED, KVM_CAP_S390_MEM_OP_EXTENSION
>  :Architectures: s390
> -:Type: vcpu ioctl
> +:Type: vm ioctl, vcpu ioctl
>  :Parameters: struct kvm_s390_mem_op (in)
>  :Returns: = 0 on success,
>            < 0 on generic error (e.g. -EFAULT or -ENOMEM),
>            > 0 if an exception occurred while walking the page tables  
>  
> -Read or write data from/to the logical (virtual) memory of a VCPU.
> +Read or write data from/to the VM's memory.
> +The KVM_CAP_S390_MEM_OP_EXTENSION capability specifies what functionality is
> +supported.
>  
>  Parameters are specified via the following structure::
>  
> @@ -3701,33 +3703,99 @@ Parameters are specified via the following structure::
>  	__u32 size;		/* amount of bytes */
>  	__u32 op;		/* type of operation */
>  	__u64 buf;		/* buffer in userspace */
> -	__u8 ar;		/* the access register number */
> -	__u8 reserved[31];	/* should be set to 0 */
> +	union {
> +		struct {
> +			__u8 ar;	/* the access register number */
> +			__u8 key;	/* access key, ignored if flag unset */
> +		};
> +		__u32 sida_offset; /* offset into the sida */
> +		__u8 reserved[32]; /* ignored */
> +	};
>    };
>  
> -The type of operation is specified in the "op" field. It is either
> -KVM_S390_MEMOP_LOGICAL_READ for reading from logical memory space or
> -KVM_S390_MEMOP_LOGICAL_WRITE for writing to logical memory space. The
> -KVM_S390_MEMOP_F_CHECK_ONLY flag can be set in the "flags" field to check
> -whether the corresponding memory access would create an access exception
> -(without touching the data in the memory at the destination). In case an
> -access exception occurred while walking the MMU tables of the guest, the
> -ioctl returns a positive error number to indicate the type of exception.
> -This exception is also raised directly at the corresponding VCPU if the
> -flag KVM_S390_MEMOP_F_INJECT_EXCEPTION is set in the "flags" field.
> -
>  The start address of the memory region has to be specified in the "gaddr"
>  field, and the length of the region in the "size" field (which must not
>  be 0). The maximum value for "size" can be obtained by checking the
>  KVM_CAP_S390_MEM_OP capability. "buf" is the buffer supplied by the
>  userspace application where the read data should be written to for
> -KVM_S390_MEMOP_LOGICAL_READ, or where the data that should be written is
> -stored for a KVM_S390_MEMOP_LOGICAL_WRITE. When KVM_S390_MEMOP_F_CHECK_ONLY
> -is specified, "buf" is unused and can be NULL. "ar" designates the access
> -register number to be used; the valid range is 0..15.
> +a read access, or where the data that should be written is stored for
> +a write access.  The "reserved" field is meant for future extensions.
> +Reserved and unused values are ignored. Future extensions that add members must
> +introduce new flags.
> +
> +The type of operation is specified in the "op" field. Flags modifying
> +their behavior can be set in the "flags" field. Undefined flag bits must
> +be set to 0.
> +
> +Possible operations are:
> +  * ``KVM_S390_MEMOP_LOGICAL_READ``
> +  * ``KVM_S390_MEMOP_LOGICAL_WRITE``
> +  * ``KVM_S390_MEMOP_ABSOLUTE_READ``
> +  * ``KVM_S390_MEMOP_ABSOLUTE_WRITE``
> +  * ``KVM_S390_MEMOP_SIDA_READ``
> +  * ``KVM_S390_MEMOP_SIDA_WRITE``
> +
> +Logical read/write:
> +^^^^^^^^^^^^^^^^^^^
> +
> +Access logical memory, i.e. translate the given guest address to an absolute
> +address given the state of the VCPU and use the absolute address as target of
> +the access. "ar" designates the access register number to be used; the valid
> +range is 0..15.
> +Logical accesses are permitted for the VCPU ioctl only.
> +Logical accesses are permitted for non-protected guests only.
> +
> +Supported flags:
> +  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
> +  * ``KVM_S390_MEMOP_F_INJECT_EXCEPTION``
> +  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
> +
> +The KVM_S390_MEMOP_F_CHECK_ONLY flag can be set to check whether the
> +corresponding memory access would cause an access exception; however,
> +no actual access to the data in memory at the destination is performed.
> +In this case, "buf" is unused and can be NULL.
> +
> +In case an access exception occurred during the access (or would occur
> +in case of KVM_S390_MEMOP_F_CHECK_ONLY), the ioctl returns a positive
> +error number indicating the type of exception. This exception is also
> +raised directly at the corresponding VCPU if the flag
> +KVM_S390_MEMOP_F_INJECT_EXCEPTION is set.
> +
> +If the KVM_S390_MEMOP_F_SKEY_PROTECTION flag is set, storage key
> +protection is also in effect and may cause exceptions if accesses are
> +prohibited given the access key passed in "key".
> +KVM_S390_MEMOP_F_SKEY_PROTECTION is available if KVM_CAP_S390_MEM_OP_EXTENSION
> +is > 0.
> +
> +Absolute read/write:
> +^^^^^^^^^^^^^^^^^^^^
> +
> +Access absolute memory. This operation is intended to be used with the
> +KVM_S390_MEMOP_F_SKEY_PROTECTION flag, to allow accessing memory and performing
> +the checks required for storage key protection as one operation (as opposed to
> +user space getting the storage keys, performing the checks, and accessing
> +memory thereafter, which could lead to a delay between check and access).
> +Absolute accesses are permitted for the VM ioctl if KVM_CAP_S390_MEM_OP_EXTENSION
> +is > 0.
> +Currently absolute accesses are not permitted for VCPU ioctls.
> +Absolute accesses are permitted for non-protected guests only.
> +
> +Supported flags:
> +  * ``KVM_S390_MEMOP_F_CHECK_ONLY``
> +  * ``KVM_S390_MEMOP_F_SKEY_PROTECTION``
> +
> +The semantics of the flags are as for logical accesses.
> +
> +SIDA read/write:
> +^^^^^^^^^^^^^^^^
> +
> +Access the secure instruction data area which contains memory operands necessary
> +for instruction emulation for protected guests.
> +SIDA accesses are available if the KVM_CAP_S390_PROTECTED capability is available.
> +SIDA accesses are permitted for the VCPU ioctl only.
> +SIDA accesses are permitted for protected guests only.
>  
> -The "reserved" field is meant for future extensions. It is not used by
> -KVM with the currently defined set of flags.
> +No flags are supported.
>  
>  4.90 KVM_S390_GET_SKEYS
>  -----------------------
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 08756eeea065..dbc550bbd9fa 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -567,7 +567,7 @@ struct kvm_s390_mem_op {
>  			__u8 key;	/* access key, ignored if flag unset */
>  		};
>  		__u32 sida_offset; /* offset into the sida */
> -		__u8 reserved[32]; /* should be set to 0 */
> +		__u8 reserved[32]; /* ignored */
>  	};
>  };
>  /* types for kvm_s390_mem_op->op */


^ permalink raw reply	[flat|nested] 39+ messages in thread
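
To illustrate the documented interface, a sketch of a key-checked logical read
issued from user space (not from the series; the helper name and fd handling
are invented, a positive return value is the program interruption code as
described above, and -1 with errno set indicates other errors):

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>

  /* Read "size" bytes from guest logical address "gaddr" into "buf",
   * checked against the storage keys with access key "key". */
  static int vcpu_read_with_key(int vcpu_fd, uint64_t gaddr, void *buf,
                                uint32_t size, uint8_t key)
  {
          struct kvm_s390_mem_op ksmo;

          memset(&ksmo, 0, sizeof(ksmo));
          ksmo.gaddr = gaddr;
          ksmo.size = size;
          ksmo.buf = (uintptr_t)buf;
          ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
          ksmo.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION;
          ksmo.ar = 0;            /* access register 0 */
          ksmo.key = key;         /* 4-bit access key */

          return ioctl(vcpu_fd, KVM_S390_MEM_OP, &ksmo);
  }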

* Re: [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access
  2022-02-11 18:22 ` [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access Janis Schoetterl-Glausch
@ 2022-02-14 17:58   ` Claudio Imbrenda
  2022-02-21 16:32   ` [PATCH] KVM: s390: Add missing vm MEM_OP size check Janis Schoetterl-Glausch
  1 sibling, 0 replies; 39+ messages in thread
From: Claudio Imbrenda @ 2022-02-14 17:58 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch
  Cc: Christian Borntraeger, Heiko Carstens, Janosch Frank,
	Alexander Gordeev, David Hildenbrand, Jonathan Corbet, kvm,
	linux-doc, linux-kernel, linux-s390, Paolo Bonzini,
	Sven Schnelle, Vasily Gorbik

On Fri, 11 Feb 2022 19:22:11 +0100
Janis Schoetterl-Glausch <scgl@linux.ibm.com> wrote:

> Channel I/O honors storage keys and is performed on absolute memory.
> For I/O emulation user space therefore needs to be able to do key
> checked accesses.
> The vm IOCTL supports read/write accesses, as well as checking
> if an access would succeed.
> Unlike an approach relying on KVM_S390_GET_SKEYS for key checking,
> the vm IOCTL performs the check in lockstep with the read or write,
> by ultimately mapping the access to move instructions that
> support key protection checking with a supplied key.
> Fetch and storage protection override are not applicable to absolute
> accesses; unlike with the vcpu memop, they are therefore not applied.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

> ---
>  arch/s390/kvm/gaccess.c  | 72 +++++++++++++++++++++++++++++++++++
>  arch/s390/kvm/gaccess.h  |  6 +++
>  arch/s390/kvm/kvm-s390.c | 81 ++++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/kvm.h |  2 +
>  4 files changed, 161 insertions(+)
> 
> diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
> index 37838f637707..d53a183c2005 100644
> --- a/arch/s390/kvm/gaccess.c
> +++ b/arch/s390/kvm/gaccess.c
> @@ -795,6 +795,35 @@ static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
>  	return 1;
>  }
>  
> +static int vm_check_access_key(struct kvm *kvm, u8 access_key,
> +			       enum gacc_mode mode, gpa_t gpa)
> +{
> +	u8 storage_key, access_control;
> +	bool fetch_protected;
> +	unsigned long hva;
> +	int r;
> +
> +	if (access_key == 0)
> +		return 0;
> +
> +	hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
> +	if (kvm_is_error_hva(hva))
> +		return PGM_ADDRESSING;
> +
> +	mmap_read_lock(current->mm);
> +	r = get_guest_storage_key(current->mm, hva, &storage_key);
> +	mmap_read_unlock(current->mm);
> +	if (r)
> +		return r;
> +	access_control = FIELD_GET(_PAGE_ACC_BITS, storage_key);
> +	if (access_control == access_key)
> +		return 0;
> +	fetch_protected = storage_key & _PAGE_FP_BIT;
> +	if ((mode == GACC_FETCH || mode == GACC_IFETCH) && !fetch_protected)
> +		return 0;
> +	return PGM_PROTECTION;
> +}
> +
>  static bool fetch_prot_override_applicable(struct kvm_vcpu *vcpu, enum gacc_mode mode,
>  					   union asce asce)
>  {
> @@ -994,6 +1023,26 @@ access_guest_page_with_key(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
>  	return 0;
>  }
>  
> +int access_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, void *data,
> +			      unsigned long len, enum gacc_mode mode, u8 access_key)
> +{
> +	int offset = offset_in_page(gpa);
> +	int fragment_len;
> +	int rc;
> +
> +	while (min(PAGE_SIZE - offset, len) > 0) {
> +		fragment_len = min(PAGE_SIZE - offset, len);
> +		rc = access_guest_page_with_key(kvm, mode, gpa, data, fragment_len, access_key);
> +		if (rc)
> +			return rc;
> +		offset = 0;
> +		len -= fragment_len;
> +		data += fragment_len;
> +		gpa += fragment_len;
> +	}
> +	return 0;
> +}
> +
>  int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  			  void *data, unsigned long len, enum gacc_mode mode,
>  			  u8 access_key)
> @@ -1144,6 +1193,29 @@ int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>  	return rc;
>  }
>  
> +/**
> + * check_gpa_range - test a range of guest physical addresses for accessibility
> + * @kvm: virtual machine instance
> + * @gpa: guest physical address
> + * @length: length of test range
> + * @mode: access mode to test, relevant for storage keys
> + * @access_key: access key to match the storage keys with
> + */
> +int check_gpa_range(struct kvm *kvm, unsigned long gpa, unsigned long length,
> +		    enum gacc_mode mode, u8 access_key)
> +{
> +	unsigned int fragment_len;
> +	int rc = 0;
> +
> +	while (length && !rc) {
> +		fragment_len = min(PAGE_SIZE - offset_in_page(gpa), length);
> +		rc = vm_check_access_key(kvm, access_key, mode, gpa);
> +		length -= fragment_len;
> +		gpa += fragment_len;
> +	}
> +	return rc;
> +}
> +
>  /**
>   * kvm_s390_check_low_addr_prot_real - check for low-address protection
>   * @vcpu: virtual cpu
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index c5f2e7311b17..1124ff282012 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -193,6 +193,12 @@ int guest_translate_address_with_key(struct kvm_vcpu *vcpu, unsigned long gva, u
>  int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
>  		    unsigned long length, enum gacc_mode mode, u8 access_key);
>  
> +int check_gpa_range(struct kvm *kvm, unsigned long gpa, unsigned long length,
> +		    enum gacc_mode mode, u8 access_key);
> +
> +int access_guest_abs_with_key(struct kvm *kvm, gpa_t gpa, void *data,
> +			      unsigned long len, enum gacc_mode mode, u8 access_key);
> +
>  int access_guest_with_key(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>  			  void *data, unsigned long len, enum gacc_mode mode,
>  			  u8 access_key);
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c31b40abfa23..36bc73b5f5de 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2364,6 +2364,78 @@ static bool access_key_invalid(u8 access_key)
>  	return access_key > 0xf;
>  }
>  
> +static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
> +{
> +	void __user *uaddr = (void __user *)mop->buf;
> +	u64 supported_flags;
> +	void *tmpbuf = NULL;
> +	int r, srcu_idx;
> +
> +	supported_flags = KVM_S390_MEMOP_F_SKEY_PROTECTION
> +			  | KVM_S390_MEMOP_F_CHECK_ONLY;
> +	if (mop->flags & ~supported_flags)
> +		return -EINVAL;
> +	if (mop->size > MEM_OP_MAX_SIZE)
> +		return -E2BIG;
> +	if (kvm_s390_pv_is_protected(kvm))
> +		return -EINVAL;
> +	if (mop->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) {
> +		if (access_key_invalid(mop->key))
> +			return -EINVAL;
> +	} else {
> +		mop->key = 0;
> +	}
> +	if (!(mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY)) {
> +		tmpbuf = vmalloc(mop->size);
> +		if (!tmpbuf)
> +			return -ENOMEM;
> +	}
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +
> +	if (kvm_is_error_gpa(kvm, mop->gaddr)) {
> +		r = PGM_ADDRESSING;
> +		goto out_unlock;
> +	}
> +
> +	switch (mop->op) {
> +	case KVM_S390_MEMOP_ABSOLUTE_READ: {
> +		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
> +			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_FETCH, mop->key);
> +		} else {
> +			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
> +						      mop->size, GACC_FETCH, mop->key);
> +			if (r == 0) {
> +				if (copy_to_user(uaddr, tmpbuf, mop->size))
> +					r = -EFAULT;
> +			}
> +		}
> +		break;
> +	}
> +	case KVM_S390_MEMOP_ABSOLUTE_WRITE: {
> +		if (mop->flags & KVM_S390_MEMOP_F_CHECK_ONLY) {
> +			r = check_gpa_range(kvm, mop->gaddr, mop->size, GACC_STORE, mop->key);
> +		} else {
> +			if (copy_from_user(tmpbuf, uaddr, mop->size)) {
> +				r = -EFAULT;
> +				break;
> +			}
> +			r = access_guest_abs_with_key(kvm, mop->gaddr, tmpbuf,
> +						      mop->size, GACC_STORE, mop->key);
> +		}
> +		break;
> +	}
> +	default:
> +		r = -EINVAL;
> +	}
> +
> +out_unlock:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +
> +	vfree(tmpbuf);
> +	return r;
> +}
> +
>  long kvm_arch_vm_ioctl(struct file *filp,
>  		       unsigned int ioctl, unsigned long arg)
>  {
> @@ -2488,6 +2560,15 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  		}
>  		break;
>  	}
> +	case KVM_S390_MEM_OP: {
> +		struct kvm_s390_mem_op mem_op;
> +
> +		if (copy_from_user(&mem_op, argp, sizeof(mem_op)) == 0)
> +			r = kvm_s390_vm_mem_op(kvm, &mem_op);
> +		else
> +			r = -EFAULT;
> +		break;
> +	}
>  	default:
>  		r = -ENOTTY;
>  	}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 4566f429db2c..4bc7623def87 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -575,6 +575,8 @@ struct kvm_s390_mem_op {
>  #define KVM_S390_MEMOP_LOGICAL_WRITE	1
>  #define KVM_S390_MEMOP_SIDA_READ	2
>  #define KVM_S390_MEMOP_SIDA_WRITE	3
> +#define KVM_S390_MEMOP_ABSOLUTE_READ	4
> +#define KVM_S390_MEMOP_ABSOLUTE_WRITE	5
>  /* flags for kvm_s390_mem_op->flags */
>  #define KVM_S390_MEMOP_F_CHECK_ONLY		(1ULL << 0)
>  #define KVM_S390_MEMOP_F_INJECT_EXCEPTION	(1ULL << 1)


^ permalink raw reply	[flat|nested] 39+ messages in thread
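
A corresponding sketch for the new vm ioctl, here using check-only mode to
probe whether a store would be permitted (same caveats as above: names are
invented and error handling is trimmed):

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>

  /* Returns 0 if a "size"-byte store at absolute address "gaddr" would
   * succeed under access key "key", a positive s390 program interruption
   * code if not, and -1 with errno set on other errors. */
  static int vm_check_abs_store(int vm_fd, uint64_t gaddr, uint32_t size,
                                uint8_t key)
  {
          struct kvm_s390_mem_op ksmo;

          memset(&ksmo, 0, sizeof(ksmo));
          ksmo.gaddr = gaddr;
          ksmo.size = size;
          ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
          ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY |
                       KVM_S390_MEMOP_F_SKEY_PROTECTION;
          ksmo.key = key;

          return ioctl(vm_fd, KVM_S390_MEM_OP, &ksmo);
  }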

* (no subject)
  2022-02-11 18:22 ` [PATCH v4 10/10] KVM: s390: selftests: Test memops with storage keys Janis Schoetterl-Glausch
@ 2022-02-17 14:53   ` Janis Schoetterl-Glausch
  2022-02-17 14:53     ` [PATCH 1/2] KVM: s390: selftests: Refactor memop test Janis Schoetterl-Glausch
  2022-02-17 14:53     ` [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys Janis Schoetterl-Glausch
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
  1 sibling, 2 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-17 14:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Subject: [PATCH 0/2] memop selftest for storage key checking

As previously mentioned, I rewrote the memop selftest.
It makes heavy use of macros, but that should be fine for a test.
Feedback appreciated.
Requires Thomas' recent SIDA memop selftest patch.

Janis Schoetterl-Glausch (2):
  KVM: s390: selftests: Refactor memop test
  KVM: s390: selftests: Test vm and vcpu memop with keys

 tools/testing/selftests/kvm/s390x/memop.c | 741 ++++++++++++++++++----
 1 file changed, 623 insertions(+), 118 deletions(-)

-- 
2.32.0


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH 1/2] KVM: s390: selftests: Refactor memop test
  2022-02-17 14:53   ` Janis Schoetterl-Glausch
@ 2022-02-17 14:53     ` Janis Schoetterl-Glausch
  2022-02-17 17:36       ` Shuah Khan
  2022-02-17 14:53     ` [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys Janis Schoetterl-Glausch
  1 sibling, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-17 14:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Introduce a macro for performing MEM_OP ioctls in a concise way.
Split test cases into multiple host/guest pairs making them independent.
Make various minor improvements.
All in all this lays the groundwork for future extensions.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 427 ++++++++++++++++------
 1 file changed, 309 insertions(+), 118 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index d19c3ffdea3f..4510418d73e6 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -13,169 +13,360 @@
 #include "test_util.h"
 #include "kvm_util.h"
 
+enum mop_target {
+	LOGICAL,
+	SIDA,
+	ABSOLUTE,
+	INVALID,
+};
+
+enum mop_access_mode {
+	READ,
+	WRITE,
+};
+
+struct mop_desc {
+	uintptr_t gaddr;
+	uintptr_t gaddr_v;
+	uint64_t set_flags;
+	unsigned int f_check : 1;
+	unsigned int f_inject : 1;
+	unsigned int f_key : 1;
+	unsigned int _gaddr_v : 1;
+	unsigned int _set_flags : 1;
+	unsigned int _sida_offset : 1;
+	unsigned int _ar : 1;
+	uint32_t size;
+	enum mop_target target;
+	enum mop_access_mode mode;
+	void *buf;
+	uint32_t sida_offset;
+	uint8_t ar;
+	uint8_t key;
+};
+
+static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc desc)
+{
+	struct kvm_s390_mem_op ksmo = {
+		.gaddr = (uintptr_t)desc.gaddr,
+		.size = desc.size,
+		.buf = ((uintptr_t)desc.buf),
+		.reserved = "ignored_ignored_ignored_ignored"
+	};
+
+	switch (desc.target) {
+	case LOGICAL:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
+		break;
+	case SIDA:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_SIDA_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_SIDA_WRITE;
+		break;
+	case ABSOLUTE:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_ABSOLUTE_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
+		break;
+	case INVALID:
+		ksmo.op = -1;
+	}
+	if (desc.f_check)
+		ksmo.flags |= KVM_S390_MEMOP_F_CHECK_ONLY;
+	if (desc.f_inject)
+		ksmo.flags |= KVM_S390_MEMOP_F_INJECT_EXCEPTION;
+	if (desc._set_flags)
+		ksmo.flags = desc.set_flags;
+	if (desc.f_key) {
+		ksmo.flags |= KVM_S390_MEMOP_F_SKEY_PROTECTION;
+		ksmo.key = desc.key;
+	}
+	if (desc._ar)
+		ksmo.ar = desc.ar;
+	else
+		ksmo.ar = 0;
+	if (desc._sida_offset)
+		ksmo.sida_offset = desc.sida_offset;
+
+	return ksmo;
+}
+
+/* dummy vcpu id signifying that the vm ioctl, not the vcpu ioctl, is to be used */
+const uint32_t VM_VCPU_ID = (uint32_t)-1;
+
+struct test_vcpu {
+	struct kvm_vm *vm;
+	uint32_t id;
+};
+
+#define PRINT_MEMOP false
+static void print_memop(uint32_t vcpu_id, const struct kvm_s390_mem_op *ksmo)
+{
+	if (!PRINT_MEMOP)
+		return;
+
+	if (vcpu_id == VM_VCPU_ID)
+		printf("vm memop(");
+	else
+		printf("vcpu memop(");
+	switch (ksmo->op) {
+	case KVM_S390_MEMOP_LOGICAL_READ:
+		printf("LOGICAL, READ, ");
+		break;
+	case KVM_S390_MEMOP_LOGICAL_WRITE:
+		printf("LOGICAL, WRITE, ");
+		break;
+	case KVM_S390_MEMOP_SIDA_READ:
+		printf("SIDA, READ, ");
+		break;
+	case KVM_S390_MEMOP_SIDA_WRITE:
+		printf("SIDA, WRITE, ");
+		break;
+	case KVM_S390_MEMOP_ABSOLUTE_READ:
+		printf("ABSOLUTE, READ, ");
+		break;
+	case KVM_S390_MEMOP_ABSOLUTE_WRITE:
+		printf("ABSOLUTE, WRITE, ");
+		break;
+	}
+	printf("gaddr=%llu, size=%u, buf=%llu, ar=%u, key=%u",
+	       ksmo->gaddr, ksmo->size, ksmo->buf, ksmo->ar, ksmo->key);
+	if (ksmo->flags & KVM_S390_MEMOP_F_CHECK_ONLY)
+		printf(", CHECK_ONLY");
+	if (ksmo->flags & KVM_S390_MEMOP_F_INJECT_EXCEPTION)
+		printf(", INJECT_EXCEPTION");
+	if (ksmo->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION)
+		printf(", SKEY_PROTECTION");
+	puts(")");
+}
+
+static void memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
+{
+	if (vcpu.id == VM_VCPU_ID)
+		vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo);
+	else
+		vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo);
+}
+
+static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
+{
+	if (vcpu.id == VM_VCPU_ID)
+		return _vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo);
+	else
+		return _vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo);
+}
+
+#define MEMOP(err, vcpu_p, mop_target_p, access_mode_p, buf_p, size_p, ...)	\
+({										\
+	struct test_vcpu __vcpu = (vcpu_p);					\
+	struct mop_desc __desc = {						\
+		.target = (mop_target_p),					\
+		.mode = (access_mode_p),					\
+		.buf = (buf_p),							\
+		.size = (size_p),						\
+		__VA_ARGS__							\
+	};									\
+	struct kvm_s390_mem_op __ksmo;						\
+										\
+	if (__desc._gaddr_v) {							\
+		if (__desc.target == ABSOLUTE)					\
+			__desc.gaddr = addr_gva2gpa(__vcpu.vm, __desc.gaddr_v);	\
+		else								\
+			__desc.gaddr = __desc.gaddr_v;				\
+	}									\
+	__ksmo = ksmo_from_desc(__desc);					\
+	print_memop(__vcpu.id, &__ksmo);					\
+	err##memop_ioctl(__vcpu, &__ksmo);					\
+})
+
+#define MOP(...) MEMOP(, __VA_ARGS__)
+#define ERR_MOP(...) MEMOP(err_, __VA_ARGS__)
+
+#define GADDR(a) .gaddr = ((uintptr_t)a)
+#define GADDR_V(v) ._gaddr_v = 1, .gaddr_v = ((uintptr_t)v)
+#define CHECK_ONLY .f_check = 1
+#define SET_FLAGS(f) ._set_flags = 1, .set_flags = (f)
+#define SIDA_OFFSET(o) ._sida_offset = 1, .sida_offset = (o)
+#define AR(a) ._ar = 1, .ar = (a)
+#define KEY(a) .f_key = 1, .key = (a)
+
+#define CHECK_N_DO(f, ...) ({ f(__VA_ARGS__, CHECK_ONLY); f(__VA_ARGS__); })
+
 #define VCPU_ID 1
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1ULL << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+
+#define ASSERT_MEM_EQ(p1, p2, size) \
+	TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
 
 static uint8_t mem1[65536];
 static uint8_t mem2[65536];
 
-static void guest_code(void)
+static void prepare_mem12(void)
 {
 	int i;
 
-	for (;;) {
-		for (i = 0; i < sizeof(mem2); i++)
-			mem2[i] = mem1[i];
-		GUEST_SYNC(0);
-	}
+	for (i = 0; i < sizeof(mem1); i++)
+		mem1[i] = rand();
+	memset(mem2, 0xaa, sizeof(mem2));
 }
 
-int main(int argc, char *argv[])
-{
-	struct kvm_vm *vm;
+struct test_default {
+	struct test_vcpu vm;
+	struct test_vcpu vcpu;
 	struct kvm_run *run;
-	struct kvm_s390_mem_op ksmo;
-	int rv, i, maxsize;
+	int size;
+};
 
-	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
+static struct test_default test_default_init(void *guest_code)
+{
+	struct test_default t;
 
-	maxsize = kvm_check_cap(KVM_CAP_S390_MEM_OP);
-	if (!maxsize) {
-		print_skip("CAP_S390_MEM_OP not supported");
-		exit(KSFT_SKIP);
-	}
-	if (maxsize > sizeof(mem1))
-		maxsize = sizeof(mem1);
+	t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1));
+	t.vm = (struct test_vcpu) { vm_create_default(VCPU_ID, 0, guest_code), VM_VCPU_ID };
+	t.vcpu = (struct test_vcpu) { t.vm.vm, VCPU_ID };
+	t.run = vcpu_state(t.vm.vm, VCPU_ID);
+	return t;
+}
 
-	/* Create VM */
-	vm = vm_create_default(VCPU_ID, 0, guest_code);
-	run = vcpu_state(vm, VCPU_ID);
+static void test_vm_free(struct test_vcpu vm)
+{
+	kvm_vm_free(vm.vm);
+}
 
-	for (i = 0; i < sizeof(mem1); i++)
-		mem1[i] = i * i + i;
-
-	/* Set the first array */
-	ksmo.gaddr = addr_gva2gpa(vm, (uintptr_t)mem1);
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
-
-	/* Let the guest code copy the first array to the second */
-	vcpu_run(vm, VCPU_ID);
-	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
-		    "Unexpected exit reason: %u (%s)\n",
-		    run->exit_reason,
-		    exit_reason_str(run->exit_reason));
+#define HOST_SYNC(vcpu_p, stage)					\
+({									\
+	struct test_vcpu __vcpu = (vcpu_p);				\
+	struct ucall uc;						\
+	int __stage = (stage);						\
+									\
+	vcpu_run(__vcpu.vm, __vcpu.id);					\
+	get_ucall(__vcpu.vm, __vcpu.id, &uc);				\
+	ASSERT_EQ(uc.cmd, UCALL_SYNC);					\
+	ASSERT_EQ(uc.args[1], __stage);					\
+})									\
 
-	memset(mem2, 0xaa, sizeof(mem2));
+enum stage {
+	/* Synced state set by host, e.g. DAT */
+	STAGE_INITED,
+	/* Guest did nothing */
+	STAGE_IDLED,
+	/* Guest copied memory (which locations is up to the test case) */
+	STAGE_COPIED,
+};
+
+#define DEFAULT_WRITE_READ(copy_cpu, mop_cpu, mop_target_p, size, ...)		\
+({										\
+	struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu);	\
+	enum mop_target __target = (mop_target_p);				\
+	uint32_t __size = (size);						\
+										\
+	prepare_mem12();							\
+	CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size,		\
+			GADDR_V(mem1), ##__VA_ARGS__);				\
+	HOST_SYNC(__copy_cpu, STAGE_COPIED);					\
+	CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size,		\
+			GADDR_V(mem2), ##__VA_ARGS__);				\
+	ASSERT_MEM_EQ(mem1, mem2, __size);					\
+})
+
+static void guest_copy(void)
+{
+	GUEST_SYNC(STAGE_INITED);
+	memcpy(&mem2, &mem1, sizeof(mem2));
+	GUEST_SYNC(STAGE_COPIED);
+}
+
+static void test_copy(void)
+{
+	struct test_default t = test_default_init(guest_copy);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size);
+
+	test_vm_free(t.vm);
+}
 
-	/* Get the second array */
-	ksmo.gaddr = (uintptr_t)mem2;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
-	ksmo.buf = (uintptr_t)mem2;
-	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
-
-	TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
-		    "Memory contents do not match!");
-
-	/* Check error conditions - first bad size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = -1;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+static void guest_idle(void)
+{
+	GUEST_SYNC(STAGE_INITED);
+	for (;;)
+		GUEST_SYNC(STAGE_IDLED);
+}
+
+static void test_errors(void)
+{
+	struct test_default t = test_default_init(guest_idle);
+	int rv;
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = 0;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = -1;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
 	/* Bad operation: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = -1;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
 
 	/* Bad guest address: */
-	ksmo.gaddr = ~0xfffUL;
-	ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY);
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = 0;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
 	/* Bad access register: */
-	run->psw_mask &= ~(3UL << (63 - 17));
-	run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
-	vcpu_run(vm, VCPU_ID);              /* To sync new state to SIE block */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = maxsize;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 17;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	t.run->psw_mask &= ~(3UL << (63 - 17));
+	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
+	HOST_SYNC(t.vcpu, STAGE_IDLED);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR(mem1), AR(17));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15");
-	run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
-	vcpu_run(vm, VCPU_ID);                  /* Run to sync new state */
+	t.run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
+	HOST_SYNC(t.vcpu, STAGE_IDLED);
 
 	/* Check that the SIDA calls are rejected for non-protected guests */
-	ksmo.gaddr = 0;
-	ksmo.flags = 0;
-	ksmo.size = 8;
-	ksmo.op = KVM_S390_MEMOP_SIDA_READ;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.sida_offset = 0x1c0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_READ in non-protected mode");
-	ksmo.op = KVM_S390_MEMOP_SIDA_WRITE;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, SIDA, WRITE, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_WRITE in non-protected mode");
 
-	kvm_vm_free(vm);
+	test_vm_free(t.vm);
+}
+
+int main(int argc, char *argv[])
+{
+	int memop_cap;
+
+	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
+
+	memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP);
+	if (!memop_cap) {
+		print_skip("CAP_S390_MEM_OP not supported");
+		exit(KSFT_SKIP);
+	}
+
+	test_copy();
+	test_errors();
 
 	return 0;
 }
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys
  2022-02-17 14:53   ` Janis Schoetterl-Glausch
  2022-02-17 14:53     ` [PATCH 1/2] KVM: s390: selftests: Refactor memop test Janis Schoetterl-Glausch
@ 2022-02-17 14:53     ` Janis Schoetterl-Glausch
  2022-02-17 17:54       ` Shuah Khan
  1 sibling, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-17 14:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Test storage key checking for both vm and vcpu MEM_OP ioctls.
Test both error and non error conditions.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 342 +++++++++++++++++++++-
 1 file changed, 328 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index 4510418d73e6..bc12a9238967 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -201,6 +201,8 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
 #define PAGE_SHIFT 12
 #define PAGE_SIZE (1ULL << PAGE_SHIFT)
 #define PAGE_MASK (~(PAGE_SIZE - 1))
+#define CR0_FETCH_PROTECTION_OVERRIDE	(1UL << (63 - 38))
+#define CR0_STORAGE_PROTECTION_OVERRIDE	(1UL << (63 - 39))
 
 #define ASSERT_MEM_EQ(p1, p2, size) \
 	TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
@@ -235,6 +237,11 @@ static struct test_default test_default_init(void *guest_code)
 	return t;
 }
 
+static vm_vaddr_t test_vaddr_alloc(struct test_vcpu vm, size_t size, vm_vaddr_t vaddr_min)
+{
+	return vm_vaddr_alloc(vm.vm, size, vaddr_min);
+}
+
 static void test_vm_free(struct test_vcpu vm)
 {
 	kvm_vm_free(vm.vm);
@@ -257,6 +264,8 @@ enum stage {
 	STAGE_INITED,
 	/* Guest did nothing */
 	STAGE_IDLED,
+	/* Guest set storage keys (specifics up to test case) */
+	STAGE_SKEYS_SET,
 	/* Guest copied memory (locations up to test case) */
 	STAGE_COPIED,
 };
@@ -276,6 +285,20 @@ enum stage {
 	ASSERT_MEM_EQ(mem1, mem2, __size);					\
 })
 
+#define DEFAULT_READ(copy_cpu, mop_cpu, mop_target_p, size, ...)		\
+({										\
+	struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu);	\
+	enum mop_target __target = (mop_target_p);				\
+	uint32_t __size = (size);						\
+										\
+	prepare_mem12();							\
+	CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size,		\
+			GADDR_V(mem1));						\
+	HOST_SYNC(__copy_cpu, STAGE_COPIED);					\
+	CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size, ##__VA_ARGS__);\
+	ASSERT_MEM_EQ(mem1, mem2, __size);					\
+})
+
 static void guest_copy(void)
 {
 	GUEST_SYNC(STAGE_INITED);
@@ -294,6 +317,269 @@ static void test_copy(void)
 	test_vm_free(t.vm);
 }
 
+static void set_storage_key_range(void *addr, size_t len, uint8_t key)
+{
+	uintptr_t _addr, abs, i;
+	int not_mapped = 0;
+
+	_addr = (uintptr_t)addr;
+	for (i = _addr & PAGE_MASK; i < _addr + len; i += PAGE_SIZE) {
+		abs = i;
+		asm volatile (
+			       "lra	%[abs], 0(0,%[abs])\n"
+			"	jz	0f\n"
+			"	llill	%[not_mapped],1\n"
+			"	j	1f\n"
+			"0:	sske	%[key], %[abs]\n"
+			"1:"
+			: [abs] "+&a" (abs), [not_mapped] "+r" (not_mapped)
+			: [key] "r" (key)
+			: "cc"
+		);
+		GUEST_ASSERT_EQ(not_mapped, 0);
+	}
+}
+
+static void guest_copy_key(void)
+{
+	set_storage_key_range(mem1, sizeof(mem1), 0x90);
+	set_storage_key_range(mem2, sizeof(mem2), 0x90);
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		memcpy(&mem2, &mem1, sizeof(mem2));
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key(void)
+{
+	struct test_default t = test_default_init(guest_copy_key);
+
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm, no key */
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size);
+
+	/* vm/vcpu, matching key or key 0 */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(9));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(9));
+	/*
+	 * There used to be different code paths for key handling depending on
+	 * whether the region crossed a page boundary.
+	 * There currently are none, but the more tests the merrier.
+	 */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(9));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(9));
+
+	/* vm/vcpu, mismatching keys on read, but no fetch protection */
+	DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(2));
+	DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem1), KEY(2));
+
+	test_vm_free(t.vm);
+}
+
+static void guest_copy_key_fetch_prot(void)
+{
+	/*
+	 * For some reason combining the first sync with override enablement
+	 * results in an exception when calling HOST_SYNC.
+	 */
+	GUEST_SYNC(STAGE_INITED);
+	/* Storage protection override applies to both store and fetch. */
+	set_storage_key_range(mem1, sizeof(mem1), 0x98);
+	set_storage_key_range(mem2, sizeof(mem2), 0x98);
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		memcpy(&mem2, &mem1, sizeof(mem2));
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key_storage_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vcpu, mismatching keys, storage protection override in effect */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(2));
+
+	test_vm_free(t.vm);
+}
+
+static void test_copy_key_fetch_prot(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm/vcpu, matching key, fetch protection in effect */
+	DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(9));
+	DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem2), KEY(9));
+
+	test_vm_free(t.vm);
+}
+
+#define ERR_PROT_MOP(...)							\
+({										\
+	int rv;									\
+										\
+	rv = ERR_MOP(__VA_ARGS__);						\
+	TEST_ASSERT(rv == 4, "Should result in protection exception");		\
+})
+
+static void test_errors_key(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm/vcpu, mismatching keys, fetch protection in effect */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+
+	test_vm_free(t.vm);
+}
+
+static void test_errors_key_storage_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm, mismatching keys, storage protection override not applicable to vm */
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+
+	test_vm_free(t.vm);
+}
+
+const uint64_t last_page_addr = -PAGE_SIZE;
+
+static void guest_copy_key_fetch_prot_override(void)
+{
+	int i;
+	char *page_0 = 0;
+
+	GUEST_SYNC(STAGE_INITED);
+	set_storage_key_range(0, PAGE_SIZE, 0x18);
+	set_storage_key_range((void *)last_page_addr, PAGE_SIZE, 0x0);
+	asm volatile ("sske %[key],%[addr]\n" :: [addr] "r"(0), [key] "r"(0x18) : "cc");
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		for (i = 0; i < PAGE_SIZE; i++)
+			page_0[i] = mem1[i];
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key_fetch_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = test_vaddr_alloc(t.vm, PAGE_SIZE, 0);
+	guest_last_page = test_vaddr_alloc(t.vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vcpu, mismatching keys on fetch, fetch protection override applies */
+	prepare_mem12();
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, PAGE_SIZE, GADDR_V(mem1));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2));
+	ASSERT_MEM_EQ(mem1, mem2, 2048);
+
+	/*
+	 * vcpu, mismatching keys on fetch, fetch protection override applies,
+	 * wraparound
+	 */
+	prepare_mem12();
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, 2 * PAGE_SIZE, GADDR_V(guest_last_page));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048,
+			GADDR_V(guest_last_page), KEY(2));
+	ASSERT_MEM_EQ(mem1, mem2, 2048);
+
+out:
+	test_vm_free(t.vm);
+}
+
+static void test_errors_key_fetch_prot_override_not_enabled(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = test_vaddr_alloc(t.vm, PAGE_SIZE, 0);
+	guest_last_page = test_vaddr_alloc(t.vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vcpu, mismatching keys on fetch, fetch protection override not enabled */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(0), KEY(2));
+
+out:
+	test_vm_free(t.vm);
+}
+
+static void test_errors_key_fetch_prot_override_enabled(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = test_vaddr_alloc(t.vm, PAGE_SIZE, 0);
+	guest_last_page = test_vaddr_alloc(t.vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/*
+	 * vcpu, mismatching keys on fetch,
+	 * fetch protection override does not apply because the memory range is exceeded
+	 */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048 + 1, GADDR_V(0), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048 + 1,
+				 GADDR_V(guest_last_page), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR(0), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2));
+
+out:
+	test_vm_free(t.vm);
+}
+
 static void guest_idle(void)
 {
 	GUEST_SYNC(STAGE_INITED);
@@ -301,38 +587,53 @@ static void guest_idle(void)
 		GUEST_SYNC(STAGE_IDLED);
 }
 
-static void test_errors(void)
+static void _test_errors_common(struct test_vcpu vcpu, enum mop_target target, int size)
 {
-	struct test_default t = test_default_init(guest_idle);
 	int rv;
 
-	HOST_SYNC(t.vcpu, STAGE_INITED);
-
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, -1, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, 0, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), SET_FLAGS(-1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
-	/* Bad operation: */
-	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
-	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
-
 	/* Bad guest address: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY);
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR((void *)~0xfffUL), CHECK_ONLY);
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, 0, size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
+	/* Bad key: */
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), KEY(17));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows invalid key");
+}
+
+static void test_errors(void)
+{
+	struct test_default t = test_default_init(guest_idle);
+	int rv;
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	_test_errors_common(t.vcpu, LOGICAL, t.size);
+	_test_errors_common(t.vm, ABSOLUTE, t.size);
+
+	/* Bad operation: */
+	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
+	/* virtual addresses are not translated when passing INVALID */
+	rv = ERR_MOP(t.vm, INVALID, WRITE, mem1, PAGE_SIZE, GADDR(0));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
+
 	/* Bad access register: */
 	t.run->psw_mask &= ~(3UL << (63 - 17));
 	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
@@ -355,17 +656,30 @@ static void test_errors(void)
 
 int main(int argc, char *argv[])
 {
-	int memop_cap;
+	int memop_cap, extension_cap;
 
 	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
 
 	memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP);
+	extension_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP_EXTENSION);
 	if (!memop_cap) {
 		print_skip("CAP_S390_MEM_OP not supported");
 		exit(KSFT_SKIP);
 	}
 
 	test_copy();
+	if (extension_cap > 0) {
+		test_copy_key();
+		test_copy_key_storage_prot_override();
+		test_copy_key_fetch_prot();
+		test_copy_key_fetch_prot_override();
+		test_errors_key();
+		test_errors_key_storage_prot_override();
+		test_errors_key_fetch_prot_override_not_enabled();
+		test_errors_key_fetch_prot_override_enabled();
+	} else {
+		print_skip("storage key memop extension not supported");
+	}
 	test_errors();
 
 	return 0;
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/2] KVM: s390: selftests: Refactor memop test
  2022-02-17 14:53     ` [PATCH 1/2] KVM: s390: selftests: Refactor memop test Janis Schoetterl-Glausch
@ 2022-02-17 17:36       ` Shuah Khan
  2022-02-18 12:09         ` Janis Schoetterl-Glausch
  0 siblings, 1 reply; 39+ messages in thread
From: Shuah Khan @ 2022-02-17 17:36 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest,
	linux-kernel, Shuah Khan

On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
> Introduce macro for performing MEM_OP ioctls in a concise way.

How does this help? What is the value in re-writing existing
code and turning it into a macro?


> Split test cases into multiple host/guest pairs making them independent.

This is a good change.

> Make various minor improvements.
> All in all this lays the groundwork for future extensions.

Also good if these changes make it easier to add tests. Would be helpful
to know the details of the groundwork.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys
  2022-02-17 14:53     ` [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys Janis Schoetterl-Glausch
@ 2022-02-17 17:54       ` Shuah Khan
  2022-02-18 12:14         ` Janis Schoetterl-Glausch
  0 siblings, 1 reply; 39+ messages in thread
From: Shuah Khan @ 2022-02-17 17:54 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest,
	linux-kernel, Shuah Khan

On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
> Test storage key checking for both vm and vcpu MEM_OP ioctls.
> Test both error and non error conditions.
> 

This patch seems to combine restructuring the code and new code,
e.g. test_errors() was added in the last patch, only to be redone
in this patch with test_errors split into test_common_errors().

Doing the restructuring in a separate patch and then adding new code
makes it easier to review and also keeps the patches simpler.

Please split the code in these two patches to just do the restructuring
and then add new code.

I also would like to have good reasons to change existing code and
turn it into macros.
  
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> ---
>   tools/testing/selftests/kvm/s390x/memop.c | 342 +++++++++++++++++++++-
>   1 file changed, 328 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
> index 4510418d73e6..bc12a9238967 100644
> --- a/tools/testing/selftests/kvm/s390x/memop.c
> +++ b/tools/testing/selftests/kvm/s390x/memop.c
> @@ -201,6 +201,8 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
>   #define PAGE_SHIFT 12
>   #define PAGE_SIZE (1ULL << PAGE_SHIFT)
>   #define PAGE_MASK (~(PAGE_SIZE - 1))
> +#define CR0_FETCH_PROTECTION_OVERRIDE	(1UL << (63 - 38))
> +#define CR0_STORAGE_PROTECTION_OVERRIDE	(1UL << (63 - 39))
>   
>   #define ASSERT_MEM_EQ(p1, p2, size) \
>   	TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
> @@ -235,6 +237,11 @@ static struct test_default test_default_init(void *guest_code)
>   	return t;
>   }
>   
> +static vm_vaddr_t test_vaddr_alloc(struct test_vcpu vm, size_t size, vm_vaddr_t vaddr_min)
> +{
> +	return vm_vaddr_alloc(vm.vm, size, vaddr_min);
> +}
> +

What is the value of adding a new routine that simply calls another?
Do you see this routine changing in the future to do more?

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/2] KVM: s390: selftests: Refactor memop test
  2022-02-17 17:36       ` Shuah Khan
@ 2022-02-18 12:09         ` Janis Schoetterl-Glausch
  2022-02-18 22:14           ` Shuah Khan
  0 siblings, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-18 12:09 UTC (permalink / raw)
  To: Shuah Khan, Christian Borntraeger, Janosch Frank, Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest, linux-kernel

On 2/17/22 18:36, Shuah Khan wrote:
> On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
>> Introduce macro for performing MEM_OP ioctls in a concise way.
> 
> How does this help? What is the value in re-writing existing
> code and turning it into a macro?

I want invocations of the ioctl to be independent of each other, so the reader does not
have to keep track of the state of the struct kvm_s390_mem_op.

So you have to specify all arguments manually like so, which is rather noisy and makes it
hard to see what the relevant parameter is:

ksmo.gaddr = guest_mem1;
ksmo.flags = 0;
ksmo.size = maxsize;
ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
ksmo.buf = (uintptr_t)mem1;
ksmo.ar = 17;
rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);

Or you introduce an abstraction.
Previously I used lots of functions with repeated code which got chaotic.
I decided on the macro because it's more flexible, e.g. you don't have to pass default args.
For example, there is only one test that passes the access register arg, so you would want
to default it to 0 for all other tests.
For the access key argument you need to pass both a flag and the key itself, so you'd probably
get rid of this redundancy also.
There also might be future extensions of the ioctl that work the same way
(not 100% but not purely theoretical either).

With the macro all that is orthogonal: you just pass the arguments you need, or you don't.
With functions you'd maybe add a memop_key() variant and a _ar() variant and a _key_ar()
variant if you need it (currently not necessary), doubling the number of functions with
each additional argument. Another example is GADDR_V and GADDR: the first takes care of
translating the address to a physical one, but sometimes you need to pass it untranslated,
and we need to combine that with passing a key or not.

A big improvement was making the target of the ioctl (vm/vcpu) and the operation into
arguments instead of baking them into the function. Since they're mandatory arguments this is
independent of the macro vs functions question.

In the end there are multiple independent but interacting improvements and it is kinda
hard to make the call on how far to go along one dimension, e.g. I was unsure whether I
wanted to introduce the DEFAULT_READ macro, but decided in favor of it, since, as a reviewer,
you can see that it executes the same code with different arguments, instead of trying
to identify the difference between 5 copy-pasted and modified lines of code. On the other
hand you have the cost of introducing an indirection.
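
For comparison, this is how the noisy example above reads with the macro
(this exact invocation appears in the refactored error test):

rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR(mem1), AR(17));

Only the non-default arguments are spelled out: GADDR passes the address
untranslated, AR(17) selects the access register, and everything else
stays zero.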
> 
> 
>> Split test cases into multiple host/guest pairs making them independent.
> 
> This is a good change.
> 
>> Make various minor improvements.
>> All in all this lays the groundwork for future extensions.
> 
> Also good if these changes make it easier to add test. Would be helpful
> to know the details of the groundwork.

Yeah I'm not too happy about the commit descriptions.
I was unsure how to structure the patches, since the new tests motivate
the restructuring, e.g. if I put the _test_errors_common in the first patch
it's kinda weird since at that stage there is no commonality at all.
I ended up moving stuff around, and since I'm not quite sure about stuff like the DEFAULT_
macros, I left the description kinda vague.

Probably should have linked to the series this is a reply to, since linux-kselftest wasn't on cc:
https://lore.kernel.org/kvm/20220211182215.2730017-1-scgl@linux.ibm.com/

> 
> thanks,
> -- Shuah


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys
  2022-02-17 17:54       ` Shuah Khan
@ 2022-02-18 12:14         ` Janis Schoetterl-Glausch
  2022-02-18 21:14           ` Shuah Khan
  0 siblings, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-18 12:14 UTC (permalink / raw)
  To: Shuah Khan, Christian Borntraeger, Janosch Frank, Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest, linux-kernel

On 2/17/22 18:54, Shuah Khan wrote:
> On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
>> Test storage key checking for both vm and vcpu MEM_OP ioctls.
>> Test both error and non error conditions.
>>
> 
> This patch seems to combine restructuring the code and new code,
> e.g. test_errors() was added in the last patch, only to be redone
> in this patch with test_errors split into test_common_errors().
> 
> Doing the restructuring in a separate patch and then adding new code
> makes it easier to review and also keeps the patches simpler.
> 
> Please split the code in these two patches to just do the restructuring
> and then add new code.
> 
> I also would like to have good reasons to change existing code and
> turn it into macros.
>  
>> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
>> ---
>>   tools/testing/selftests/kvm/s390x/memop.c | 342 +++++++++++++++++++++-
>>   1 file changed, 328 insertions(+), 14 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
>> index 4510418d73e6..bc12a9238967 100644
>> --- a/tools/testing/selftests/kvm/s390x/memop.c
>> +++ b/tools/testing/selftests/kvm/s390x/memop.c
>> @@ -201,6 +201,8 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
>>   #define PAGE_SHIFT 12
>>   #define PAGE_SIZE (1ULL << PAGE_SHIFT)
>>   #define PAGE_MASK (~(PAGE_SIZE - 1))
>> +#define CR0_FETCH_PROTECTION_OVERRIDE    (1UL << (63 - 38))
>> +#define CR0_STORAGE_PROTECTION_OVERRIDE    (1UL << (63 - 39))
>>     #define ASSERT_MEM_EQ(p1, p2, size) \
>>       TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
>> @@ -235,6 +237,11 @@ static struct test_default test_default_init(void *guest_code)
>>       return t;
>>   }
>>   +static vm_vaddr_t test_vaddr_alloc(struct test_vcpu vm, size_t size, vm_vaddr_t vaddr_min)
>> +{
>> +    return vm_vaddr_alloc(vm.vm, size, vaddr_min);
>> +}
>> +
> 
> What is the value of adding a new routine that simply calls another?

I just found the vm.vm confusing/ugly and wanted to hide it;
I'm not married to that idea, though.

> Do you see this routine changing in the future to do more?

No.
> 
> thanks,
> -- Shuah


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 2/2] KVM: s390: selftests: Test vm and vcpu memop with keys
  2022-02-18 12:14         ` Janis Schoetterl-Glausch
@ 2022-02-18 21:14           ` Shuah Khan
  0 siblings, 0 replies; 39+ messages in thread
From: Shuah Khan @ 2022-02-18 21:14 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest,
	linux-kernel, Shuah Khan

On 2/18/22 5:14 AM, Janis Schoetterl-Glausch wrote:
> On 2/17/22 18:54, Shuah Khan wrote:
>> On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
>>> Test storage key checking for both vm and vcpu MEM_OP ioctls.
>>> Test both error and non error conditions.
>>>
>>
>> This patch seems to combine restructuring the code and new code,
>> e.g. test_errors() was added in the last patch, only to be redone
>> in this patch with test_errors split into test_common_errors().
>>
>> Doing the restructuring in a separate patch and then adding new code
>> makes it easier to review and also keeps the patches simpler.
>>
>> Please split the code in these two patches to just do the restructuring
>> and then add new code.
>>
>> I also would like to have good reasons to change existing code and
>> turn it into macros.
>>   
>>> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
>>> ---
>>>    tools/testing/selftests/kvm/s390x/memop.c | 342 +++++++++++++++++++++-
>>>    1 file changed, 328 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
>>> index 4510418d73e6..bc12a9238967 100644
>>> --- a/tools/testing/selftests/kvm/s390x/memop.c
>>> +++ b/tools/testing/selftests/kvm/s390x/memop.c
>>> @@ -201,6 +201,8 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
>>>    #define PAGE_SHIFT 12
>>>    #define PAGE_SIZE (1ULL << PAGE_SHIFT)
>>>    #define PAGE_MASK (~(PAGE_SIZE - 1))
>>> +#define CR0_FETCH_PROTECTION_OVERRIDE    (1UL << (63 - 38))
>>> +#define CR0_STORAGE_PROTECTION_OVERRIDE    (1UL << (63 - 39))
>>>      #define ASSERT_MEM_EQ(p1, p2, size) \
>>>        TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
>>> @@ -235,6 +237,11 @@ static struct test_default test_default_init(void *guest_code)
>>>        return t;
>>>    }
>>>    +static vm_vaddr_t test_vaddr_alloc(struct test_vcpu vm, size_t size, vm_vaddr_t vaddr_min)
>>> +{
>>> +    return vm_vaddr_alloc(vm.vm, size, vaddr_min);
>>> +}
>>> +
>>
>> What is the value of adding a new routine that simply calls another?
> 
> I just found the vm.vm confusing/ugly and wanted to hide it;
> I'm not married to that idea, though.
> 
>> Do you see this routine changing in the future to do more?
> 
> No.
>>


Let's drop the new routine then.

thanks,
-- Shuah


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 1/2] KVM: s390: selftests: Refactor memop test
  2022-02-18 12:09         ` Janis Schoetterl-Glausch
@ 2022-02-18 22:14           ` Shuah Khan
  0 siblings, 0 replies; 39+ messages in thread
From: Shuah Khan @ 2022-02-18 22:14 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest,
	linux-kernel, Shuah Khan

On 2/18/22 5:09 AM, Janis Schoetterl-Glausch wrote:
> On 2/17/22 18:36, Shuah Khan wrote:
>> On 2/17/22 7:53 AM, Janis Schoetterl-Glausch wrote:
>>> Introduce macro for performing MEM_OP ioctls in a concise way.
>>
>> How does this help? What is the value in re-writing existing
>> code and turning it into a macro?
> 
> I want invocations of the ioctl to be independent of each other, so the reader does not
> have to keep track of the state of the struct kvm_s390_mem_op.
> 
> So you have to specify all arguments manually like so, which is rather noisy and makes it
> hard to see what the relevant parameter is:
> 
> ksmo.gaddr = guest_mem1;
> ksmo.flags = 0;
> ksmo.size = maxsize;
> ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
> ksmo.buf = (uintptr_t)mem1;
> ksmo.ar = 17;
> rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
> 
> Or you introduce an abstraction.
> Previously I used lots of functions with repeated code which got chaotic.
> I decided on the macro because it's more flexible, e.g. you don't have to pass default args.
> For example, there is only one test that passes the access register arg, so you would want
> to default it to 0 for all other tests.
> For the access key argument you need to pass both a flag and the key itself, so you'd probably
> get rid of this redundancy also.
> There also might be future extensions of the ioctl that work the same way
> (not 100% but not purely theoretical either).
> 
> With the macro all that is orthogonal: you just pass the arguments you need, or you don't.
> With functions you'd maybe add a memop_key() variant and a _ar() variant and a _key_ar()
> variant if you need it (currently not necessary), doubling the number of functions with
> each additional argument. Another example is GADDR_V and GADDR: the first takes care of
> translating the address to a physical one, but sometimes you need to pass it untranslated,
> and we need to combine that with passing a key or not.
> 
> A big improvement was making the target of the ioctl (vm/vcpu) and the operation into
> arguments instead of baking them into the function. Since they're mandatory arguments this is
> independent of the macro vs functions question.
> 
> In the end there are multiple independent but interacting improvements and it is kinda
> hard to make the call on how far to go along one dimension, e.g. I was unsure whether I
> wanted to introduce the DEFAULT_READ macro, but decided in favor of it, since, as a reviewer,
> you can see that it executes the same code with different arguments, instead of trying
> to identify the difference between 5 copy-pasted and modified lines of code. On the other
> hand you have the cost of introducing an indirection.
>>
>>

Sounds good. I am not fan of macros, however, in this case macro
helps. Please split the patches so that restructuring work is
done first and then the new code - as per my suggestion on the
second patch.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH] KVM: s390: Clarify key argument for MEM_OP in api docs
  2022-02-11 18:22 ` [PATCH v4 09/10] KVM: s390: Update api documentation for memop ioctl Janis Schoetterl-Glausch
  2022-02-14 17:52   ` Claudio Imbrenda
@ 2022-02-21 14:36   ` Janis Schoetterl-Glausch
  2022-02-21 16:06     ` Christian Borntraeger
  1 sibling, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-21 14:36 UTC (permalink / raw)
  To: Paolo Bonzini, Jonathan Corbet, Christian Borntraeger
  Cc: Janis Schoetterl-Glausch, Janosch Frank, Claudio Imbrenda, kvm,
	linux-doc, linux-kernel

Clarify that the key argument represents the access key, not the whole
storage key.
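
For background (an architecture detail, not something this patch
introduces): the storage key of a page holds the 4-bit access control
value together with the fetch protection, reference and change bits,
while the ioctl's "key" field carries only the 4-bit access key. A
hypothetical helper illustrating the relation to the storage key values
the selftests program via SSKE:

	/* Hypothetical, for illustration only: the access key is the
	 * high nibble of the storage key byte as used with SSKE.
	 */
	static inline uint8_t access_key_of(uint8_t storage_key)
	{
		return storage_key >> 4;	/* e.g. 0x98 -> access key 9 */
	}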

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 Documentation/virt/kvm/api.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 48f23bb80d7f..622667cc87ef 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -3763,7 +3763,7 @@ KVM_S390_MEMOP_F_INJECT_EXCEPTION is set.
 
 If the KVM_S390_MEMOP_F_SKEY_PROTECTION flag is set, storage key
 protection is also in effect and may cause exceptions if accesses are
-prohibited given the access key passed in "key".
+prohibited given the access key designated by "key"; the valid range is 0..15.
 KVM_S390_MEMOP_F_SKEY_PROTECTION is available if KVM_CAP_S390_MEM_OP_EXTENSION
 is > 0.
 

base-commit: af33593d63a403287b8a6edd217e854a3571938b
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH] KVM: s390: Clarify key argument for MEM_OP in api docs
  2022-02-21 14:36   ` [PATCH] KVM: s390: Clarify key argument for MEM_OP in api docs Janis Schoetterl-Glausch
@ 2022-02-21 16:06     ` Christian Borntraeger
  0 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2022-02-21 16:06 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Paolo Bonzini, Jonathan Corbet
  Cc: Janosch Frank, Claudio Imbrenda, kvm, linux-doc, linux-kernel,
	Viktor Mihajlovski



Am 21.02.22 um 15:36 schrieb Janis Schoetterl-Glausch:
> Clarify that the key argument represents the access key, not the whole
> storage key.
> 
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
> ---
>   Documentation/virt/kvm/api.rst | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 48f23bb80d7f..622667cc87ef 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -3763,7 +3763,7 @@ KVM_S390_MEMOP_F_INJECT_EXCEPTION is set.
>   
>   If the KVM_S390_MEMOP_F_SKEY_PROTECTION flag is set, storage key
>   protection is also in effect and may cause exceptions if accesses are
> -prohibited given the access key passed in "key".
> +prohibited given the access key designated by "key"; the valid range is 0..15.
>   KVM_S390_MEMOP_F_SKEY_PROTECTION is available if KVM_CAP_S390_MEM_OP_EXTENSION
>   is > 0.
>   
> 
> base-commit: af33593d63a403287b8a6edd217e854a3571938b

I have already queued the patches. Will add this on top as a fixup patch.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH] KVM: s390: Add missing vm MEM_OP size check
  2022-02-11 18:22 ` [PATCH v4 06/10] KVM: s390: Add vm IOCTL for key checked guest absolute memory access Janis Schoetterl-Glausch
  2022-02-14 17:58   ` Claudio Imbrenda
@ 2022-02-21 16:32   ` Janis Schoetterl-Glausch
  2022-02-22  8:04     ` Christian Borntraeger
  1 sibling, 1 reply; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-21 16:32 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Janis Schoetterl-Glausch
  Cc: David Hildenbrand, Sven Schnelle, kvm, linux-s390, linux-kernel

Check that size is not zero, preventing the following warning:

WARNING: CPU: 0 PID: 9692 at mm/vmalloc.c:3059 __vmalloc_node_range+0x528/0x648
Modules linked in:
CPU: 0 PID: 9692 Comm: memop Not tainted 5.17.0-rc3-e4+ #80
Hardware name: IBM 8561 T01 701 (LPAR)
Krnl PSW : 0704c00180000000 0000000082dc584c (__vmalloc_node_range+0x52c/0x648)
           R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000083 ffffffffffffffff 0000000000000000 0000000000000001
           0000038000000000 000003ff80000000 0000000000000cc0 000000008ebb8000
           0000000087a8a700 000000004040aeb1 000003ffd9f7dec8 000000008ebb8000
           000000009d9b8000 000000000102a1b4 00000380035afb68 00000380035afaa8
Krnl Code: 0000000082dc583e: d028a7f4ff80        trtr    2036(41,%r10),3968(%r15)
           0000000082dc5844: af000000            mc      0,0
          #0000000082dc5848: af000000            mc      0,0
          >0000000082dc584c: a7d90000            lghi    %r13,0
           0000000082dc5850: b904002d            lgr     %r2,%r13
           0000000082dc5854: eb6ff1080004        lmg     %r6,%r15,264(%r15)
           0000000082dc585a: 07fe                bcr     15,%r14
           0000000082dc585c: 47000700            bc      0,1792
Call Trace:
 [<0000000082dc584c>] __vmalloc_node_range+0x52c/0x648
 [<0000000082dc5b62>] vmalloc+0x5a/0x68
 [<000003ff8067f4ca>] kvm_arch_vm_ioctl+0x2da/0x2a30 [kvm]
 [<000003ff806705bc>] kvm_vm_ioctl+0x4ec/0x978 [kvm]
 [<0000000082e562fe>] __s390x_sys_ioctl+0xbe/0x100
 [<000000008360a9bc>] __do_syscall+0x1d4/0x200
 [<0000000083618bd2>] system_call+0x82/0xb0
Last Breaking-Event-Address:
 [<0000000082dc5348>] __vmalloc_node_range+0x28/0x648

Other than the warning, there is no ill effect from the missing check;
the condition is detected by subsequent code and causes a return
with ENOMEM.
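
A sketch of a user space reproducer (hypothetical; any zero-size vm
memop triggers it, and buf/vm_fd stand in for a real buffer and VM fd):

	struct kvm_s390_mem_op ksmo = {
		.gaddr = 0,
		.size = 0,	/* not rejected before the vmalloc() */
		.op = KVM_S390_MEMOP_ABSOLUTE_READ,
		.buf = (uintptr_t)buf,	/* any valid buffer */
	};
	ioctl(vm_fd, KVM_S390_MEM_OP, &ksmo);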

Fixes: ef11c9463ae0 ("KVM: s390: Add vm IOCTL for key checked guest absolute memory access")
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c2c26c2aad64..e056ad86ccd2 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2374,7 +2374,7 @@ static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
 
 	supported_flags = KVM_S390_MEMOP_F_SKEY_PROTECTION
 			  | KVM_S390_MEMOP_F_CHECK_ONLY;
-	if (mop->flags & ~supported_flags)
+	if (mop->flags & ~supported_flags || !mop->size)
 		return -EINVAL;
 	if (mop->size > MEM_OP_MAX_SIZE)
 		return -E2BIG;
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH] KVM: s390: Add missing vm MEM_OP size check
  2022-02-21 16:32   ` [PATCH] KVM: s390: Add missing vm MEM_OP size check Janis Schoetterl-Glausch
@ 2022-02-22  8:04     ` Christian Borntraeger
  0 siblings, 0 replies; 39+ messages in thread
From: Christian Borntraeger @ 2022-02-22  8:04 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Janosch Frank, Claudio Imbrenda,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev
  Cc: David Hildenbrand, Sven Schnelle, kvm, linux-s390, linux-kernel



Am 21.02.22 um 17:32 schrieb Janis Schoetterl-Glausch:
> Check that size is not zero, preventing the following warning:
> 
> WARNING: CPU: 0 PID: 9692 at mm/vmalloc.c:3059 __vmalloc_node_range+0x528/0x648
> Modules linked in:
> CPU: 0 PID: 9692 Comm: memop Not tainted 5.17.0-rc3-e4+ #80
> Hardware name: IBM 8561 T01 701 (LPAR)
> Krnl PSW : 0704c00180000000 0000000082dc584c (__vmalloc_node_range+0x52c/0x648)
>             R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 RI:0 EA:3
> Krnl GPRS: 0000000000000083 ffffffffffffffff 0000000000000000 0000000000000001
>             0000038000000000 000003ff80000000 0000000000000cc0 000000008ebb8000
>             0000000087a8a700 000000004040aeb1 000003ffd9f7dec8 000000008ebb8000
>             000000009d9b8000 000000000102a1b4 00000380035afb68 00000380035afaa8
> Krnl Code: 0000000082dc583e: d028a7f4ff80        trtr    2036(41,%r10),3968(%r15)
>             0000000082dc5844: af000000            mc      0,0
>            #0000000082dc5848: af000000            mc      0,0
>            >0000000082dc584c: a7d90000            lghi    %r13,0
>             0000000082dc5850: b904002d            lgr     %r2,%r13
>             0000000082dc5854: eb6ff1080004        lmg     %r6,%r15,264(%r15)
>             0000000082dc585a: 07fe                bcr     15,%r14
>             0000000082dc585c: 47000700            bc      0,1792
> Call Trace:
>   [<0000000082dc584c>] __vmalloc_node_range+0x52c/0x648
>   [<0000000082dc5b62>] vmalloc+0x5a/0x68
>   [<000003ff8067f4ca>] kvm_arch_vm_ioctl+0x2da/0x2a30 [kvm]
>   [<000003ff806705bc>] kvm_vm_ioctl+0x4ec/0x978 [kvm]
>   [<0000000082e562fe>] __s390x_sys_ioctl+0xbe/0x100
>   [<000000008360a9bc>] __do_syscall+0x1d4/0x200
>   [<0000000083618bd2>] system_call+0x82/0xb0
> Last Breaking-Event-Address:
>   [<0000000082dc5348>] __vmalloc_node_range+0x28/0x648
> 
> Other than the warning, there is no ill effect from the missing check;
> the condition is detected by subsequent code and causes a return
> with ENOMEM.
> 
> Fixes: ef11c9463ae0 ("KVM: s390: Add vm IOCTL for key checked guest absolute memory access")
> Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>

applied to kvms390/next, Thanks.
> ---
>   arch/s390/kvm/kvm-s390.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c2c26c2aad64..e056ad86ccd2 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2374,7 +2374,7 @@ static int kvm_s390_vm_mem_op(struct kvm *kvm, struct kvm_s390_mem_op *mop)
>   
>   	supported_flags = KVM_S390_MEMOP_F_SKEY_PROTECTION
>   			  | KVM_S390_MEMOP_F_CHECK_ONLY;
> -	if (mop->flags & ~supported_flags)
> +	if (mop->flags & ~supported_flags || !mop->size)
>   		return -EINVAL;
>   	if (mop->size > MEM_OP_MAX_SIZE)
>   		return -E2BIG;

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH v2 0/5] memop selftest for storage key checking
  2022-02-11 18:22 ` [PATCH v4 10/10] KVM: s390: selftests: Test memops with storage keys Janis Schoetterl-Glausch
  2022-02-17 14:53   ` Janis Schoetterl-Glausch
@ 2022-02-25 15:53   ` Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 1/5] KVM: s390: selftests: Split memop tests Janis Schoetterl-Glausch
                       ` (5 more replies)
  1 sibling, 6 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Refactor memop selftest and add tests.
Add storage key tests, both for success as well as failure cases.
Similarly test both vcpu and vm ioctls.

v1 -> v2
 * restructure commits
 * get rid of test_* wrapper functions that hid vm.vm
 * minor changes

v0 -> v2
 * complete rewrite

v1: https://lore.kernel.org/kvm/20220217145336.1794778-1-scgl@linux.ibm.com/
v0: https://lore.kernel.org/kvm/20220211182215.2730017-11-scgl@linux.ibm.com/

Janis Schoetterl-Glausch (5):
  KVM: s390: selftests: Split memop tests
  KVM: s390: selftests: Add macro as abstraction for MEM_OP
  KVM: s390: selftests: Add named stages for memop test
  KVM: s390: selftests: Add more copy memop tests
  KVM: s390: selftests: Add error memop tests

 tools/testing/selftests/kvm/s390x/memop.c | 734 ++++++++++++++++++----
 1 file changed, 616 insertions(+), 118 deletions(-)


base-commit: ee6a569d3bf64c9676eee3eecb861fb01cc11311
-- 
2.32.0


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH v2 1/5] KVM: s390: selftests: Split memop tests
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
@ 2022-02-25 15:53     ` Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 2/5] KVM: s390: selftests: Add macro as abstraction for MEM_OP Janis Schoetterl-Glausch
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Split success case/copy test from error test, making them independent.
This means they do not share state and are easier to understand.
ones. Also, new tests can be added in the same manner without affecting
the old ones. In order to make that simpler, introduce functionality for
the
ones. In order to make that simpler, introduce functionality for the
setup of commonly used variables.

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 137 +++++++++++++---------
 1 file changed, 82 insertions(+), 55 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index d19c3ffdea3f..b9b673acb766 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -18,71 +18,82 @@
 static uint8_t mem1[65536];
 static uint8_t mem2[65536];
 
-static void guest_code(void)
+struct test_default {
+	struct kvm_vm *kvm_vm;
+	struct kvm_run *run;
+	int size;
+};
+
+static struct test_default test_default_init(void *guest_code)
 {
-	int i;
+	struct test_default t;
 
-	for (;;) {
-		for (i = 0; i < sizeof(mem2); i++)
-			mem2[i] = mem1[i];
-		GUEST_SYNC(0);
-	}
+	t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1));
+	t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code);
+	t.run = vcpu_state(t.kvm_vm, VCPU_ID);
+	return t;
 }
 
-int main(int argc, char *argv[])
+static void guest_copy(void)
 {
-	struct kvm_vm *vm;
-	struct kvm_run *run;
-	struct kvm_s390_mem_op ksmo;
-	int rv, i, maxsize;
-
-	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
-
-	maxsize = kvm_check_cap(KVM_CAP_S390_MEM_OP);
-	if (!maxsize) {
-		print_skip("CAP_S390_MEM_OP not supported");
-		exit(KSFT_SKIP);
-	}
-	if (maxsize > sizeof(mem1))
-		maxsize = sizeof(mem1);
+	memcpy(&mem2, &mem1, sizeof(mem2));
+	GUEST_SYNC(0);
+}
 
-	/* Create VM */
-	vm = vm_create_default(VCPU_ID, 0, guest_code);
-	run = vcpu_state(vm, VCPU_ID);
+static void test_copy(void)
+{
+	struct test_default t = test_default_init(guest_copy);
+	struct kvm_s390_mem_op ksmo;
+	int i;
 
 	for (i = 0; i < sizeof(mem1); i++)
 		mem1[i] = i * i + i;
 
 	/* Set the first array */
-	ksmo.gaddr = addr_gva2gpa(vm, (uintptr_t)mem1);
+	ksmo.gaddr = addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1);
 	ksmo.flags = 0;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 
 	/* Let the guest code copy the first array to the second */
-	vcpu_run(vm, VCPU_ID);
-	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+	vcpu_run(t.kvm_vm, VCPU_ID);
+	TEST_ASSERT(t.run->exit_reason == KVM_EXIT_S390_SIEIC,
 		    "Unexpected exit reason: %u (%s)\n",
-		    run->exit_reason,
-		    exit_reason_str(run->exit_reason));
+		    t.run->exit_reason,
+		    exit_reason_str(t.run->exit_reason));
 
 	memset(mem2, 0xaa, sizeof(mem2));
 
 	/* Get the second array */
 	ksmo.gaddr = (uintptr_t)mem2;
 	ksmo.flags = 0;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
 	ksmo.buf = (uintptr_t)mem2;
 	ksmo.ar = 0;
-	vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 
-	TEST_ASSERT(!memcmp(mem1, mem2, maxsize),
+	TEST_ASSERT(!memcmp(mem1, mem2, t.size),
 		    "Memory contents do not match!");
 
+	kvm_vm_free(t.kvm_vm);
+}
+
+static void guest_idle(void)
+{
+	for (;;)
+		GUEST_SYNC(0);
+}
+
+static void test_errors(void)
+{
+	struct test_default t = test_default_init(guest_idle);
+	struct kvm_s390_mem_op ksmo;
+	int rv;
+
 	/* Check error conditions - first bad size: */
 	ksmo.gaddr = (uintptr_t)mem1;
 	ksmo.flags = 0;
@@ -90,7 +101,7 @@ int main(int argc, char *argv[])
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
@@ -100,65 +111,65 @@ int main(int argc, char *argv[])
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
 	ksmo.gaddr = (uintptr_t)mem1;
 	ksmo.flags = -1;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
 	/* Bad operation: */
 	ksmo.gaddr = (uintptr_t)mem1;
 	ksmo.flags = 0;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = -1;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
 
 	/* Bad guest address: */
 	ksmo.gaddr = ~0xfffUL;
 	ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
 	ksmo.gaddr = (uintptr_t)mem1;
 	ksmo.flags = 0;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = 0;
 	ksmo.ar = 0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
 	/* Bad access register: */
-	run->psw_mask &= ~(3UL << (63 - 17));
-	run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
-	vcpu_run(vm, VCPU_ID);              /* To sync new state to SIE block */
+	t.run->psw_mask &= ~(3UL << (63 - 17));
+	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
+	vcpu_run(t.kvm_vm, VCPU_ID);              /* To sync new state to SIE block */
 	ksmo.gaddr = (uintptr_t)mem1;
 	ksmo.flags = 0;
-	ksmo.size = maxsize;
+	ksmo.size = t.size;
 	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.ar = 17;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15");
-	run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
-	vcpu_run(vm, VCPU_ID);                  /* Run to sync new state */
+	t.run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
+	vcpu_run(t.kvm_vm, VCPU_ID);                  /* Run to sync new state */
 
 	/* Check that the SIDA calls are rejected for non-protected guests */
 	ksmo.gaddr = 0;
@@ -167,15 +178,31 @@ int main(int argc, char *argv[])
 	ksmo.op = KVM_S390_MEMOP_SIDA_READ;
 	ksmo.buf = (uintptr_t)mem1;
 	ksmo.sida_offset = 0x1c0;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_READ in non-protected mode");
 	ksmo.op = KVM_S390_MEMOP_SIDA_WRITE;
-	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_WRITE in non-protected mode");
 
-	kvm_vm_free(vm);
+	kvm_vm_free(t.kvm_vm);
+}
+
+int main(int argc, char *argv[])
+{
+	int memop_cap;
+
+	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
+
+	memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP);
+	if (!memop_cap) {
+		print_skip("CAP_S390_MEM_OP not supported");
+		exit(KSFT_SKIP);
+	}
+
+	test_copy();
+	test_errors();
 
 	return 0;
 }
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v2 2/5] KVM: s390: selftests: Add macro as abstraction for MEM_OP
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 1/5] KVM: s390: selftests: Split memop tests Janis Schoetterl-Glausch
@ 2022-02-25 15:53     ` Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 3/5] KVM: s390: selftests: Add named stages for memop test Janis Schoetterl-Glausch
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

In order to achieve good test coverage, we need to be able to invoke the
MEM_OP ioctl with all possible parametrizations.
However, for a given test we want to be concise and not specify a long
list of default values for parameters not relevant to the test, so the
reader's attention is not needlessly diverted.
Add a macro that enables this and convert the existing tests to use it.
The macro emulates named arguments and hides some of the ioctl's
redundancy, e.g. it sets the key flag if an access key is specified.
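
The pattern behind the macro is a compound literal whose trailing
designated initializers are forwarded through __VA_ARGS__, which is what
makes the arguments both named and optional. A minimal standalone sketch
of the same trick (the names here are illustrative, not from the patch):

	#include <stdio.h>

	struct args { int x; int y; int flag; };

	static int do_call(struct args a)
	{
		return a.x + a.y + (a.flag ? 100 : 0);
	}

	/* trailing designated initializers act like named, optional arguments */
	#define CALL(x_, ...) do_call((struct args){ .x = (x_), __VA_ARGS__ })

	int main(void)
	{
		printf("%d\n", CALL(1, .y = 2));            /* prints 3 */
		printf("%d\n", CALL(1, .y = 2, .flag = 1)); /* prints 103 */
		return 0;
	}

A converted call site then reads, e.g.,
MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), KEY(9)),
where KEY() supplies both the key value and the corresponding flag.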

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 272 ++++++++++++++++------
 1 file changed, 197 insertions(+), 75 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index b9b673acb766..e2ad3d70bae4 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -13,6 +13,188 @@
 #include "test_util.h"
 #include "kvm_util.h"
 
+enum mop_target {
+	LOGICAL,
+	SIDA,
+	ABSOLUTE,
+	INVALID,
+};
+
+enum mop_access_mode {
+	READ,
+	WRITE,
+};
+
+struct mop_desc {
+	uintptr_t gaddr;
+	uintptr_t gaddr_v;
+	uint64_t set_flags;
+	unsigned int f_check : 1;
+	unsigned int f_inject : 1;
+	unsigned int f_key : 1;
+	unsigned int _gaddr_v : 1;
+	unsigned int _set_flags : 1;
+	unsigned int _sida_offset : 1;
+	unsigned int _ar : 1;
+	uint32_t size;
+	enum mop_target target;
+	enum mop_access_mode mode;
+	void *buf;
+	uint32_t sida_offset;
+	uint8_t ar;
+	uint8_t key;
+};
+
+static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc desc)
+{
+	struct kvm_s390_mem_op ksmo = {
+		.gaddr = (uintptr_t)desc.gaddr,
+		.size = desc.size,
+		.buf = ((uintptr_t)desc.buf),
+		.reserved = "ignored_ignored_ignored_ignored"
+	};
+
+	switch (desc.target) {
+	case LOGICAL:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
+		break;
+	case SIDA:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_SIDA_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_SIDA_WRITE;
+		break;
+	case ABSOLUTE:
+		if (desc.mode == READ)
+			ksmo.op = KVM_S390_MEMOP_ABSOLUTE_READ;
+		if (desc.mode == WRITE)
+			ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
+		break;
+	case INVALID:
+		ksmo.op = -1;
+	}
+	if (desc.f_check)
+		ksmo.flags |= KVM_S390_MEMOP_F_CHECK_ONLY;
+	if (desc.f_inject)
+		ksmo.flags |= KVM_S390_MEMOP_F_INJECT_EXCEPTION;
+	if (desc._set_flags)
+		ksmo.flags = desc.set_flags;
+	if (desc.f_key) {
+		ksmo.flags |= KVM_S390_MEMOP_F_SKEY_PROTECTION;
+		ksmo.key = desc.key;
+	}
+	if (desc._ar)
+		ksmo.ar = desc.ar;
+	else
+		ksmo.ar = 0;
+	if (desc._sida_offset)
+		ksmo.sida_offset = desc.sida_offset;
+
+	return ksmo;
+}
+
+/* dummy vcpu id, signifying that the vm instead of the vcpu ioctl is to be used */
+const uint32_t VM_VCPU_ID = (uint32_t)-1;
+
+struct test_vcpu {
+	struct kvm_vm *vm;
+	uint32_t id;
+};
+
+#define PRINT_MEMOP false
+static void print_memop(uint32_t vcpu_id, const struct kvm_s390_mem_op *ksmo)
+{
+	if (!PRINT_MEMOP)
+		return;
+
+	if (vcpu_id == VM_VCPU_ID)
+		printf("vm memop(");
+	else
+		printf("vcpu memop(");
+	switch (ksmo->op) {
+	case KVM_S390_MEMOP_LOGICAL_READ:
+		printf("LOGICAL, READ, ");
+		break;
+	case KVM_S390_MEMOP_LOGICAL_WRITE:
+		printf("LOGICAL, WRITE, ");
+		break;
+	case KVM_S390_MEMOP_SIDA_READ:
+		printf("SIDA, READ, ");
+		break;
+	case KVM_S390_MEMOP_SIDA_WRITE:
+		printf("SIDA, WRITE, ");
+		break;
+	case KVM_S390_MEMOP_ABSOLUTE_READ:
+		printf("ABSOLUTE, READ, ");
+		break;
+	case KVM_S390_MEMOP_ABSOLUTE_WRITE:
+		printf("ABSOLUTE, WRITE, ");
+		break;
+	}
+	printf("gaddr=%llu, size=%u, buf=%llu, ar=%u, key=%u",
+	       ksmo->gaddr, ksmo->size, ksmo->buf, ksmo->ar, ksmo->key);
+	if (ksmo->flags & KVM_S390_MEMOP_F_CHECK_ONLY)
+		printf(", CHECK_ONLY");
+	if (ksmo->flags & KVM_S390_MEMOP_F_INJECT_EXCEPTION)
+		printf(", INJECT_EXCEPTION");
+	if (ksmo->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION)
+		printf(", SKEY_PROTECTION");
+	puts(")");
+}
+
+static void memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
+{
+	if (vcpu.id == VM_VCPU_ID)
+		vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo);
+	else
+		vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo);
+}
+
+static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
+{
+	if (vcpu.id == VM_VCPU_ID)
+		return _vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo);
+	else
+		return _vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo);
+}
+
+#define MEMOP(err, vcpu_p, mop_target_p, access_mode_p, buf_p, size_p, ...)	\
+({										\
+	struct test_vcpu __vcpu = (vcpu_p);					\
+	struct mop_desc __desc = {						\
+		.target = (mop_target_p),					\
+		.mode = (access_mode_p),					\
+		.buf = (buf_p),							\
+		.size = (size_p),						\
+		__VA_ARGS__							\
+	};									\
+	struct kvm_s390_mem_op __ksmo;						\
+										\
+	if (__desc._gaddr_v) {							\
+		if (__desc.target == ABSOLUTE)					\
+			__desc.gaddr = addr_gva2gpa(__vcpu.vm, __desc.gaddr_v);	\
+		else								\
+			__desc.gaddr = __desc.gaddr_v;				\
+	}									\
+	__ksmo = ksmo_from_desc(__desc);					\
+	print_memop(__vcpu.id, &__ksmo);					\
+	err##memop_ioctl(__vcpu, &__ksmo);					\
+})
+
+#define MOP(...) MEMOP(, __VA_ARGS__)
+#define ERR_MOP(...) MEMOP(err_, __VA_ARGS__)
+
+#define GADDR(a) .gaddr = ((uintptr_t)a)
+#define GADDR_V(v) ._gaddr_v = 1, .gaddr_v = ((uintptr_t)v)
+#define CHECK_ONLY .f_check = 1
+#define SET_FLAGS(f) ._set_flags = 1, .set_flags = (f)
+#define SIDA_OFFSET(o) ._sida_offset = 1, .sida_offset = (o)
+#define AR(a) ._ar = 1, .ar = (a)
+#define KEY(a) .f_key = 1, .key = (a)
+
 #define VCPU_ID 1
 
 static uint8_t mem1[65536];
@@ -20,6 +202,7 @@ static uint8_t mem2[65536];
 
 struct test_default {
 	struct kvm_vm *kvm_vm;
+	struct test_vcpu vcpu;
 	struct kvm_run *run;
 	int size;
 };
@@ -30,6 +213,7 @@ static struct test_default test_default_init(void *guest_code)
 
 	t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1));
 	t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code);
+	t.vcpu = (struct test_vcpu) { t.kvm_vm, VCPU_ID };
 	t.run = vcpu_state(t.kvm_vm, VCPU_ID);
 	return t;
 }
@@ -43,20 +227,14 @@ static void guest_copy(void)
 static void test_copy(void)
 {
 	struct test_default t = test_default_init(guest_copy);
-	struct kvm_s390_mem_op ksmo;
 	int i;
 
 	for (i = 0; i < sizeof(mem1); i++)
 		mem1[i] = i * i + i;
 
 	/* Set the first array */
-	ksmo.gaddr = addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1);
-	ksmo.flags = 0;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size,
+	    GADDR(addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1)));
 
 	/* Let the guest code copy the first array to the second */
 	vcpu_run(t.kvm_vm, VCPU_ID);
@@ -68,13 +246,7 @@ static void test_copy(void)
 	memset(mem2, 0xaa, sizeof(mem2));
 
 	/* Get the second array */
-	ksmo.gaddr = (uintptr_t)mem2;
-	ksmo.flags = 0;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_READ;
-	ksmo.buf = (uintptr_t)mem2;
-	ksmo.ar = 0;
-	vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	MOP(t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2));
 
 	TEST_ASSERT(!memcmp(mem1, mem2, t.size),
 		    "Memory contents do not match!");
@@ -91,68 +263,31 @@ static void guest_idle(void)
 static void test_errors(void)
 {
 	struct test_default t = test_default_init(guest_idle);
-	struct kvm_s390_mem_op ksmo;
 	int rv;
 
-	/* Check error conditions - first bad size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = -1;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	/* Bad size: */
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = 0;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = -1;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
 	/* Bad operation: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = t.size;
-	ksmo.op = -1;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
 
 	/* Bad guest address: */
-	ksmo.gaddr = ~0xfffUL;
-	ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY);
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = 0;
-	ksmo.ar = 0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
@@ -160,29 +295,16 @@ static void test_errors(void)
 	t.run->psw_mask &= ~(3UL << (63 - 17));
 	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
 	vcpu_run(t.kvm_vm, VCPU_ID);              /* To sync new state to SIE block */
-	ksmo.gaddr = (uintptr_t)mem1;
-	ksmo.flags = 0;
-	ksmo.size = t.size;
-	ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.ar = 17;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), AR(17));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15");
 	t.run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
 	vcpu_run(t.kvm_vm, VCPU_ID);                  /* Run to sync new state */
 
 	/* Check that the SIDA calls are rejected for non-protected guests */
-	ksmo.gaddr = 0;
-	ksmo.flags = 0;
-	ksmo.size = 8;
-	ksmo.op = KVM_S390_MEMOP_SIDA_READ;
-	ksmo.buf = (uintptr_t)mem1;
-	ksmo.sida_offset = 0x1c0;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_READ in non-protected mode");
-	ksmo.op = KVM_S390_MEMOP_SIDA_WRITE;
-	rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo);
+	rv = ERR_MOP(t.vcpu, SIDA, WRITE, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));
 	TEST_ASSERT(rv == -1 && errno == EINVAL,
 		    "ioctl does not reject SIDA_WRITE in non-protected mode");
 
-- 
2.32.0



* [PATCH v2 3/5] KVM: s390: selftests: Add named stages for memop test
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 1/5] KVM: s390: selftests: Split memop tests Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 2/5] KVM: s390: selftests: Add macro as abstraction for MEM_OP Janis Schoetterl-Glausch
@ 2022-02-25 15:53     ` Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 4/5] KVM: s390: selftests: Add more copy memop tests Janis Schoetterl-Glausch
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

The stages synchronize guest and host execution.
This helps the reader and constrains the execution of the test -- if the
observed stage differs from the expected one, the test fails.
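
Schematically, guest and host then proceed in lockstep (macro
definitions in the diff below; left side runs in the guest, right side
in the host test code):

	GUEST_SYNC(STAGE_INITED);   <-->  HOST_SYNC(t.vcpu, STAGE_INITED);
	memcpy(&mem2, &mem1, ...);
	GUEST_SYNC(STAGE_COPIED);   <-->  HOST_SYNC(t.vcpu, STAGE_COPIED);

HOST_SYNC runs the vcpu up to its next ucall and asserts that the stage
the guest reports is the expected one, so a guest reaching the wrong
stage fails the ASSERT_EQ instead of silently desynchronizing.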

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 44 +++++++++++++++++------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index e2ad3d70bae4..d01e48c7c5e8 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -218,10 +218,32 @@ static struct test_default test_default_init(void *guest_code)
 	return t;
 }
 
+enum stage {
+	/* Synced state set by host, e.g. DAT */
+	STAGE_INITED,
+	/* Guest did nothing */
+	STAGE_IDLED,
+	/* Guest copied memory (locations up to test case) */
+	STAGE_COPIED,
+};
+
+#define HOST_SYNC(vcpu_p, stage)					\
+({									\
+	struct test_vcpu __vcpu = (vcpu_p);				\
+	struct ucall uc;						\
+	int __stage = (stage);						\
+									\
+	vcpu_run(__vcpu.vm, __vcpu.id);					\
+	get_ucall(__vcpu.vm, __vcpu.id, &uc);				\
+	ASSERT_EQ(uc.cmd, UCALL_SYNC);					\
+	ASSERT_EQ(uc.args[1], __stage);					\
+})									\
+
 static void guest_copy(void)
 {
+	GUEST_SYNC(STAGE_INITED);
 	memcpy(&mem2, &mem1, sizeof(mem2));
-	GUEST_SYNC(0);
+	GUEST_SYNC(STAGE_COPIED);
 }
 
 static void test_copy(void)
@@ -232,16 +254,13 @@ static void test_copy(void)
 	for (i = 0; i < sizeof(mem1); i++)
 		mem1[i] = i * i + i;
 
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
 	/* Set the first array */
-	MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size,
-	    GADDR(addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1)));
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1));
 
 	/* Let the guest code copy the first array to the second */
-	vcpu_run(t.kvm_vm, VCPU_ID);
-	TEST_ASSERT(t.run->exit_reason == KVM_EXIT_S390_SIEIC,
-		    "Unexpected exit reason: %u (%s)\n",
-		    t.run->exit_reason,
-		    exit_reason_str(t.run->exit_reason));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
 
 	memset(mem2, 0xaa, sizeof(mem2));
 
@@ -256,8 +275,9 @@ static void test_copy(void)
 
 static void guest_idle(void)
 {
+	GUEST_SYNC(STAGE_INITED); /* for consistency's sake */
 	for (;;)
-		GUEST_SYNC(0);
+		GUEST_SYNC(STAGE_IDLED);
 }
 
 static void test_errors(void)
@@ -265,6 +285,8 @@ static void test_errors(void)
 	struct test_default t = test_default_init(guest_idle);
 	int rv;
 
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
 	/* Bad size: */
 	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
@@ -294,11 +316,11 @@ static void test_errors(void)
 	/* Bad access register: */
 	t.run->psw_mask &= ~(3UL << (63 - 17));
 	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
-	vcpu_run(t.kvm_vm, VCPU_ID);              /* To sync new state to SIE block */
+	HOST_SYNC(t.vcpu, STAGE_IDLED); /* To sync new state to SIE block */
 	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), AR(17));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15");
 	t.run->psw_mask &= ~(3UL << (63 - 17));   /* Disable AR mode */
-	vcpu_run(t.kvm_vm, VCPU_ID);                  /* Run to sync new state */
+	HOST_SYNC(t.vcpu, STAGE_IDLED); /* Run to sync new state */
 
 	/* Check that the SIDA calls are rejected for non-protected guests */
 	rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));
-- 
2.32.0



* [PATCH v2 4/5] KVM: s390: selftests: Add more copy memop tests
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
                       ` (2 preceding siblings ...)
  2022-02-25 15:53     ` [PATCH v2 3/5] KVM: s390: selftests: Add named stages for memop test Janis Schoetterl-Glausch
@ 2022-02-25 15:53     ` Janis Schoetterl-Glausch
  2022-02-25 15:53     ` [PATCH v2 5/5] KVM: s390: selftests: Add error " Janis Schoetterl-Glausch
  2022-03-08 10:16     ` [PATCH v2 0/5] memop selftest for storage key checking Christian Borntraeger
  5 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Do not just test the actual copy, but also check that success is
indicated when the check-only flag is used.
Add copy tests with storage key checking enabled, including tests for
storage and fetch protection override.
These tests cover logical vcpu ioctls as well as absolute vm ioctls.
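
For orientation: in the storage key byte set via SSKE below, the
access-control value occupies bits 0-3 and the fetch-protection bit is
bit 4. A hypothetical helper (not part of the patch) spells out the
values the tests use:

	/* sketch: compose a storage key byte from ACC value and fetch-protection bit */
	#define SKEY(acc, fp)	(((acc) << 4) | ((fp) << 3))
	/*
	 * SKEY(9, 0) == 0x90: access-control key 9, no fetch protection
	 * SKEY(9, 1) == 0x98: access-control key 9, fetch protection
	 * SKEY(1, 1) == 0x18: access-control key 1, fetch protection
	 */

An access with KEY(9) therefore matches the 0x90/0x98 pages, while a
mismatching KEY(2) only succeeds where an override applies or, for
fetches, where fetch protection is off.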

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 243 ++++++++++++++++++++--
 1 file changed, 230 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index d01e48c7c5e8..088d1cc61709 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -195,13 +195,21 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo)
 #define AR(a) ._ar = 1, .ar = (a)
 #define KEY(a) .f_key = 1, .key = (a)
 
+#define CHECK_N_DO(f, ...) ({ f(__VA_ARGS__, CHECK_ONLY); f(__VA_ARGS__); })
+
 #define VCPU_ID 1
+#define PAGE_SHIFT 12
+#define PAGE_SIZE (1ULL << PAGE_SHIFT)
+#define PAGE_MASK (~(PAGE_SIZE - 1))
+#define CR0_FETCH_PROTECTION_OVERRIDE	(1UL << (63 - 38))
+#define CR0_STORAGE_PROTECTION_OVERRIDE	(1UL << (63 - 39))
 
 static uint8_t mem1[65536];
 static uint8_t mem2[65536];
 
 struct test_default {
 	struct kvm_vm *kvm_vm;
+	struct test_vcpu vm;
 	struct test_vcpu vcpu;
 	struct kvm_run *run;
 	int size;
@@ -213,6 +221,7 @@ static struct test_default test_default_init(void *guest_code)
 
 	t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1));
 	t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code);
+	t.vm = (struct test_vcpu) { t.kvm_vm, VM_VCPU_ID };
 	t.vcpu = (struct test_vcpu) { t.kvm_vm, VCPU_ID };
 	t.run = vcpu_state(t.kvm_vm, VCPU_ID);
 	return t;
@@ -223,6 +232,8 @@ enum stage {
 	STAGE_INITED,
 	/* Guest did nothing */
 	STAGE_IDLED,
+	/* Guest set storage keys (specifics up to test case) */
+	STAGE_SKEYS_SET,
 	/* Guest copied memory (locations up to test case) */
 	STAGE_COPIED,
 };
@@ -239,6 +250,47 @@ enum stage {
 	ASSERT_EQ(uc.args[1], __stage);					\
 })									\
 
+static void prepare_mem12(void)
+{
+	int i;
+
+	for (i = 0; i < sizeof(mem1); i++)
+		mem1[i] = rand();
+	memset(mem2, 0xaa, sizeof(mem2));
+}
+
+#define ASSERT_MEM_EQ(p1, p2, size) \
+	TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
+
+#define DEFAULT_WRITE_READ(copy_cpu, mop_cpu, mop_target_p, size, ...)		\
+({										\
+	struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu);	\
+	enum mop_target __target = (mop_target_p);				\
+	uint32_t __size = (size);						\
+										\
+	prepare_mem12();							\
+	CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size,		\
+			GADDR_V(mem1), ##__VA_ARGS__);				\
+	HOST_SYNC(__copy_cpu, STAGE_COPIED);					\
+	CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size,		\
+			GADDR_V(mem2), ##__VA_ARGS__);				\
+	ASSERT_MEM_EQ(mem1, mem2, __size);					\
+})
+
+#define DEFAULT_READ(copy_cpu, mop_cpu, mop_target_p, size, ...)		\
+({										\
+	struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu);	\
+	enum mop_target __target = (mop_target_p);				\
+	uint32_t __size = (size);						\
+										\
+	prepare_mem12();							\
+	CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size,		\
+			GADDR_V(mem1));						\
+	HOST_SYNC(__copy_cpu, STAGE_COPIED);					\
+	CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size, ##__VA_ARGS__);\
+	ASSERT_MEM_EQ(mem1, mem2, __size);					\
+})
+
 static void guest_copy(void)
 {
 	GUEST_SYNC(STAGE_INITED);
@@ -249,30 +301,186 @@ static void guest_copy(void)
 static void test_copy(void)
 {
 	struct test_default t = test_default_init(guest_copy);
-	int i;
 
-	for (i = 0; i < sizeof(mem1); i++)
-		mem1[i] = i * i + i;
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size);
+
+	kvm_vm_free(t.kvm_vm);
+}
+
+static void set_storage_key_range(void *addr, size_t len, uint8_t key)
+{
+	uintptr_t _addr, abs, i;
+	int not_mapped = 0;
+
+	_addr = (uintptr_t)addr;
+	for (i = _addr & PAGE_MASK; i < _addr + len; i += PAGE_SIZE) {
+		abs = i;
+		asm volatile (
+			       "lra	%[abs], 0(0,%[abs])\n"
+			"	jz	0f\n"
+			"	llill	%[not_mapped],1\n"
+			"	j	1f\n"
+			"0:	sske	%[key], %[abs]\n"
+			"1:"
+			: [abs] "+&a" (abs), [not_mapped] "+r" (not_mapped)
+			: [key] "r" (key)
+			: "cc"
+		);
+		GUEST_ASSERT_EQ(not_mapped, 0);
+	}
+}
+
+static void guest_copy_key(void)
+{
+	set_storage_key_range(mem1, sizeof(mem1), 0x90);
+	set_storage_key_range(mem2, sizeof(mem2), 0x90);
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		memcpy(&mem2, &mem1, sizeof(mem2));
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key(void)
+{
+	struct test_default t = test_default_init(guest_copy_key);
+
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm, no key */
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size);
+
+	/* vm/vcpu, matching key or key 0 */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(9));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(9));
+	/*
+	 * There used to be different code paths for key handling depending on
+	 * whether the region crossed a page boundary.
+	 * There no longer are, but the more tests the merrier.
+	 */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(9));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(0));
+	DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(9));
+
+	/* vm/vcpu, mismatching keys on read, but no fetch protection */
+	DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(2));
+	DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem1), KEY(2));
+
+	kvm_vm_free(t.kvm_vm);
+}
+
+static void guest_copy_key_fetch_prot(void)
+{
+	/*
+	 * For some reason combining the first sync with override enablement
+	 * results in an exception when calling HOST_SYNC.
+	 */
+	GUEST_SYNC(STAGE_INITED);
+	/* Storage protection override applies to both store and fetch. */
+	set_storage_key_range(mem1, sizeof(mem1), 0x98);
+	set_storage_key_range(mem2, sizeof(mem2), 0x98);
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		memcpy(&mem2, &mem1, sizeof(mem2));
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key_storage_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
 
 	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
 
-	/* Set the first array */
-	MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1));
+	/* vcpu, mismatching keys, storage protection override in effect */
+	DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(2));
 
-	/* Let the guest code copy the first array to the second */
-	HOST_SYNC(t.vcpu, STAGE_COPIED);
+	kvm_vm_free(t.kvm_vm);
+}
 
-	memset(mem2, 0xaa, sizeof(mem2));
+static void test_copy_key_fetch_prot(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
 
-	/* Get the second array */
-	MOP(t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2));
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
 
-	TEST_ASSERT(!memcmp(mem1, mem2, t.size),
-		    "Memory contents do not match!");
+	/* vm/vcpu, matching key, fetch protection in effect */
+	DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(9));
+	DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem2), KEY(9));
 
 	kvm_vm_free(t.kvm_vm);
 }
 
+const uint64_t last_page_addr = -PAGE_SIZE;
+
+static void guest_copy_key_fetch_prot_override(void)
+{
+	int i;
+	char *page_0 = 0;
+
+	GUEST_SYNC(STAGE_INITED);
+	set_storage_key_range(0, PAGE_SIZE, 0x18);
+	set_storage_key_range((void *)last_page_addr, PAGE_SIZE, 0x0);
+	asm volatile ("sske %[key],%[addr]\n" :: [addr] "r"(0), [key] "r"(0x18) : "cc");
+	GUEST_SYNC(STAGE_SKEYS_SET);
+
+	for (;;) {
+		for (i = 0; i < PAGE_SIZE; i++)
+			page_0[i] = mem1[i];
+		GUEST_SYNC(STAGE_COPIED);
+	}
+}
+
+static void test_copy_key_fetch_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
+	guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vcpu, mismatching keys on fetch, fetch protection override applies */
+	prepare_mem12();
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, PAGE_SIZE, GADDR_V(mem1));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2));
+	ASSERT_MEM_EQ(mem1, mem2, 2048);
+
+	/*
+	 * vcpu, mismatching keys on fetch, fetch protection override applies,
+	 * wraparound
+	 */
+	prepare_mem12();
+	MOP(t.vcpu, LOGICAL, WRITE, mem1, 2 * PAGE_SIZE, GADDR_V(guest_last_page));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048,
+			GADDR_V(guest_last_page), KEY(2));
+	ASSERT_MEM_EQ(mem1, mem2, 2048);
+
+out:
+	kvm_vm_free(t.kvm_vm);
+}
+
 static void guest_idle(void)
 {
 	GUEST_SYNC(STAGE_INITED); /* for consistency's sake */
@@ -335,17 +543,26 @@ static void test_errors(void)
 
 int main(int argc, char *argv[])
 {
-	int memop_cap;
+	int memop_cap, extension_cap;
 
 	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
 
 	memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP);
+	extension_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP_EXTENSION);
 	if (!memop_cap) {
 		print_skip("CAP_S390_MEM_OP not supported");
 		exit(KSFT_SKIP);
 	}
 
 	test_copy();
+	if (extension_cap > 0) {
+		test_copy_key();
+		test_copy_key_storage_prot_override();
+		test_copy_key_fetch_prot();
+		test_copy_key_fetch_prot_override();
+	} else {
+		print_skip("storage key memop extension not supported");
+	}
 	test_errors();
 
 	return 0;
-- 
2.32.0



* [PATCH v2 5/5] KVM: s390: selftests: Add error memop tests
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
                       ` (3 preceding siblings ...)
  2022-02-25 15:53     ` [PATCH v2 4/5] KVM: s390: selftests: Add more copy memop tests Janis Schoetterl-Glausch
@ 2022-02-25 15:53     ` Janis Schoetterl-Glausch
  2022-03-08 10:16     ` [PATCH v2 0/5] memop selftest for storage key checking Christian Borntraeger
  5 siblings, 0 replies; 39+ messages in thread
From: Janis Schoetterl-Glausch @ 2022-02-25 15:53 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Janis Schoetterl-Glausch, Thomas Huth, David Hildenbrand, kvm,
	linux-kselftest, linux-kernel

Test that errors occur if key protection disallows access, including
tests for storage and fetch protection override. Perform tests for both
logical vcpu and absolute vm ioctls.
Also extend the existing tests to the vm ioctl.
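
Two conventions the new assertions rely on: on a key-protection failure
the ioctl returns a positive value carrying the program interruption
code (4 is a protection exception), and fetch-protection override only
covers effective addresses 0 through 2047, so a 2048-byte fetch from
address 0 is the largest it can permit. A sketch of the resulting test
pattern (macros as introduced earlier in the series):

	/* one byte past the override window: expect a protection exception */
	rv = ERR_MOP(t.vcpu, LOGICAL, READ, mem2, 2048 + 1, GADDR_V(0), KEY(2));
	TEST_ASSERT(rv == 4, "Should result in protection exception");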

Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/memop.c | 136 +++++++++++++++++++---
 1 file changed, 123 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index 088d1cc61709..55048bfecdcb 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -422,6 +422,46 @@ static void test_copy_key_fetch_prot(void)
 	kvm_vm_free(t.kvm_vm);
 }
 
+#define ERR_PROT_MOP(...)							\
+({										\
+	int rv;									\
+										\
+	rv = ERR_MOP(__VA_ARGS__);						\
+	TEST_ASSERT(rv == 4, "Should result in protection exception");		\
+})
+
+static void test_errors_key(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm/vcpu, mismatching keys, fetch protection in effect */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+
+	kvm_vm_free(t.kvm_vm);
+}
+
+static void test_errors_key_storage_prot_override(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vm, mismatching keys, storage protection override not applicable to vm */
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2));
+
+	kvm_vm_free(t.kvm_vm);
+}
+
 const uint64_t last_page_addr = -PAGE_SIZE;
 
 static void guest_copy_key_fetch_prot_override(void)
@@ -481,6 +521,57 @@ static void test_copy_key_fetch_prot_override(void)
 	kvm_vm_free(t.kvm_vm);
 }
 
+static void test_errors_key_fetch_prot_override_not_enabled(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
+	guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/* vcpu, mismatching keys on fetch, fetch protection override not enabled */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(0), KEY(2));
+
+out:
+	kvm_vm_free(t.kvm_vm);
+}
+
+static void test_errors_key_fetch_prot_override_enabled(void)
+{
+	struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
+	vm_vaddr_t guest_0_page, guest_last_page;
+
+	guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
+	guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+	if (guest_0_page != 0 || guest_last_page != last_page_addr) {
+		print_skip("did not allocate guest pages at required positions");
+		goto out;
+	}
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+	t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
+	t.run->kvm_dirty_regs = KVM_SYNC_CRS;
+	HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
+
+	/*
+	 * vcpu, mismatching keys on fetch,
+	 * fetch protection override does not apply because the memory range is exceeded
+	 */
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048 + 1, GADDR_V(0), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048 + 1,
+				 GADDR_V(guest_last_page), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR(0), KEY(2));
+	CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2));
+
+out:
+	kvm_vm_free(t.kvm_vm);
+}
+
 static void guest_idle(void)
 {
 	GUEST_SYNC(STAGE_INITED); /* for consistency's sake */
@@ -488,39 +579,54 @@ static void guest_idle(void)
 		GUEST_SYNC(STAGE_IDLED);
 }
 
-static void test_errors(void)
+static void _test_errors_common(struct test_vcpu vcpu, enum mop_target target, int size)
 {
-	struct test_default t = test_default_init(guest_idle);
 	int rv;
 
-	HOST_SYNC(t.vcpu, STAGE_INITED);
-
 	/* Bad size: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, -1, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes");
 
 	/* Zero size: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, 0, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM),
 		    "ioctl allows 0 as size");
 
 	/* Bad flags: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1));
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), SET_FLAGS(-1));
 	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags");
 
-	/* Bad operation: */
-	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
-	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
-
 	/* Bad guest address: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY);
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR((void *)~0xfffUL), CHECK_ONLY);
 	TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access");
 
 	/* Bad host address: */
-	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1));
+	rv = ERR_MOP(vcpu, target, WRITE, 0, size, GADDR_V(mem1));
 	TEST_ASSERT(rv == -1 && errno == EFAULT,
 		    "ioctl does not report bad host memory address");
 
+	/* Bad key: */
+	rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), KEY(17));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows invalid key");
+}
+
+static void test_errors(void)
+{
+	struct test_default t = test_default_init(guest_idle);
+	int rv;
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	_test_errors_common(t.vcpu, LOGICAL, t.size);
+	_test_errors_common(t.vm, ABSOLUTE, t.size);
+
+	/* Bad operation: */
+	rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
+	/* virtual addresses are not translated when passing INVALID */
+	rv = ERR_MOP(t.vm, INVALID, WRITE, mem1, PAGE_SIZE, GADDR(0));
+	TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations");
+
 	/* Bad access register: */
 	t.run->psw_mask &= ~(3UL << (63 - 17));
 	t.run->psw_mask |= 1UL << (63 - 17);  /* Enable AR mode */
@@ -560,6 +666,10 @@ int main(int argc, char *argv[])
 		test_copy_key_storage_prot_override();
 		test_copy_key_fetch_prot();
 		test_copy_key_fetch_prot_override();
+		test_errors_key();
+		test_errors_key_storage_prot_override();
+		test_errors_key_fetch_prot_override_not_enabled();
+		test_errors_key_fetch_prot_override_enabled();
 	} else {
 		print_skip("storage key memop extension not supported");
 	}
-- 
2.32.0



* Re: [PATCH v2 0/5] memop selftest for storage key checking
  2022-02-25 15:53   ` [PATCH v2 0/5] memop selftest for storage key checking Janis Schoetterl-Glausch
                       ` (4 preceding siblings ...)
  2022-02-25 15:53     ` [PATCH v2 5/5] KVM: s390: selftests: Add error " Janis Schoetterl-Glausch
@ 2022-03-08 10:16     ` Christian Borntraeger
  2022-03-08 21:11       ` Shuah Khan
  5 siblings, 1 reply; 39+ messages in thread
From: Christian Borntraeger @ 2022-03-08 10:16 UTC (permalink / raw)
  To: Janis Schoetterl-Glausch, Janosch Frank, Claudio Imbrenda, Shuah Khan
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest, linux-kernel

Can you send this as a separate thread (not inside the original thread)? Otherwise tools like b4 get confused and people might also overlook this.


Am 25.02.22 um 16:53 schrieb Janis Schoetterl-Glausch:
> Refactor memop selftest and add tests.
> Add storage key tests, both for success as well as failure cases.
> Similarly test both vcpu and vm ioctls.
> 
> v1 -> v2
>   * restructure commits
>   * get rid of test_* wrapper functions that hid vm.vm
>   * minor changes
> 
> v0 -> v2
>   * complete rewrite
> 
> v1: https://lore.kernel.org/kvm/20220217145336.1794778-1-scgl@linux.ibm.com/
> v0: https://lore.kernel.org/kvm/20220211182215.2730017-11-scgl@linux.ibm.com/
> 
> Janis Schoetterl-Glausch (5):
>    KVM: s390: selftests: Split memop tests
>    KVM: s390: selftests: Add macro as abstraction for MEM_OP
>    KVM: s390: selftests: Add named stages for memop test
>    KVM: s390: selftests: Add more copy memop tests
>    KVM: s390: selftests: Add error memop tests
> 
>   tools/testing/selftests/kvm/s390x/memop.c | 734 ++++++++++++++++++----
>   1 file changed, 616 insertions(+), 118 deletions(-)
> 
> 
> base-commit: ee6a569d3bf64c9676eee3eecb861fb01cc11311


* Re: [PATCH v2 0/5] memop selftest for storage key checking
  2022-03-08 10:16     ` [PATCH v2 0/5] memop selftest for storage key checking Christian Borntraeger
@ 2022-03-08 21:11       ` Shuah Khan
  0 siblings, 0 replies; 39+ messages in thread
From: Shuah Khan @ 2022-03-08 21:11 UTC (permalink / raw)
  To: Christian Borntraeger, Janis Schoetterl-Glausch, Janosch Frank,
	Claudio Imbrenda, Shuah Khan
  Cc: Thomas Huth, David Hildenbrand, kvm, linux-kselftest,
	linux-kernel, Shuah Khan

On 3/8/22 3:16 AM, Christian Borntraeger wrote:
> Can you send this as a separate thread (not inside the original thread)? Otherwise tools like b4 get confused and people might also overlook this.
> 
> 

+1 - I missed the v2 until now.

> Am 25.02.22 um 16:53 schrieb Janis Schoetterl-Glausch:
>> Refactor memop selftest and add tests.
>> Add storage key tests, both for success as well as failure cases.
>> Similarly test both vcpu and vm ioctls.
>>
>> v1 -> v2
>>   * restructure commits
>>   * get rid of test_* wrapper functions that hid vm.vm
>>   * minor changes
>>

thanks,
-- Shuah


