* [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE
@ 2021-03-02 17:44 Claudio Imbrenda
  2021-03-02 17:44 ` [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective Claudio Imbrenda
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Claudio Imbrenda @ 2021-03-02 17:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: borntraeger, frankja, david, cohuck, kvm, linux-s390

The current handling of the MVPG instruction when executed in a nested
guest is wrong, and can lead to the nested guest hanging.

This patchset fixes the behaviour to be more architecturally correct,
and fixes the hangs observed.

v4->v5
* split kvm_s390_logical_to_effective so it can be reused for vSIE
* fix existing comments and add some more comments
* use the new split _kvm_s390_logical_to_effective in vsie_handle_mvpg

v3->v4
* added PEI_ prefix to DAT_PROT and NOT_PTE macros
* added small comment to explain what they are about

v2->v3
* improved some comments
* improved some variable and parameter names for increased readability
* fixed missing handling of page faults in the MVPG handler
* small readability improvements

v1->v2
* complete rewrite

Claudio Imbrenda (3):
  s390/kvm: split kvm_s390_logical_to_effective
  s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
  s390/kvm: VSIE: correctly handle MVPG when in VSIE

 arch/s390/kvm/gaccess.c |  30 ++++++++++--
 arch/s390/kvm/gaccess.h |  35 ++++++++++---
 arch/s390/kvm/vsie.c    | 106 ++++++++++++++++++++++++++++++++++++----
 3 files changed, 151 insertions(+), 20 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective
  2021-03-02 17:44 [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE Claudio Imbrenda
@ 2021-03-02 17:44 ` Claudio Imbrenda
  2021-03-05 15:08   ` Christian Borntraeger
  2021-03-02 17:44 ` [PATCH v5 2/3] s390/kvm: extend kvm_s390_shadow_fault to return entry pointer Claudio Imbrenda
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Claudio Imbrenda @ 2021-03-02 17:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: borntraeger, frankja, david, cohuck, kvm, linux-s390

Split kvm_s390_logical_to_effective into a generic function called
_kvm_s390_logical_to_effective. The new function takes a PSW and an address
and returns the address with the appropriate bits masked off. The old
function now calls the new function with the appropriate PSW from the vCPU.

This is needed to avoid code duplication for vSIE.
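[Editor's sketch, not part of the patch: the masking the new helper applies can be modelled outside of kernel context as below. The names are made up for this sketch; only the bit arithmetic mirrors the patch.]

```c
#include <assert.h>

/* Standalone model of the masking described above (hypothetical names,
 * not the kernel's psw_t plumbing): 64-bit mode keeps all address bits,
 * 31-bit mode keeps the low 31 bits, 24-bit mode the low 24 bits. */
enum amode { AMODE_24, AMODE_31, AMODE_64 };

static unsigned long logical_to_effective(enum amode mode, unsigned long ga)
{
	if (mode == AMODE_64)
		return ga;
	if (mode == AMODE_31)
		return ga & ((1UL << 31) - 1);
	return ga & ((1UL << 24) - 1);
}
```

Factoring the mode out of the vCPU makes the same masking usable with a vSIE shadow PSW, which is what patch 3 relies on.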

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/kvm/gaccess.h | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index f4c51756c462..107fdfd2eadd 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -36,6 +36,29 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
 	return gra;
 }
 
+/**
+ * _kvm_s390_logical_to_effective - convert guest logical to effective address
+ * @psw: psw of the guest
+ * @ga: guest logical address
+ *
+ * Convert a guest logical address to an effective address by applying the
+ * rules of the addressing mode defined by bits 31 and 32 of the given PSW
+ * (extended/basic addressing mode).
+ *
+ * Depending on the addressing mode, the upper 40 bits (24 bit addressing
+ * mode), 33 bits (31 bit addressing mode) or no bits (64 bit addressing
+ * mode) of @ga will be zeroed and the remaining bits will be returned.
+ */
+static inline unsigned long _kvm_s390_logical_to_effective(psw_t *psw,
+							   unsigned long ga)
+{
+	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
+		return ga;
+	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
+		return ga & ((1UL << 31) - 1);
+	return ga & ((1UL << 24) - 1);
+}
+
 /**
  * kvm_s390_logical_to_effective - convert guest logical to effective address
  * @vcpu: guest virtual cpu
@@ -54,11 +77,7 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
 {
 	psw_t *psw = &vcpu->arch.sie_block->gpsw;
 
-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
-		return ga;
-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
-		return ga & ((1UL << 31) - 1);
-	return ga & ((1UL << 24) - 1);
+	return _kvm_s390_logical_to_effective(psw, ga);
 }
 
 /*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v5 2/3] s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
  2021-03-02 17:44 [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE Claudio Imbrenda
  2021-03-02 17:44 ` [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective Claudio Imbrenda
@ 2021-03-02 17:44 ` Claudio Imbrenda
  2021-03-05 17:01   ` Christian Borntraeger
  2021-03-02 17:44 ` [PATCH v5 3/3] s390/kvm: VSIE: correctly handle MVPG when in VSIE Claudio Imbrenda
  2021-03-08 15:19 ` [PATCH v5 0/3] s390/kvm: fix " Christian Borntraeger
  3 siblings, 1 reply; 10+ messages in thread
From: Claudio Imbrenda @ 2021-03-02 17:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: borntraeger, frankja, david, cohuck, kvm, linux-s390, stable,
	Janosch Frank

Extend kvm_s390_shadow_fault to return the pointer to the valid leaf
DAT table entry, or to the invalid entry.

Also return some flags in the lower bits of the address:
PEI_DAT_PROT: indicates that DAT protection applies because of the
              protection bit in the segment (or, if EDAT, region) tables.
PEI_NOT_PTE: indicates that the address of the DAT table entry returned
             does not refer to a PTE, but to a segment or region table.
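[Editor's sketch, not part of the patch: because DAT table entries are 8-byte aligned, their addresses have three zero low bits, which is what lets the flags ride in the returned value. The helper names below are invented for illustration; only the PEI_* values mirror the patch.]

```c
#include <assert.h>

/* Flag values as defined by the patch; they fit in the alignment bits
 * of an 8-byte-aligned DAT entry address. */
#define PEI_DAT_PROT 2UL
#define PEI_NOT_PTE  4UL

/* Hypothetical helper: combine an entry address with the flags. */
static unsigned long pack_datptr(unsigned long entry_addr,
				 int dat_prot, int not_pte)
{
	return entry_addr | (dat_prot ? PEI_DAT_PROT : 0)
			  | (not_pte ? PEI_NOT_PTE : 0);
}

/* Hypothetical helper: recover the plain entry address. */
static unsigned long datptr_addr(unsigned long datptr)
{
	return datptr & ~7UL;	/* entries are 8-byte aligned */
}
```

The caller in patch 3 consumes the packed value directly as PEI information, so no unpacking is needed there.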

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: stable@vger.kernel.org
Reviewed-by: Janosch Frank <frankja@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/kvm/gaccess.c | 30 +++++++++++++++++++++++++-----
 arch/s390/kvm/gaccess.h |  6 +++++-
 arch/s390/kvm/vsie.c    |  8 ++++----
 3 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index 6d6b57059493..33a03e518640 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -976,7 +976,9 @@ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
  * kvm_s390_shadow_tables - walk the guest page table and create shadow tables
  * @sg: pointer to the shadow guest address space structure
  * @saddr: faulting address in the shadow gmap
- * @pgt: pointer to the page table address result
+ * @pgt: pointer to the beginning of the page table for the given address if
+ *       successful (return value 0), or to the first invalid DAT entry in
+ *       case of exceptions (return value > 0)
  * @fake: pgt references contiguous guest memory block, not a pgtable
  */
 static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
@@ -1034,6 +1036,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 			rfte.val = ptr;
 			goto shadow_r2t;
 		}
+		*pgt = ptr + vaddr.rfx * 8;
 		rc = gmap_read_table(parent, ptr + vaddr.rfx * 8, &rfte.val);
 		if (rc)
 			return rc;
@@ -1060,6 +1063,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 			rste.val = ptr;
 			goto shadow_r3t;
 		}
+		*pgt = ptr + vaddr.rsx * 8;
 		rc = gmap_read_table(parent, ptr + vaddr.rsx * 8, &rste.val);
 		if (rc)
 			return rc;
@@ -1087,6 +1091,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 			rtte.val = ptr;
 			goto shadow_sgt;
 		}
+		*pgt = ptr + vaddr.rtx * 8;
 		rc = gmap_read_table(parent, ptr + vaddr.rtx * 8, &rtte.val);
 		if (rc)
 			return rc;
@@ -1123,6 +1128,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
 			ste.val = ptr;
 			goto shadow_pgt;
 		}
+		*pgt = ptr + vaddr.sx * 8;
 		rc = gmap_read_table(parent, ptr + vaddr.sx * 8, &ste.val);
 		if (rc)
 			return rc;
@@ -1157,6 +1163,8 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
  * @vcpu: virtual cpu
  * @sg: pointer to the shadow guest address space structure
  * @saddr: faulting address in the shadow gmap
+ * @datptr: will contain the address of the faulting DAT table entry, or of
+ *          the valid leaf, plus some flags
  *
  * Returns: - 0 if the shadow fault was successfully resolved
  *	    - > 0 (pgm exception code) on exceptions while faulting
@@ -1165,11 +1173,11 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
  *	    - -ENOMEM if out of memory
  */
 int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
-			  unsigned long saddr)
+			  unsigned long saddr, unsigned long *datptr)
 {
 	union vaddress vaddr;
 	union page_table_entry pte;
-	unsigned long pgt;
+	unsigned long pgt = 0;
 	int dat_protection, fake;
 	int rc;
 
@@ -1191,8 +1199,20 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
 		pte.val = pgt + vaddr.px * PAGE_SIZE;
 		goto shadow_page;
 	}
-	if (!rc)
-		rc = gmap_read_table(sg->parent, pgt + vaddr.px * 8, &pte.val);
+
+	switch (rc) {
+	case PGM_SEGMENT_TRANSLATION:
+	case PGM_REGION_THIRD_TRANS:
+	case PGM_REGION_SECOND_TRANS:
+	case PGM_REGION_FIRST_TRANS:
+		pgt |= PEI_NOT_PTE;
+		break;
+	case 0:
+		pgt += vaddr.px * 8;
+		rc = gmap_read_table(sg->parent, pgt, &pte.val);
+	}
+	if (datptr)
+		*datptr = pgt | dat_protection * PEI_DAT_PROT;
 	if (!rc && pte.i)
 		rc = PGM_PAGE_TRANSLATION;
 	if (!rc && pte.z)
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index 107fdfd2eadd..df4014f09e26 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -378,7 +378,11 @@ void ipte_unlock(struct kvm_vcpu *vcpu);
 int ipte_lock_held(struct kvm_vcpu *vcpu);
 int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra);
 
+/* MVPG PEI indication bits */
+#define PEI_DAT_PROT 2
+#define PEI_NOT_PTE 4
+
 int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *shadow,
-			  unsigned long saddr);
+			  unsigned long saddr, unsigned long *datptr);
 
 #endif /* __KVM_S390_GACCESS_H */
diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index bd803e091918..78b604326016 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -620,10 +620,10 @@ static int map_prefix(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 	/* with mso/msl, the prefix lies at offset *mso* */
 	prefix += scb_s->mso;
 
-	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix);
+	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix, NULL);
 	if (!rc && (scb_s->ecb & ECB_TE))
 		rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
-					   prefix + PAGE_SIZE);
+					   prefix + PAGE_SIZE, NULL);
 	/*
 	 * We don't have to mprotect, we will be called for all unshadows.
 	 * SIE will detect if protection applies and trigger a validity.
@@ -914,7 +914,7 @@ static int handle_fault(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 				    current->thread.gmap_addr, 1);
 
 	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
-				   current->thread.gmap_addr);
+				   current->thread.gmap_addr, NULL);
 	if (rc > 0) {
 		rc = inject_fault(vcpu, rc,
 				  current->thread.gmap_addr,
@@ -936,7 +936,7 @@ static void handle_last_fault(struct kvm_vcpu *vcpu,
 {
 	if (vsie_page->fault_addr)
 		kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
-				      vsie_page->fault_addr);
+				      vsie_page->fault_addr, NULL);
 	vsie_page->fault_addr = 0;
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v5 3/3] s390/kvm: VSIE: correctly handle MVPG when in VSIE
  2021-03-02 17:44 [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE Claudio Imbrenda
  2021-03-02 17:44 ` [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective Claudio Imbrenda
  2021-03-02 17:44 ` [PATCH v5 2/3] s390/kvm: extend kvm_s390_shadow_fault to return entry pointer Claudio Imbrenda
@ 2021-03-02 17:44 ` Claudio Imbrenda
  2021-03-08 15:18   ` Christian Borntraeger
  2021-03-08 15:19 ` [PATCH v5 0/3] s390/kvm: fix " Christian Borntraeger
  3 siblings, 1 reply; 10+ messages in thread
From: Claudio Imbrenda @ 2021-03-02 17:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: borntraeger, frankja, david, cohuck, kvm, linux-s390, stable

Correctly handle the MVPG instruction when issued by a VSIE guest.
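[Editor's sketch, not part of the patch: the handler added below picks one of four outcomes per operand. A simplified model of that decision, outside kernel context — the PGM_* values are illustrative, and real program-check codes should be taken from the architecture:]

```c
#include <assert.h>

enum action { ACT_RETRY, ACT_DELIVER_PGM, ACT_FORWARD_PEI, ACT_INJECT_FAULT };

/* Illustrative program-interruption codes for this model only. */
#define PGM_OK                  0
#define PGM_PAGE_TRANSLATION 0x11
#define PGM_TRANSLATION_SPEC 0x12
#define PGM_ASCE_TYPE        0x38

/* Mirror of the rc_dest/rc_src handling in vsie_handle_mvpg below:
 * success retries the intercepted MVPG; translation-specification is
 * the one suppressing exception and is delivered directly; with EDAT
 * only ASCE-type exceptions are injected as faults (the rest become a
 * forwarded PEI intercept); without EDAT only page-translation
 * exceptions are forwarded as PEI. */
static enum action mvpg_action(int rc, int edat)
{
	if (rc == PGM_OK)
		return ACT_RETRY;
	if (rc == PGM_TRANSLATION_SPEC)
		return ACT_DELIVER_PGM;
	if (edat)
		return rc == PGM_ASCE_TYPE ? ACT_INJECT_FAULT : ACT_FORWARD_PEI;
	return rc == PGM_PAGE_TRANSLATION ? ACT_FORWARD_PEI : ACT_INJECT_FAULT;
}
```

In the actual patch the negative error codes and -EAGAIN races are handled before this point, and the forwarded PEI values come from kvm_s390_shadow_fault's new datptr output.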

Fixes: a3508fbe9dc6d ("KVM: s390: vsie: initial support for nested virtualization")
Cc: stable@vger.kernel.org
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/kvm/vsie.c | 98 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 93 insertions(+), 5 deletions(-)

diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
index 78b604326016..48aab6290a77 100644
--- a/arch/s390/kvm/vsie.c
+++ b/arch/s390/kvm/vsie.c
@@ -417,11 +417,6 @@ static void unshadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 		memcpy((void *)((u64)scb_o + 0xc0),
 		       (void *)((u64)scb_s + 0xc0), 0xf0 - 0xc0);
 		break;
-	case ICPT_PARTEXEC:
-		/* MVPG only */
-		memcpy((void *)((u64)scb_o + 0xc0),
-		       (void *)((u64)scb_s + 0xc0), 0xd0 - 0xc0);
-		break;
 	}
 
 	if (scb_s->ihcpu != 0xffffU)
@@ -983,6 +978,95 @@ static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 	return 0;
 }
 
+/*
+ * Get a register for a nested guest.
+ * @vcpu the vcpu of the guest
+ * @vsie_page the vsie_page for the nested guest
+ * @reg the register number, the upper 4 bits are ignored.
+ * returns: the value of the register.
+ */
+static u64 vsie_get_register(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, u8 reg)
+{
+	/* no need to validate the parameter and/or perform error handling */
+	reg &= 0xf;
+	switch (reg) {
+	case 15:
+		return vsie_page->scb_s.gg15;
+	case 14:
+		return vsie_page->scb_s.gg14;
+	default:
+		return vcpu->run->s.regs.gprs[reg];
+	}
+}
+
+static int vsie_handle_mvpg(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+{
+	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
+	unsigned long pei_dest, pei_src, src, dest, mask;
+	u64 *pei_block = &vsie_page->scb_o->mcic;
+	int edat, rc_dest, rc_src;
+	union ctlreg0 cr0;
+
+	cr0.val = vcpu->arch.sie_block->gcr[0];
+	edat = cr0.edat && test_kvm_facility(vcpu->kvm, 8);
+	mask = _kvm_s390_logical_to_effective(&scb_s->gpsw, PAGE_MASK);
+
+	dest = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 16) & mask;
+	src = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 20) & mask;
+
+	rc_dest = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, dest, &pei_dest);
+	rc_src = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, src, &pei_src);
+	/*
+	 * Either everything went well, or something non-critical went wrong
+	 * e.g. because of a race. In either case, simply retry.
+	 */
+	if (rc_dest == -EAGAIN || rc_src == -EAGAIN || (!rc_dest && !rc_src)) {
+		retry_vsie_icpt(vsie_page);
+		return -EAGAIN;
+	}
+	/* Something more serious went wrong, propagate the error */
+	if (rc_dest < 0)
+		return rc_dest;
+	if (rc_src < 0)
+		return rc_src;
+
+	/* The only possible suppressing exception: just deliver it */
+	if (rc_dest == PGM_TRANSLATION_SPEC || rc_src == PGM_TRANSLATION_SPEC) {
+		clear_vsie_icpt(vsie_page);
+		rc_dest = kvm_s390_inject_program_int(vcpu, PGM_TRANSLATION_SPEC);
+		WARN_ON_ONCE(rc_dest);
+		return 1;
+	}
+
+	/*
+	 * Forward the PEI intercept to the guest if it was a page fault, or
+	 * also for segment and region table faults if EDAT applies.
+	 */
+	if (edat) {
+		rc_dest = rc_dest == PGM_ASCE_TYPE ? rc_dest : 0;
+		rc_src = rc_src == PGM_ASCE_TYPE ? rc_src : 0;
+	} else {
+		rc_dest = rc_dest != PGM_PAGE_TRANSLATION ? rc_dest : 0;
+		rc_src = rc_src != PGM_PAGE_TRANSLATION ? rc_src : 0;
+	}
+	if (!rc_dest && !rc_src) {
+		pei_block[0] = pei_dest;
+		pei_block[1] = pei_src;
+		return 1;
+	}
+
+	retry_vsie_icpt(vsie_page);
+
+	/*
+	 * The host has edat, and the guest does not, or it was an ASCE type
+	 * exception. The host needs to inject the appropriate DAT interrupts
+	 * into the guest.
+	 */
+	if (rc_dest)
+		return inject_fault(vcpu, rc_dest, dest, 1);
+	return inject_fault(vcpu, rc_src, src, 0);
+}
+
 /*
  * Run the vsie on a shadow scb and a shadow gmap, without any further
  * sanity checks, handling SIE faults.
@@ -1071,6 +1155,10 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
 		if ((scb_s->ipa & 0xf000) != 0xf000)
 			scb_s->ipa += 0x1000;
 		break;
+	case ICPT_PARTEXEC:
+		if (scb_s->ipa == 0xb254)
+			rc = vsie_handle_mvpg(vcpu, vsie_page);
+		break;
 	}
 	return rc;
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective
  2021-03-02 17:44 ` [PATCH v5 1/3] s390/kvm: split kvm_s390_logical_to_effective Claudio Imbrenda
@ 2021-03-05 15:08   ` Christian Borntraeger
  0 siblings, 0 replies; 10+ messages in thread
From: Christian Borntraeger @ 2021-03-05 15:08 UTC (permalink / raw)
  To: Claudio Imbrenda, linux-kernel; +Cc: frankja, david, cohuck, kvm, linux-s390



On 02.03.21 18:44, Claudio Imbrenda wrote:
> Split kvm_s390_logical_to_effective into a generic function called
> _kvm_s390_logical_to_effective. The new function takes a PSW and an address
> and returns the address with the appropriate bits masked off. The old
> function now calls the new function with the appropriate PSW from the vCPU.
> 
> This is needed to avoid code duplication for vSIE.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>

We might need cc stable here as well for patch 3?

Otherwise this looks good.
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>

> ---
>   arch/s390/kvm/gaccess.h | 29 ++++++++++++++++++++++++-----
>   1 file changed, 24 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
> index f4c51756c462..107fdfd2eadd 100644
> --- a/arch/s390/kvm/gaccess.h
> +++ b/arch/s390/kvm/gaccess.h
> @@ -36,6 +36,29 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
>   	return gra;
>   }
>   
> +/**
> + * _kvm_s390_logical_to_effective - convert guest logical to effective address
> + * @psw: psw of the guest
> + * @ga: guest logical address
> + *
> + * Convert a guest logical address to an effective address by applying the
> + * rules of the addressing mode defined by bits 31 and 32 of the given PSW
> + * (extended/basic addressing mode).
> + *
> + * Depending on the addressing mode, the upper 40 bits (24 bit addressing
> + * mode), 33 bits (31 bit addressing mode) or no bits (64 bit addressing
> + * mode) of @ga will be zeroed and the remaining bits will be returned.
> + */
> +static inline unsigned long _kvm_s390_logical_to_effective(psw_t *psw,
> +							   unsigned long ga)
> +{
> +	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
> +		return ga;
> +	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
> +		return ga & ((1UL << 31) - 1);
> +	return ga & ((1UL << 24) - 1);
> +}
> +
>   /**
>    * kvm_s390_logical_to_effective - convert guest logical to effective address
>    * @vcpu: guest virtual cpu
> @@ -54,11 +77,7 @@ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
>   {
>   	psw_t *psw = &vcpu->arch.sie_block->gpsw;
>   
> -	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
> -		return ga;
> -	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
> -		return ga & ((1UL << 31) - 1);
> -	return ga & ((1UL << 24) - 1);
> +	return _kvm_s390_logical_to_effective(psw, ga);
>   }
>   
>   /*
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 2/3] s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
  2021-03-02 17:44 ` [PATCH v5 2/3] s390/kvm: extend kvm_s390_shadow_fault to return entry pointer Claudio Imbrenda
@ 2021-03-05 17:01   ` Christian Borntraeger
  0 siblings, 0 replies; 10+ messages in thread
From: Christian Borntraeger @ 2021-03-05 17:01 UTC (permalink / raw)
  To: Claudio Imbrenda, linux-kernel
  Cc: frankja, david, cohuck, kvm, linux-s390, stable, Janosch Frank



On 02.03.21 18:44, Claudio Imbrenda wrote:
> Extend kvm_s390_shadow_fault to return the pointer to the valid leaf
> DAT table entry, or to the invalid entry.
> 
> Also return some flags in the lower bits of the address:
> PEI_DAT_PROT: indicates that DAT protection applies because of the
>                protection bit in the segment (or, if EDAT, region) tables.
> PEI_NOT_PTE: indicates that the address of the DAT table entry returned
>               does not refer to a PTE, but to a segment or region table.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Cc: stable@vger.kernel.org
> Reviewed-by: Janosch Frank <frankja@de.ibm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 3/3] s390/kvm: VSIE: correctly handle MVPG when in VSIE
  2021-03-02 17:44 ` [PATCH v5 3/3] s390/kvm: VSIE: correctly handle MVPG when in VSIE Claudio Imbrenda
@ 2021-03-08 15:18   ` Christian Borntraeger
  0 siblings, 0 replies; 10+ messages in thread
From: Christian Borntraeger @ 2021-03-08 15:18 UTC (permalink / raw)
  To: Claudio Imbrenda, linux-kernel
  Cc: frankja, david, cohuck, kvm, linux-s390, stable



On 02.03.21 18:44, Claudio Imbrenda wrote:
> Correctly handle the MVPG instruction when issued by a VSIE guest.
> 
> Fixes: a3508fbe9dc6d ("KVM: s390: vsie: initial support for nested virtualization")
> Cc: stable@vger.kernel.org
> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Acked-by: Janosch Frank <frankja@linux.ibm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>

looks sane.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>

> ---
>   arch/s390/kvm/vsie.c | 98 +++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 93 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
> index 78b604326016..48aab6290a77 100644
> --- a/arch/s390/kvm/vsie.c
> +++ b/arch/s390/kvm/vsie.c
> @@ -417,11 +417,6 @@ static void unshadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   		memcpy((void *)((u64)scb_o + 0xc0),
>   		       (void *)((u64)scb_s + 0xc0), 0xf0 - 0xc0);
>   		break;
> -	case ICPT_PARTEXEC:
> -		/* MVPG only */
> -		memcpy((void *)((u64)scb_o + 0xc0),
> -		       (void *)((u64)scb_s + 0xc0), 0xd0 - 0xc0);
> -		break;
>   	}
> 
>   	if (scb_s->ihcpu != 0xffffU)
> @@ -983,6 +978,95 @@ static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   	return 0;
>   }
> 
> +/*
> + * Get a register for a nested guest.
> + * @vcpu the vcpu of the guest
> + * @vsie_page the vsie_page for the nested guest
> + * @reg the register number, the upper 4 bits are ignored.
> + * returns: the value of the register.
> + */
> +static u64 vsie_get_register(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, u8 reg)
> +{
> +	/* no need to validate the parameter and/or perform error handling */
> +	reg &= 0xf;
> +	switch (reg) {
> +	case 15:
> +		return vsie_page->scb_s.gg15;
> +	case 14:
> +		return vsie_page->scb_s.gg14;
> +	default:
> +		return vcpu->run->s.regs.gprs[reg];
> +	}
> +}
> +
> +static int vsie_handle_mvpg(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
> +{
> +	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
> +	unsigned long pei_dest, pei_src, src, dest, mask;
> +	u64 *pei_block = &vsie_page->scb_o->mcic;
> +	int edat, rc_dest, rc_src;
> +	union ctlreg0 cr0;
> +
> +	cr0.val = vcpu->arch.sie_block->gcr[0];
> +	edat = cr0.edat && test_kvm_facility(vcpu->kvm, 8);
> +	mask = _kvm_s390_logical_to_effective(&scb_s->gpsw, PAGE_MASK);
> +
> +	dest = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 16) & mask;
> +	src = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 20) & mask;
> +
> +	rc_dest = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, dest, &pei_dest);
> +	rc_src = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, src, &pei_src);
> +	/*
> +	 * Either everything went well, or something non-critical went wrong
> +	 * e.g. because of a race. In either case, simply retry.
> +	 */
> +	if (rc_dest == -EAGAIN || rc_src == -EAGAIN || (!rc_dest && !rc_src)) {
> +		retry_vsie_icpt(vsie_page);
> +		return -EAGAIN;
> +	}
> +	/* Something more serious went wrong, propagate the error */
> +	if (rc_dest < 0)
> +		return rc_dest;
> +	if (rc_src < 0)
> +		return rc_src;
> +
> +	/* The only possible suppressing exception: just deliver it */
> +	if (rc_dest == PGM_TRANSLATION_SPEC || rc_src == PGM_TRANSLATION_SPEC) {
> +		clear_vsie_icpt(vsie_page);
> +		rc_dest = kvm_s390_inject_program_int(vcpu, PGM_TRANSLATION_SPEC);
> +		WARN_ON_ONCE(rc_dest);
> +		return 1;
> +	}
> +
> +	/*
> +	 * Forward the PEI intercept to the guest if it was a page fault, or
> +	 * also for segment and region table faults if EDAT applies.
> +	 */
> +	if (edat) {
> +		rc_dest = rc_dest == PGM_ASCE_TYPE ? rc_dest : 0;
> +		rc_src = rc_src == PGM_ASCE_TYPE ? rc_src : 0;
> +	} else {
> +		rc_dest = rc_dest != PGM_PAGE_TRANSLATION ? rc_dest : 0;
> +		rc_src = rc_src != PGM_PAGE_TRANSLATION ? rc_src : 0;
> +	}
> +	if (!rc_dest && !rc_src) {
> +		pei_block[0] = pei_dest;
> +		pei_block[1] = pei_src;
> +		return 1;
> +	}
> +
> +	retry_vsie_icpt(vsie_page);
> +
> +	/*
> +	 * The host has edat, and the guest does not, or it was an ASCE type
> +	 * exception. The host needs to inject the appropriate DAT interrupts
> +	 * into the guest.
> +	 */
> +	if (rc_dest)
> +		return inject_fault(vcpu, rc_dest, dest, 1);
> +	return inject_fault(vcpu, rc_src, src, 0);
> +}
> +
>   /*
>    * Run the vsie on a shadow scb and a shadow gmap, without any further
>    * sanity checks, handling SIE faults.
> @@ -1071,6 +1155,10 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
>   		if ((scb_s->ipa & 0xf000) != 0xf000)
>   			scb_s->ipa += 0x1000;
>   		break;
> +	case ICPT_PARTEXEC:
> +		if (scb_s->ipa == 0xb254)
> +			rc = vsie_handle_mvpg(vcpu, vsie_page);
> +		break;
>   	}
>   	return rc;
>   }
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE
  2021-03-02 17:44 [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE Claudio Imbrenda
                   ` (2 preceding siblings ...)
  2021-03-02 17:44 ` [PATCH v5 3/3] s390/kvm: VSIE: correctly handle MVPG when in VSIE Claudio Imbrenda
@ 2021-03-08 15:19 ` Christian Borntraeger
  2021-03-08 15:26   ` Janosch Frank
  3 siblings, 1 reply; 10+ messages in thread
From: Christian Borntraeger @ 2021-03-08 15:19 UTC (permalink / raw)
  To: Claudio Imbrenda, linux-kernel; +Cc: frankja, david, cohuck, kvm, linux-s390

On 02.03.21 18:44, Claudio Imbrenda wrote:
> The current handling of the MVPG instruction when executed in a nested
> guest is wrong, and can lead to the nested guest hanging.
> 
> This patchset fixes the behaviour to be more architecturally correct,
> and fixes the hangs observed.
> 
> v4->v5
> * split kvm_s390_logical_to_effective so it can be reused for vSIE
> * fix existing comments and add some more comments
> * use the new split _kvm_s390_logical_to_effective in vsie_handle_mvpg
> 
> v3->v4
> * added PEI_ prefix to DAT_PROT and NOT_PTE macros
> * added small comment to explain what they are about
> 
> v2->v3
> * improved some comments
> * improved some variable and parameter names for increased readability
> * fixed missing handling of page faults in the MVPG handler
> * small readability improvements
> 
> v1->v2
> * complete rewrite


queued (with small fixups) for kvms390. Still not sure if this will land in master or next.
Opinions?
> 
> Claudio Imbrenda (3):
>    s390/kvm: split kvm_s390_logical_to_effective
>    s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
>    s390/kvm: VSIE: correctly handle MVPG when in VSIE
> 
>   arch/s390/kvm/gaccess.c |  30 ++++++++++--
>   arch/s390/kvm/gaccess.h |  35 ++++++++++---
>   arch/s390/kvm/vsie.c    | 106 ++++++++++++++++++++++++++++++++++++----
>   3 files changed, 151 insertions(+), 20 deletions(-)
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE
  2021-03-08 15:19 ` [PATCH v5 0/3] s390/kvm: fix " Christian Borntraeger
@ 2021-03-08 15:26   ` Janosch Frank
  2021-03-08 15:30     ` Claudio Imbrenda
  0 siblings, 1 reply; 10+ messages in thread
From: Janosch Frank @ 2021-03-08 15:26 UTC (permalink / raw)
  To: Christian Borntraeger, Claudio Imbrenda, linux-kernel
  Cc: david, cohuck, kvm, linux-s390

On 3/8/21 4:19 PM, Christian Borntraeger wrote:
> On 02.03.21 18:44, Claudio Imbrenda wrote:
>> The current handling of the MVPG instruction when executed in a nested
>> guest is wrong, and can lead to the nested guest hanging.
>>
>> This patchset fixes the behaviour to be more architecturally correct,
>> and fixes the hangs observed.
>>
>> v4->v5
>> * split kvm_s390_logical_to_effective so it can be reused for vSIE
>> * fix existing comments and add some more comments
>> * use the new split _kvm_s390_logical_to_effective in vsie_handle_mvpg
>>
>> v3->v4
>> * added PEI_ prefix to DAT_PROT and NOT_PTE macros
>> * added small comment to explain what they are about
>>
>> v2->v3
>> * improved some comments
>> * improved some variable and parameter names for increased readability
>> * fixed missing handling of page faults in the MVPG handler
>> * small readability improvements
>>
>> v1->v2
>> * complete rewrite
> 
> 
> queued (with small fixups) for kvms390. Still not sure if this will land in master or next.
> Opinions?

I'd go for the next merge window

>>
>> Claudio Imbrenda (3):
>>    s390/kvm: split kvm_s390_logical_to_effective
>>    s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
>>    s390/kvm: VSIE: correctly handle MVPG when in VSIE
>>
>>   arch/s390/kvm/gaccess.c |  30 ++++++++++--
>>   arch/s390/kvm/gaccess.h |  35 ++++++++++---
>>   arch/s390/kvm/vsie.c    | 106 ++++++++++++++++++++++++++++++++++++----
>>   3 files changed, 151 insertions(+), 20 deletions(-)
>>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 0/3] s390/kvm: fix MVPG when in VSIE
  2021-03-08 15:26   ` Janosch Frank
@ 2021-03-08 15:30     ` Claudio Imbrenda
  0 siblings, 0 replies; 10+ messages in thread
From: Claudio Imbrenda @ 2021-03-08 15:30 UTC (permalink / raw)
  To: Janosch Frank
  Cc: Christian Borntraeger, linux-kernel, david, cohuck, kvm, linux-s390

On Mon, 8 Mar 2021 16:26:58 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 3/8/21 4:19 PM, Christian Borntraeger wrote:
> > On 02.03.21 18:44, Claudio Imbrenda wrote:  
> >> The current handling of the MVPG instruction when executed in a
> >> nested guest is wrong, and can lead to the nested guest hanging.
> >>
> >> This patchset fixes the behaviour to be more architecturally
> >> correct, and fixes the hangs observed.
> >>
> >> v4->v5
> >> * split kvm_s390_logical_to_effective so it can be reused for vSIE
> >> * fix existing comments and add some more comments
> >> * use the new split _kvm_s390_logical_to_effective in
> >> vsie_handle_mvpg
> >>
> >> v3->v4
> >> * added PEI_ prefix to DAT_PROT and NOT_PTE macros
> >> * added small comment to explain what they are about
> >>
> >> v2->v3
> >> * improved some comments
> >> * improved some variable and parameter names for increased
> >> readability
> >> * fixed missing handling of page faults in the MVPG handler
> >> * small readability improvements
> >>
> >> v1->v2
> >> * complete rewrite  
> > 
> > 
> > queued (with small fixups) for kvms390. Still not sure if this will
> > land in master or next. Opinions?  
> 
> I'd go for the next merge window

I agree

> >>
> >> Claudio Imbrenda (3):
> >>    s390/kvm: split kvm_s390_logical_to_effective
> >>    s390/kvm: extend kvm_s390_shadow_fault to return entry pointer
> >>    s390/kvm: VSIE: correctly handle MVPG when in VSIE
> >>
> >>   arch/s390/kvm/gaccess.c |  30 ++++++++++--
> >>   arch/s390/kvm/gaccess.h |  35 ++++++++++---
> >>   arch/s390/kvm/vsie.c    | 106
> >> ++++++++++++++++++++++++++++++++++++---- 3 files changed, 151
> >> insertions(+), 20 deletions(-) 
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread
