* [PATCH v5 0/2] KVM: PPC: Book3S HV: Nested guest state sanitising changes
@ 2021-07-26 20:17 ` Fabiano Rosas
  0 siblings, 0 replies; 12+ messages in thread
From: Fabiano Rosas @ 2021-07-26 20:17 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, npiggin

This series aims to stop contaminating the l2_hv structure with bits
that might have come from L1 state.

Patch 1 makes l2_hv read-only (mostly). It is now only changed when we
explicitly want to pass information to L1.

Patch 2 makes sure that L1 is not forwarded HFU interrupts when the
host has decided to disable any facilities (theoretical for now, since
HFSCR bits are always the same between L1/Ln).
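
As a rough illustration of the rule patch 1 enforces: L2's effective HFSCR is the intersection of what L1 asked for and what the host granted L1, with the Cause field passed through. This is a standalone sketch, not kernel code; the mask constant is assumed to mirror the kernel's HFSCR_INTR_CAUSE definition (top byte of the register):

```c
#include <stdint.h>

/* Assumed to mirror the kernel's HFSCR_INTR_CAUSE: the interrupt
 * cause lives in the top byte of the HFSCR. */
#define HFSCR_INTR_CAUSE 0xff00000000000000ULL

/* L2 may only enable facilities the host granted L1; the Cause
 * field is carried through untouched. */
static uint64_t effective_l2_hfscr(uint64_t l2_request, uint64_t l1_hfscr)
{
	return l2_request & (HFSCR_INTR_CAUSE | l1_hfscr);
}
```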

Changes since v4:
- moved setting of the Cause bits under BOOK3S_INTERRUPT_H_FAC_UNAVAIL.

v4:

- now passing lpcr separately into load_l2_hv_regs to solve the
  conflict with commit a19b70abc69a ("KVM: PPC: Book3S HV: Nested move
  LPCR sanitising to sanitise_hv_regs");

- patch 2 now forwards a HEAI instead of injecting a Program.

https://lkml.kernel.org/r/20210722221240.2384655-1-farosas@linux.ibm.com

v3:

- removed the sanitise functions;
- moved the entry code into a new load_l2_hv_regs and the exit code
  into the existing save_hv_return_state;
- new patch: removes the cause bits when L0 has disabled the
  corresponding facility.

https://lkml.kernel.org/r/20210415230948.3563415-1-farosas@linux.ibm.com

v2:

- made the change more generic; it no longer applies only to the hfscr;
- sanitisation is now done directly on the vcpu struct, l2_hv is left
  unchanged.

https://lkml.kernel.org/r/20210406214645.3315819-1-farosas@linux.ibm.com

v1:
https://lkml.kernel.org/r/20210305231055.2913892-1-farosas@linux.ibm.com

Fabiano Rosas (2):
  KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path
  KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1

 arch/powerpc/kvm/book3s_hv_nested.c | 118 ++++++++++++++++------------
 1 file changed, 68 insertions(+), 50 deletions(-)

-- 
2.29.2



* [PATCH v5 1/2] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path
  2021-07-26 20:17 ` Fabiano Rosas
@ 2021-07-26 20:17   ` Fabiano Rosas
  -1 siblings, 0 replies; 12+ messages in thread
From: Fabiano Rosas @ 2021-07-26 20:17 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, npiggin

As one of the arguments of the H_ENTER_NESTED hypercall, the nested
hypervisor (L1) prepares a structure containing the values of various
hypervisor-privileged registers with which it wants the nested guest
(L2) to run. Since the nested HV runs in supervisor mode, it needs the
host to write to these registers on its behalf.

To stop a nested HV from manipulating this mechanism and using a
nested guest as a proxy to access a facility that has been made
unavailable to it, we have a routine that sanitises the values of the
HV registers before copying them into the nested guest's vcpu struct.

However, when coming out of the guest the values are copied as they
were back into L1 memory, which means that any sanitisation we did
during guest entry will be exposed to L1 after H_ENTER_NESTED returns.

This patch changes the sanitisation to act on the vcpu->arch
registers directly, before entering and after exiting the guest,
leaving the structure that is copied back into L1 unchanged (except
when we really want L1 to see the value, e.g. the Cause bits of
HFSCR).
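
The LPCR handling referenced above is a plain merge-under-mask: host bits win everywhere except a whitelist of bits that L1 may set. A standalone sketch of the pattern (the mask values in the test are placeholders, not the kernel's LPCR bit definitions):

```c
#include <stdint.h>

/* Merge-under-mask: take the host value, but let the L1-supplied
 * value override only the bits listed in 'allowed'. */
static uint64_t merge_under_mask(uint64_t host, uint64_t l1, uint64_t allowed)
{
	return (host & ~allowed) | (l1 & allowed);
}
```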

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kvm/book3s_hv_nested.c | 94 ++++++++++++++---------------
 1 file changed, 46 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 8543ad538b0c..8215dbd4be9a 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -105,7 +105,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 
 	hr->dpdes = vc->dpdes;
-	hr->hfscr = vcpu->arch.hfscr;
 	hr->purr = vcpu->arch.purr;
 	hr->spurr = vcpu->arch.spurr;
 	hr->ic = vcpu->arch.ic;
@@ -128,55 +127,17 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	case BOOK3S_INTERRUPT_H_INST_STORAGE:
 		hr->asdr = vcpu->arch.fault_gpa;
 		break;
+	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
+			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
+		break;
 	case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
 		hr->heir = vcpu->arch.emul_inst;
 		break;
 	}
 }
 
-/*
- * This can result in some L0 HV register state being leaked to an L1
- * hypervisor when the hv_guest_state is copied back to the guest after
- * being modified here.
- *
- * There is no known problem with such a leak, and in many cases these
- * register settings could be derived by the guest by observing behaviour
- * and timing, interrupts, etc., but it is an issue to consider.
- */
-static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
-{
-	struct kvmppc_vcore *vc = vcpu->arch.vcore;
-	u64 mask;
-
-	/*
-	 * Don't let L1 change LPCR bits for the L2 except these:
-	 */
-	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
-		LPCR_LPES | LPCR_MER;
-
-	/*
-	 * Additional filtering is required depending on hardware
-	 * and configuration.
-	 */
-	hr->lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
-			(vc->lpcr & ~mask) | (hr->lpcr & mask));
-
-	/*
-	 * Don't let L1 enable features for L2 which we've disabled for L1,
-	 * but preserve the interrupt cause field.
-	 */
-	hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
-
-	/* Don't let data address watchpoint match in hypervisor state */
-	hr->dawrx0 &= ~DAWRX_HYP;
-	hr->dawrx1 &= ~DAWRX_HYP;
-
-	/* Don't let completed instruction address breakpt match in HV state */
-	if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
-		hr->ciabr &= ~CIABR_PRIV;
-}
-
-static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *hr)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 
@@ -288,6 +249,43 @@ static int kvmhv_write_guest_state_and_regs(struct kvm_vcpu *vcpu,
 				     sizeof(struct pt_regs));
 }
 
+static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
+			    const struct hv_guest_state *l2_hv,
+			    const struct hv_guest_state *l1_hv, u64 *lpcr)
+{
+	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+	u64 mask;
+
+	restore_hv_regs(vcpu, l2_hv);
+
+	/*
+	 * Don't let L1 change LPCR bits for the L2 except these:
+	 */
+	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+		LPCR_LPES | LPCR_MER;
+
+	/*
+	 * Additional filtering is required depending on hardware
+	 * and configuration.
+	 */
+	*lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
+				      (vc->lpcr & ~mask) | (*lpcr & mask));
+
+	/*
+	 * Don't let L1 enable features for L2 which we've disabled for L1,
+	 * but preserve the interrupt cause field.
+	 */
+	vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | l1_hv->hfscr);
+
+	/* Don't let data address watchpoint match in hypervisor state */
+	vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP;
+	vcpu->arch.dawrx1 = l2_hv->dawrx1 & ~DAWRX_HYP;
+
+	/* Don't let completed instruction address breakpt match in HV state */
+	if ((l2_hv->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
+		vcpu->arch.ciabr = l2_hv->ciabr & ~CIABR_PRIV;
+}
+
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 {
 	long int err, r;
@@ -296,7 +294,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	struct hv_guest_state l2_hv = {0}, saved_l1_hv;
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 	u64 hv_ptr, regs_ptr;
-	u64 hdec_exp;
+	u64 hdec_exp, lpcr;
 	s64 delta_purr, delta_spurr, delta_ic, delta_vtb;
 
 	if (vcpu->kvm->arch.l1_ptcr == 0)
@@ -349,8 +347,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	/* Guest must always run with ME enabled, HV disabled. */
 	vcpu->arch.shregs.msr = (vcpu->arch.regs.msr | MSR_ME) & ~MSR_HV;
 
-	sanitise_hv_regs(vcpu, &l2_hv);
-	restore_hv_regs(vcpu, &l2_hv);
+	lpcr = l2_hv.lpcr;
+	load_l2_hv_regs(vcpu, &l2_hv, &saved_l1_hv, &lpcr);
 
 	vcpu->arch.ret = RESUME_GUEST;
 	vcpu->arch.trap = 0;
@@ -360,7 +358,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 			r = RESUME_HOST;
 			break;
 		}
-		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, l2_hv.lpcr);
+		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
 	} while (is_kvmppc_resume_guest(r));
 
 	/* save L2 state for return */
-- 
2.29.2



* [PATCH v5 2/2] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
  2021-07-26 20:17 ` Fabiano Rosas
@ 2021-07-26 20:17   ` Fabiano Rosas
  -1 siblings, 0 replies; 12+ messages in thread
From: Fabiano Rosas @ 2021-07-26 20:17 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, npiggin

If the nested hypervisor has no access to a facility because it has
been disabled by the host, it should also not be able to see the
Hypervisor Facility Unavailable interrupt that arises when one of its
guests tries to access the facility.

This patch turns an HFU that happened in L2 into a Hypervisor Emulation
Assistance interrupt and forwards it to L1 for handling. The ones that
happened because L1 explicitly disabled the facility for L2 are still
let through, along with the corresponding Cause bits in the HFSCR.
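
The routing decision can be sketched in isolation: the top byte of the vcpu's HFSCR names the facility that caused the interrupt, and the L1-supplied HFSCR tells us whether L1 itself disabled it. This is an illustrative model of the logic in the patch, not kernel code:

```c
#include <stdint.h>

/* Returns 1 when the HFU should be forwarded to L1 as-is (L1 itself
 * disabled the facility for L2), 0 when it must be converted to a
 * HEAI (L1 set the facility bit, so the host must have disabled it). */
static int forward_hfu_as_is(uint64_t l1_requested_hfscr, uint64_t vcpu_hfscr)
{
	uint64_t cause = vcpu_hfscr >> 56;	/* facility number, 0..63 */

	return (l1_requested_hfscr & (1ULL << cause)) == 0;
}
```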

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kvm/book3s_hv_nested.c | 32 +++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 8215dbd4be9a..d544b092b49a 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
 	hr->dawrx1 = swab64(hr->dawrx1);
 }
 
-static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
+static void save_hv_return_state(struct kvm_vcpu *vcpu,
 				 struct hv_guest_state *hr)
 {
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -118,7 +118,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 	hr->pidr = vcpu->arch.pid;
 	hr->cfar = vcpu->arch.cfar;
 	hr->ppr = vcpu->arch.ppr;
-	switch (trap) {
+	switch (vcpu->arch.trap) {
 	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
 		hr->hdar = vcpu->arch.fault_dar;
 		hr->hdsisr = vcpu->arch.fault_dsisr;
@@ -128,9 +128,29 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
 		hr->asdr = vcpu->arch.fault_gpa;
 		break;
 	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
-		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
-			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
-		break;
+	{
+		u8 cause = vcpu->arch.hfscr >> 56;
+
+		WARN_ON_ONCE(cause >= BITS_PER_LONG);
+
+		if (!(hr->hfscr & (1UL << cause))) {
+			hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
+				     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
+			break;
+		}
+
+		/*
+		 * We have disabled this facility, so it does not
+		 * exist from L1's perspective. Turn it into a HEAI.
+		 */
+		vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
+		kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);
+
+		/* Don't leak the cause field */
+		hr->hfscr &= ~HFSCR_INTR_CAUSE;
+
+		fallthrough;
+	}
 	case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
 		hr->heir = vcpu->arch.emul_inst;
 		break;
@@ -368,7 +388,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	delta_spurr = vcpu->arch.spurr - l2_hv.spurr;
 	delta_ic = vcpu->arch.ic - l2_hv.ic;
 	delta_vtb = vc->vtb - l2_hv.vtb;
-	save_hv_return_state(vcpu, vcpu->arch.trap, &l2_hv);
+	save_hv_return_state(vcpu, &l2_hv);
 
 	/* restore L1 state */
 	vcpu->arch.nested = NULL;
-- 
2.29.2



* Re: [PATCH v5 2/2] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
  2021-07-26 20:17   ` Fabiano Rosas
@ 2021-07-27  3:09     ` Nicholas Piggin
  -1 siblings, 0 replies; 12+ messages in thread
From: Nicholas Piggin @ 2021-07-27  3:09 UTC (permalink / raw)
  To: Fabiano Rosas, kvm-ppc; +Cc: linuxppc-dev

Excerpts from Fabiano Rosas's message of July 27, 2021 6:17 am:
> If the nested hypervisor has no access to a facility because it has
> been disabled by the host, it should also not be able to see the
> Hypervisor Facility Unavailable that arises from one of its guests
> trying to access the facility.
> 
> This patch turns a HFU that happened in L2 into a Hypervisor Emulation
> Assistance interrupt and forwards it to L1 for handling. The ones that
> happened because L1 explicitly disabled the facility for L2 are still
> let through, along with the corresponding Cause bits in the HFSCR.
> 
> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/kvm/book3s_hv_nested.c | 32 +++++++++++++++++++++++------
>  1 file changed, 26 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 8215dbd4be9a..d544b092b49a 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
>  	hr->dawrx1 = swab64(hr->dawrx1);
>  }
>  
> -static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
> +static void save_hv_return_state(struct kvm_vcpu *vcpu,
>  				 struct hv_guest_state *hr)
>  {
>  	struct kvmppc_vcore *vc = vcpu->arch.vcore;
> @@ -118,7 +118,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>  	hr->pidr = vcpu->arch.pid;
>  	hr->cfar = vcpu->arch.cfar;
>  	hr->ppr = vcpu->arch.ppr;
> -	switch (trap) {
> +	switch (vcpu->arch.trap) {
>  	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
>  		hr->hdar = vcpu->arch.fault_dar;
>  		hr->hdsisr = vcpu->arch.fault_dsisr;
> @@ -128,9 +128,29 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>  		hr->asdr = vcpu->arch.fault_gpa;
>  		break;
>  	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
> -		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
> -			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
> -		break;
> +	{
> +		u8 cause = vcpu->arch.hfscr >> 56;

Can this be u64 just to help gcc?

> +
> +		WARN_ON_ONCE(cause >= BITS_PER_LONG);
> +
> +		if (!(hr->hfscr & (1UL << cause))) {
> +			hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
> +				     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
> +			break;
> +		}
> +
> +		/*
> +		 * We have disabled this facility, so it does not
> +		 * exist from L1's perspective. Turn it into a HEAI.
> +		 */
> +		vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
> +		kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);

Hmm, this doesn't handle kvmppc_load_last_inst failure. Other code tends
to just resume the guest and retry in this case. Can we do that here?

> +
> +		/* Don't leak the cause field */
> +		hr->hfscr &= ~HFSCR_INTR_CAUSE;

This hunk also remains -- shouldn't change HFSCR for HEA, only HFAC.

Thanks,
Nick



* Re: [PATCH v5 2/2] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
  2021-07-27  3:09     ` Nicholas Piggin
@ 2021-07-27 14:36       ` Fabiano Rosas
  -1 siblings, 0 replies; 12+ messages in thread
From: Fabiano Rosas @ 2021-07-27 14:36 UTC (permalink / raw)
  To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev

Nicholas Piggin <npiggin@gmail.com> writes:

> Excerpts from Fabiano Rosas's message of July 27, 2021 6:17 am:
>> If the nested hypervisor has no access to a facility because it has
>> been disabled by the host, it should also not be able to see the
>> Hypervisor Facility Unavailable that arises from one of its guests
>> trying to access the facility.
>> 
>> This patch turns a HFU that happened in L2 into a Hypervisor Emulation
>> Assistance interrupt and forwards it to L1 for handling. The ones that
>> happened because L1 explicitly disabled the facility for L2 are still
>> let through, along with the corresponding Cause bits in the HFSCR.
>> 
>> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
>> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/kvm/book3s_hv_nested.c | 32 +++++++++++++++++++++++------
>>  1 file changed, 26 insertions(+), 6 deletions(-)
>> 
>> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
>> index 8215dbd4be9a..d544b092b49a 100644
>> --- a/arch/powerpc/kvm/book3s_hv_nested.c
>> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
>> @@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
>>  	hr->dawrx1 = swab64(hr->dawrx1);
>>  }
>>  
>> -static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>> +static void save_hv_return_state(struct kvm_vcpu *vcpu,
>>  				 struct hv_guest_state *hr)
>>  {
>>  	struct kvmppc_vcore *vc = vcpu->arch.vcore;
>> @@ -118,7 +118,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>>  	hr->pidr = vcpu->arch.pid;
>>  	hr->cfar = vcpu->arch.cfar;
>>  	hr->ppr = vcpu->arch.ppr;
>> -	switch (trap) {
>> +	switch (vcpu->arch.trap) {
>>  	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
>>  		hr->hdar = vcpu->arch.fault_dar;
>>  		hr->hdsisr = vcpu->arch.fault_dsisr;
>> @@ -128,9 +128,29 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>>  		hr->asdr = vcpu->arch.fault_gpa;
>>  		break;
>>  	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
>> -		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
>> -			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
>> -		break;
>> +	{
>> +		u8 cause = vcpu->arch.hfscr >> 56;
>
> Can this be u64 just to help gcc?
>

Yes.

>> +
>> +		WARN_ON_ONCE(cause >= BITS_PER_LONG);
>> +
>> +		if (!(hr->hfscr & (1UL << cause))) {
>> +			hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
>> +				     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
>> +			break;
>> +		}
>> +
>> +		/*
>> +		 * We have disabled this facility, so it does not
>> +		 * exist from L1's perspective. Turn it into a HEAI.
>> +		 */
>> +		vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
>> +		kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);
>
> Hmm, this doesn't handle kvmppc_load_last_inst failure. Other code tends 
> to just resume guest and retry in this case. Can we do that here?
>

Not at this point. The other code does that inside
kvmppc_handle_exit_hv, which is called from kvmhv_run_single_vcpu. And
since we're changing the interrupt, I cannot load the last instruction
at kvmppc_handle_nested_exit because at that point this is still an HFU.

Unless I do it anyway at the HFU handler and put a comment explaining
the situation.

Or I could check for failure and clear vcpu->arch.emul_inst and
therefore also hr->heir if we couldn't load the instruction.

>> +
>> +		/* Don't leak the cause field */
>> +		hr->hfscr &= ~HFSCR_INTR_CAUSE;
>
> This hunk also remains -- shouldn't change HFSCR for HEA, only HFAC.

Ah of course, thanks.

>
> Thanks,
> Nick

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v5 2/2] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
  2021-07-27 14:36       ` Fabiano Rosas
@ 2021-07-29  3:52         ` Nicholas Piggin
  -1 siblings, 0 replies; 12+ messages in thread
From: Nicholas Piggin @ 2021-07-29  3:52 UTC (permalink / raw)
  To: Fabiano Rosas, kvm-ppc; +Cc: linuxppc-dev

Excerpts from Fabiano Rosas's message of July 28, 2021 12:36 am:
> Nicholas Piggin <npiggin@gmail.com> writes:
> 
>> Excerpts from Fabiano Rosas's message of July 27, 2021 6:17 am:
>>> If the nested hypervisor has no access to a facility because it has
>>> been disabled by the host, it should also not be able to see the
>>> Hypervisor Facility Unavailable that arises from one of its guests
>>> trying to access the facility.
>>> 
>>> This patch turns a HFU that happened in L2 into a Hypervisor Emulation
>>> Assistance interrupt and forwards it to L1 for handling. The ones that
>>> happened because L1 explicitly disabled the facility for L2 are still
>>> let through, along with the corresponding Cause bits in the HFSCR.
>>> 
>>> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
>>> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>  arch/powerpc/kvm/book3s_hv_nested.c | 32 +++++++++++++++++++++++------
>>>  1 file changed, 26 insertions(+), 6 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
>>> index 8215dbd4be9a..d544b092b49a 100644
>>> --- a/arch/powerpc/kvm/book3s_hv_nested.c
>>> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
>>> @@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
>>>  	hr->dawrx1 = swab64(hr->dawrx1);
>>>  }
>>>  
>>> -static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>>> +static void save_hv_return_state(struct kvm_vcpu *vcpu,
>>>  				 struct hv_guest_state *hr)
>>>  {
>>>  	struct kvmppc_vcore *vc = vcpu->arch.vcore;
>>> @@ -118,7 +118,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>>>  	hr->pidr = vcpu->arch.pid;
>>>  	hr->cfar = vcpu->arch.cfar;
>>>  	hr->ppr = vcpu->arch.ppr;
>>> -	switch (trap) {
>>> +	switch (vcpu->arch.trap) {
>>>  	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
>>>  		hr->hdar = vcpu->arch.fault_dar;
>>>  		hr->hdsisr = vcpu->arch.fault_dsisr;
>>> @@ -128,9 +128,29 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
>>>  		hr->asdr = vcpu->arch.fault_gpa;
>>>  		break;
>>>  	case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
>>> -		hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
>>> -			     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
>>> -		break;
>>> +	{
>>> +		u8 cause = vcpu->arch.hfscr >> 56;
>>
>> Can this be u64 just to help gcc?
>>
> 
> Yes.
> 
>>> +
>>> +		WARN_ON_ONCE(cause >= BITS_PER_LONG);
>>> +
>>> +		if (!(hr->hfscr & (1UL << cause))) {
>>> +			hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
>>> +				     (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
>>> +			break;
>>> +		}
>>> +
>>> +		/*
>>> +		 * We have disabled this facility, so it does not
>>> +		 * exist from L1's perspective. Turn it into a HEAI.
>>> +		 */
>>> +		vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
>>> +		kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);
>>
>> Hmm, this doesn't handle kvmppc_load_last_inst failure. Other code tends 
>> to just resume guest and retry in this case. Can we do that here?
>>
> 
> Not at this point. The other code does that inside
> kvmppc_handle_exit_hv, which is called from kvmhv_run_single_vcpu. And
> since we're changing the interrupt, I cannot load the last instruction
> at kvmppc_handle_nested_exit because at that point this is still an HFU.
> 
> Unless I do it anyway at the HFU handler and put a comment explaining
> the situation.

Yeah I think it would be better to move this logic to the nested exit 
handler.

Thanks,
Nick

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-07-29  3:53 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-26 20:17 [PATCH v5 0/2] KVM: PPC: Book3S HV: Nested guest state sanitising changes Fabiano Rosas
2021-07-26 20:17 ` [PATCH v5 1/2] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path Fabiano Rosas
2021-07-26 20:17 ` [PATCH v5 2/2] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1 Fabiano Rosas
2021-07-27  3:09   ` Nicholas Piggin
2021-07-27 14:36     ` Fabiano Rosas
2021-07-29  3:52       ` Nicholas Piggin
