* [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations
@ 2021-07-26 3:49 Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path Nicholas Piggin
` (54 more replies)
0 siblings, 55 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This reduces radix guest full entry/exit latency on POWER9 and POWER10
by almost 2x.
Nested HV guests should see smaller improvements in their L1 entry/exit,
but they also benefit because most of the L0 speedups apply to nested
entry as well. An nginx localhost throughput test in an SMP nested guest
improves by about 10% when both L0 and L1 are patched (in a direct guest
it doesn't change much because it uses XIVE for IPIs).
The series achieves this in several main ways:
- Rearrange code to optimise SPR accesses. Mainly, avoid scoreboard
stalls.
- Test SPR values to avoid mtSPRs where possible. mtSPRs are expensive.
- Reduce mftb. mftb is expensive.
- Demand fault certain facilities to avoid saving and/or restoring them
(at the cost of a fault when they are first used, which is amortised
over many entries, much like facility handling when context switching
processes). PM, TM, and EBB so far; see the sketch after this list.
- Defer sequences that are performed just in case a guest is interrupted
in the middle of a critical section to the case where the guest is
actually scheduled on a different CPU, rather than performing them on
every exit (at the cost of an extra IPI in that case). Namely the
tlbsync sequence for radix with GTSE, which is very expensive.
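As an illustration of the demand-faulting idea, here is a minimal
sketch for one facility (EBB). The helper name is hypothetical and not
code from this series, but the HFSCR interrupt cause field and the
SPRs are the ones the patches use:
	static int handle_hfac_demand_fault(struct kvm_vcpu *vcpu)
	{
		u64 cause = vcpu->arch.hfscr >> 56; /* HFSCR interrupt cause */
		if (cause == FSCR_EBB_LG) {
			/* First EBB use since entry: enable it and load its SPRs */
			vcpu->arch.hfscr |= HFSCR_EBB;
			mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
			mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
			mtspr(SPRN_BESCR, vcpu->arch.bescr);
			return RESUME_GUEST; /* retry the faulting instruction */
		}
		return RESUME_HOST; /* not a demand fault, handle as usual */
	}
A guest that never touches a facility then never pays its save/restore
cost on entry/exit.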
This series also adds the second round of patches, which improve
performance mostly by reducing locking, barriers, and atomics related
to the vcpus-per-vcore > 1 handling that the P9 path does not require.
Some of the numbers quoted in changelogs may have changed a bit with
patches being updated, reordered, etc. They give a bit of a guide, but
I might remove them from the final submission because they're too much
to maintain.
Changes since RFC:
- Rebased with Fabiano's HV sanitising patches at the front.
- Several demand faulting bug fixes mostly relating to nested guests.
- Removed facility demand-faulting from L0 nested entry/exit handler.
Demand faulting is still done in the L1, but not the L0. The reason
is to reduce complexity (although it's only a small amount of
complexity), reduce demand faulting overhead that may require several
interrupts, and allow better testing of the L1 demand faulting,
because we may run on hypervisors that do not implement L0 demand
faulting. In future, depending on performance and such, we could add
demand faulting to L0 nested entry handling and/or remove it from the
L1.
- Fixed a timebase problem with the HMI subcore patch.
Fabiano Rosas (2):
KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path
KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
Nicholas Piggin (53):
KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path
KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt
KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest
SPRs are live
powerpc/64s: Remove WORT SPR from POWER9/10
KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host
KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer
read
KVM: PPC: Book3S HV P9: Use large decrementer for HDEC
KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit
powerpc/time: add API for KVM to re-arm the host timer/decrementer
KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests
powerpc/64s: Keep AMOR SPR a constant ~0 at runtime
KVM: PPC: Book3S HV: Don't always save PMU for guest capable of
nesting
powerpc/64s: Always set PMU control registers to frozen/disabled when
not in use
powerpc/64s: Implement PMU override command line option
KVM: PPC: Book3S HV P9: Implement PMU save/restore in C
KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch
functions
KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not in use
KVM: PPC: Book3S HV P9: Factor out yield_count increment
KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write
KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs
KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save
host SPRs
KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE]
disable
KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match
kvmppc_start_thread
KVM: PPC: Book3S HV: Change dec_expires to be relative to guest
timebase
KVM: PPC: Book3S HV P9: Move TB updates
KVM: PPC: Book3S HV P9: Optimise timebase reads
KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls
KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed
KVM: PPC: Book3S HV P9: Juggle SPR switching around
KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions
KVM: PPC: Book3S HV P9: Move host OS save/restore functions to
built-in
KVM: PPC: Book3S HV P9: Move nested guest entry into its own function
KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low
level entry
KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit
KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible
KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors
that require it
KVM: PPC: Book3S HV P9: More SPR speed improvements
KVM: PPC: Book3S HV P9: Demand fault EBB facility registers
KVM: PPC: Book3S HV P9: Demand fault TM facility registers
KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host
SPRs
KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code
KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR
SPRs
KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed
KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit
KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry
KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry
KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving
KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready
KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit
KVM: PPC: Book3S HV P9: Remove most of the vcore logic
KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry
KVM: PPC: Book3S HV P9: Stop using vc->dpdes
KVM: PPC: Book3S HV P9: Remove subcore HMI handling
.../admin-guide/kernel-parameters.txt | 7 +
arch/powerpc/include/asm/asm-prototypes.h | 5 -
arch/powerpc/include/asm/kvm_asm.h | 1 +
arch/powerpc/include/asm/kvm_book3s.h | 6 +
arch/powerpc/include/asm/kvm_book3s_64.h | 6 +-
arch/powerpc/include/asm/kvm_host.h | 6 +-
arch/powerpc/include/asm/pmc.h | 7 +
arch/powerpc/include/asm/reg.h | 3 +-
arch/powerpc/include/asm/switch_to.h | 2 +
arch/powerpc/include/asm/time.h | 19 +-
arch/powerpc/kernel/cpu_setup_power.c | 12 +-
arch/powerpc/kernel/dt_cpu_ftrs.c | 8 +-
arch/powerpc/kernel/process.c | 30 +
arch/powerpc/kernel/time.c | 54 +-
arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 +
arch/powerpc/kvm/book3s_hv.c | 857 ++++++++++--------
arch/powerpc/kvm/book3s_hv.h | 36 +
arch/powerpc/kvm/book3s_hv_builtin.c | 2 +
arch/powerpc/kvm/book3s_hv_hmi.c | 7 +-
arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +-
arch/powerpc/kvm/book3s_hv_nested.c | 131 +--
arch/powerpc/kvm/book3s_hv_p9_entry.c | 793 ++++++++++++++--
arch/powerpc/kvm/book3s_hv_ras.c | 4 +
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 116 +--
arch/powerpc/kvm/book3s_hv_tm.c | 57 +-
arch/powerpc/mm/book3s64/radix_pgtable.c | 15 -
arch/powerpc/perf/core-book3s.c | 35 +
arch/powerpc/platforms/powernv/idle.c | 10 +-
28 files changed, 1508 insertions(+), 738 deletions(-)
create mode 100644 arch/powerpc/kvm/book3s_hv.h
--
2.23.0
* [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt Nicholas Piggin
` (53 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
TM fake-suspend emulation is only used by POWER9. Remove it from the old
code path.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 42 -------------------------
1 file changed, 42 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 8dd437d7a2c6..75079397c2a5 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1088,12 +1088,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
cmpwi r12, BOOK3S_INTERRUPT_H_INST_STORAGE
beq kvmppc_hisi
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- /* For softpatch interrupt, go off and do TM instruction emulation */
- cmpwi r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
- beq kvmppc_tm_emul
-#endif
-
/* See if this is a leftover HDEC interrupt */
cmpwi r12,BOOK3S_INTERRUPT_HV_DECREMENTER
bne 2f
@@ -1599,42 +1593,6 @@ maybe_reenter_guest:
blt deliver_guest_interrupt
b guest_exit_cont
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Softpatch interrupt for transactional memory emulation cases
- * on POWER9 DD2.2. This is early in the guest exit path - we
- * haven't saved registers or done a treclaim yet.
- */
-kvmppc_tm_emul:
- /* Save instruction image in HEIR */
- mfspr r3, SPRN_HEIR
- stw r3, VCPU_HEIR(r9)
-
- /*
- * The cases we want to handle here are those where the guest
- * is in real suspend mode and is trying to transition to
- * transactional mode.
- */
- lbz r0, HSTATE_FAKE_SUSPEND(r13)
- cmpwi r0, 0 /* keep exiting guest if in fake suspend */
- bne guest_exit_cont
- rldicl r3, r11, 64 - MSR_TS_S_LG, 62
- cmpwi r3, 1 /* or if not in suspend state */
- bne guest_exit_cont
-
- /* Call C code to do the emulation */
- mr r3, r9
- bl kvmhv_p9_tm_emulation_early
- nop
- ld r9, HSTATE_KVM_VCPU(r13)
- li r12, BOOK3S_INTERRUPT_HV_SOFTPATCH
- cmpwi r3, 0
- beq guest_exit_cont /* continue exiting if not handled */
- ld r10, VCPU_PC(r9)
- ld r11, VCPU_MSR(r9)
- b fast_interrupt_c_return /* go back to guest if handled */
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-
/*
* Check whether an HDSI is an HPTE not found fault or something else.
* If it is an HPTE not found fault that is due to the guest accessing
--
2.23.0
* [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-08-06 1:16 ` Michael Ellerman
2021-07-26 3:49 ` [PATCH v1 03/55] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path Nicholas Piggin
` (52 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
The softpatch interrupt sets HSRR0 to the faulting instruction + 4, so
the handler must subtract 4 to derive the faulting instruction address.
Also have it emulate and deliver HFAC interrupts correctly, which is
important for nested HV and for facility demand-faulting in future.
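For reference, this is the fixup pattern the diff below applies at
each emulation failure point (a condensed sketch of the change, not
additional code):
	/*
	 * HSRR0 points at the instruction after the one that took the
	 * softpatch interrupt, so rewind NIP before queueing an
	 * interrupt that must report the faulting address.
	 */
	vcpu->arch.regs.nip -= 4;
	kvmppc_core_queue_program(vcpu, SRR1_PROGILL);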
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/reg.h | 3 +-
arch/powerpc/kvm/book3s_hv.c | 35 ++++++++++++--------
arch/powerpc/kvm/book3s_hv_tm.c | 57 +++++++++++++++++++++------------
3 files changed, 61 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index be85cf156a1f..e9d27265253b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -415,6 +415,7 @@
#define FSCR_TAR __MASK(FSCR_TAR_LG)
#define FSCR_EBB __MASK(FSCR_EBB_LG)
#define FSCR_DSCR __MASK(FSCR_DSCR_LG)
+#define FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */
#define SPRN_HFSCR 0xbe /* HV=1 Facility Status & Control Register */
#define HFSCR_PREFIX __MASK(FSCR_PREFIX_LG)
#define HFSCR_MSGP __MASK(FSCR_MSGP_LG)
@@ -426,7 +427,7 @@
#define HFSCR_DSCR __MASK(FSCR_DSCR_LG)
#define HFSCR_VECVSX __MASK(FSCR_VECVSX_LG)
#define HFSCR_FP __MASK(FSCR_FP_LG)
-#define HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */
+#define HFSCR_INTR_CAUSE FSCR_INTR_CAUSE
#define SPRN_TAR 0x32f /* Target Address Register */
#define SPRN_LPCR 0x13E /* LPAR Control Register */
#define LPCR_VPM0 ASM_CONST(0x8000000000000000)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index ce7ff12cfc03..adac1a6431a0 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1682,6 +1682,21 @@ XXX benchmark guest exits
r = RESUME_GUEST;
}
break;
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ case BOOK3S_INTERRUPT_HV_SOFTPATCH:
+ /*
+ * This occurs for various TM-related instructions that
+ * we need to emulate on POWER9 DD2.2. We have already
+ * handled the cases where the guest was in real-suspend
+ * mode and was transitioning to transactional state.
+ */
+ r = kvmhv_p9_tm_emulation(vcpu);
+ if (r != -1)
+ break;
+ fallthrough; /* go to facility unavailable handler */
+#endif
+
/*
* This occurs if the guest (kernel or userspace), does something that
* is prohibited by HFSCR.
@@ -1700,18 +1715,6 @@ XXX benchmark guest exits
}
break;
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- case BOOK3S_INTERRUPT_HV_SOFTPATCH:
- /*
- * This occurs for various TM-related instructions that
- * we need to emulate on POWER9 DD2.2. We have already
- * handled the cases where the guest was in real-suspend
- * mode and was transitioning to transactional state.
- */
- r = kvmhv_p9_tm_emulation(vcpu);
- break;
-#endif
-
case BOOK3S_INTERRUPT_HV_RM_HARD:
r = RESUME_PASSTHROUGH;
break;
@@ -1814,9 +1817,15 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
* mode and was transitioning to transactional state.
*/
r = kvmhv_p9_tm_emulation(vcpu);
- break;
+ if (r != -1)
+ break;
+ fallthrough; /* go to facility unavailable handler */
#endif
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+ r = RESUME_HOST;
+ break;
+
case BOOK3S_INTERRUPT_HV_RM_HARD:
vcpu->arch.trap = 0;
r = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index cc90b8b82329..e4fd4a9dee08 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -74,19 +74,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
case PPC_INST_RFEBB:
if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) {
/* generate an illegal instruction interrupt */
+ vcpu->arch.regs.nip -= 4;
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
return RESUME_GUEST;
}
/* check EBB facility is available */
if (!(vcpu->arch.hfscr & HFSCR_EBB)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_EBB_LG << 56);
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
}
@@ -123,19 +127,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
/* check for PR=1 and arch 2.06 bit set in PCR */
if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) {
/* generate an illegal instruction interrupt */
+ vcpu->arch.regs.nip -= 4;
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
return RESUME_GUEST;
}
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
@@ -158,20 +166,24 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
case (PPC_INST_TRECLAIM & PO_XOP_OPCODE_MASK):
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
}
/* If no transaction active, generate TM bad thing */
if (!MSR_TM_ACTIVE(msr)) {
+ vcpu->arch.regs.nip -= 4;
kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
return RESUME_GUEST;
}
@@ -196,20 +208,24 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
/* XXX do we need to check for PR=0 here? */
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.regs.nip -= 4;
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
}
/* If transaction active or TEXASR[FS] = 0, bad thing */
if (MSR_TM_ACTIVE(msr) || !(vcpu->arch.texasr & TEXASR_FS)) {
+ vcpu->arch.regs.nip -= 4;
kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
return RESUME_GUEST;
}
@@ -224,6 +240,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
}
/* What should we do here? We didn't recognize the instruction */
+ vcpu->arch.regs.nip -= 4;
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
pr_warn_ratelimited("Unrecognized TM-related instruction %#x for emulation", instr);
--
2.23.0
* [PATCH v1 03/55] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 04/55] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1 Nicholas Piggin
` (51 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
From: Fabiano Rosas <farosas@linux.ibm.com>
As one of the arguments of the H_ENTER_NESTED hypercall, the nested
hypervisor (L1) prepares a structure containing the values of various
hypervisor-privileged registers with which it wants the nested guest
(L2) to run. Since the nested HV runs in supervisor mode it needs the
host to write to these registers.
To stop a nested HV manipulating this mechanism and using a nested
guest as a proxy to access a facility that has been made unavailable
to it, we have a routine that sanitises the values of the HV registers
before copying them into the nested guest's vcpu struct.
However, when coming out of the guest the values are copied back into
L1 memory as they were, which means that any sanitisation we did
during guest entry would be exposed to L1 after H_ENTER_NESTED
returns.
This patch moves the sanitisation so that it takes effect on the
vcpu->arch registers directly before entering and after exiting the
guest, leaving the structure that is copied back into L1 unchanged
(except when we really want L1 to see the value, e.g. the Cause bits
of HFSCR).
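The resulting entry/exit flow, in outline (names taken from the diff
below; this is a summary sketch, not additional code):
	lpcr = l2_hv.lpcr;
	/* sanitised values go into vcpu->arch; l2_hv is left untouched */
	load_l2_hv_regs(vcpu, &l2_hv, &saved_l1_hv, &lpcr);
	r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
	/* the l2_hv copied back to L1 contains only what L1 may see */
	save_hv_return_state(vcpu, &l2_hv);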
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
---
arch/powerpc/kvm/book3s_hv_nested.c | 100 +++++++++++++++-------------
1 file changed, 52 insertions(+), 48 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 898f942eb198..9bb0788d312c 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -104,8 +104,17 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ /*
+ * When loading the hypervisor-privileged registers to run L2,
+ * we might have used bits from L1 state to restrict what the
+ * L2 state is allowed to be. Since L1 is not allowed to read
+ * the HV registers, do not include these modifications in the
+ * return state.
+ */
+ hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
+ (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
+
hr->dpdes = vc->dpdes;
- hr->hfscr = vcpu->arch.hfscr;
hr->purr = vcpu->arch.purr;
hr->spurr = vcpu->arch.spurr;
hr->ic = vcpu->arch.ic;
@@ -134,49 +143,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
}
}
-/*
- * This can result in some L0 HV register state being leaked to an L1
- * hypervisor when the hv_guest_state is copied back to the guest after
- * being modified here.
- *
- * There is no known problem with such a leak, and in many cases these
- * register settings could be derived by the guest by observing behaviour
- * and timing, interrupts, etc., but it is an issue to consider.
- */
-static void sanitise_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
-{
- struct kvmppc_vcore *vc = vcpu->arch.vcore;
- u64 mask;
-
- /*
- * Don't let L1 change LPCR bits for the L2 except these:
- */
- mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
- LPCR_LPES | LPCR_MER;
-
- /*
- * Additional filtering is required depending on hardware
- * and configuration.
- */
- hr->lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
- (vc->lpcr & ~mask) | (hr->lpcr & mask));
-
- /*
- * Don't let L1 enable features for L2 which we've disabled for L1,
- * but preserve the interrupt cause field.
- */
- hr->hfscr &= (HFSCR_INTR_CAUSE | vcpu->arch.hfscr);
-
- /* Don't let data address watchpoint match in hypervisor state */
- hr->dawrx0 &= ~DAWRX_HYP;
- hr->dawrx1 &= ~DAWRX_HYP;
-
- /* Don't let completed instruction address breakpt match in HV state */
- if ((hr->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
- hr->ciabr &= ~CIABR_PRIV;
-}
-
-static void restore_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *hr)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -288,6 +255,43 @@ static int kvmhv_write_guest_state_and_regs(struct kvm_vcpu *vcpu,
sizeof(struct pt_regs));
}
+static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
+ const struct hv_guest_state *l2_hv,
+ const struct hv_guest_state *l1_hv, u64 *lpcr)
+{
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ u64 mask;
+
+ restore_hv_regs(vcpu, l2_hv);
+
+ /*
+ * Don't let L1 change LPCR bits for the L2 except these:
+ */
+ mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+ LPCR_LPES | LPCR_MER;
+
+ /*
+ * Additional filtering is required depending on hardware
+ * and configuration.
+ */
+ *lpcr = kvmppc_filter_lpcr_hv(vcpu->kvm,
+ (vc->lpcr & ~mask) | (*lpcr & mask));
+
+ /*
+ * Don't let L1 enable features for L2 which we've disabled for L1,
+ * but preserve the interrupt cause field.
+ */
+ vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | l1_hv->hfscr);
+
+ /* Don't let data address watchpoint match in hypervisor state */
+ vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP;
+ vcpu->arch.dawrx1 = l2_hv->dawrx1 & ~DAWRX_HYP;
+
+ /* Don't let completed instruction address breakpt match in HV state */
+ if ((l2_hv->ciabr & CIABR_PRIV) == CIABR_PRIV_HYPER)
+ vcpu->arch.ciabr = l2_hv->ciabr & ~CIABR_PRIV;
+}
+
long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
{
long int err, r;
@@ -296,7 +300,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
struct hv_guest_state l2_hv = {0}, saved_l1_hv;
struct kvmppc_vcore *vc = vcpu->arch.vcore;
u64 hv_ptr, regs_ptr;
- u64 hdec_exp;
+ u64 hdec_exp, lpcr;
s64 delta_purr, delta_spurr, delta_ic, delta_vtb;
if (vcpu->kvm->arch.l1_ptcr == 0)
@@ -369,8 +373,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
/* Guest must always run with ME enabled, HV disabled. */
vcpu->arch.shregs.msr = (vcpu->arch.regs.msr | MSR_ME) & ~MSR_HV;
- sanitise_hv_regs(vcpu, &l2_hv);
- restore_hv_regs(vcpu, &l2_hv);
+ lpcr = l2_hv.lpcr;
+ load_l2_hv_regs(vcpu, &l2_hv, &saved_l1_hv, &lpcr);
vcpu->arch.ret = RESUME_GUEST;
vcpu->arch.trap = 0;
@@ -380,7 +384,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
r = RESUME_HOST;
break;
}
- r = kvmhv_run_single_vcpu(vcpu, hdec_exp, l2_hv.lpcr);
+ r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
} while (is_kvmppc_resume_guest(r));
/* save L2 state for return */
--
2.23.0
* [PATCH v1 04/55] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (2 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 03/55] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 05/55] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live Nicholas Piggin
` (50 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
From: Fabiano Rosas <farosas@linux.ibm.com>
If the nested hypervisor has no access to a facility because it has
been disabled by the host, it should also not be able to see the
Hypervisor Facility Unavailable interrupt that arises from one of its
guests trying to access the facility.
This patch turns a HFU that happened in L2 into a Hypervisor Emulation
Assistance interrupt and forwards it to L1 for handling. The ones that
happened because L1 explicitly disabled the facility for L2 are still
let through, along with the corresponding Cause bits in the HFSCR.
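In outline, the check the diff below adds to save_hv_return_state()
(a condensed sketch; the full version also loads the instruction
image for HEIR):
	u8 cause = vcpu->arch.hfscr >> 56;
	if (hr->hfscr & (1UL << cause)) {
		/*
		 * L1 did not disable this facility for L2, so the HFU
		 * must come from the L0 restriction on L1: report a
		 * HEAI instead and hide the cause field from L1.
		 */
		vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
		hr->hfscr &= ~HFSCR_INTR_CAUSE;
	}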
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
---
arch/powerpc/kvm/book3s_hv_nested.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 9bb0788d312c..983628ed4376 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -99,7 +99,7 @@ static void byteswap_hv_regs(struct hv_guest_state *hr)
hr->dawrx1 = swab64(hr->dawrx1);
}
-static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
+static void save_hv_return_state(struct kvm_vcpu *vcpu,
struct hv_guest_state *hr)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -128,7 +128,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
hr->pidr = vcpu->arch.pid;
hr->cfar = vcpu->arch.cfar;
hr->ppr = vcpu->arch.ppr;
- switch (trap) {
+ switch (vcpu->arch.trap) {
case BOOK3S_INTERRUPT_H_DATA_STORAGE:
hr->hdar = vcpu->arch.fault_dar;
hr->hdsisr = vcpu->arch.fault_dsisr;
@@ -137,6 +137,27 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu, int trap,
case BOOK3S_INTERRUPT_H_INST_STORAGE:
hr->asdr = vcpu->arch.fault_gpa;
break;
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+ {
+ u8 cause = vcpu->arch.hfscr >> 56;
+
+ WARN_ON_ONCE(cause >= BITS_PER_LONG);
+
+ if (!(hr->hfscr & (1UL << cause)))
+ break;
+
+ /*
+ * We have disabled this facility, so it does not
+ * exist from L1's perspective. Turn it into a HEAI.
+ */
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
+ kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);
+
+ /* Don't leak the cause field */
+ hr->hfscr &= ~HFSCR_INTR_CAUSE;
+
+ fallthrough;
+ }
case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
hr->heir = vcpu->arch.emul_inst;
break;
@@ -394,7 +415,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
delta_spurr = vcpu->arch.spurr - l2_hv.spurr;
delta_ic = vcpu->arch.ic - l2_hv.ic;
delta_vtb = vc->vtb - l2_hv.vtb;
- save_hv_return_state(vcpu, vcpu->arch.trap, &l2_hv);
+ save_hv_return_state(vcpu, &l2_hv);
/* restore L1 state */
vcpu->arch.nested = NULL;
--
2.23.0
* [PATCH v1 05/55] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (3 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 04/55] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1 Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 06/55] powerpc/64s: Remove WORT SPR from POWER9/10 Nicholas Piggin
` (49 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
After the L1 saves its PMU SPRs but before loading the L2's PMU SPRs,
switch the pmcregs_in_use field in the L1 lppaca to the value advertised
by the L2 in its VPA. On the way out of the L2, set it back after saving
the L2 PMU registers (if they were in-use).
This transfers the PMU liveness indication between the L1 and L2 at the
points where the registers are not live.
This fixes the nested HV bug for which a workaround was added to the L0
HV by commit 63279eeb7f93a ("KVM: PPC: Book3S HV: Always save guest pmu
for guest capable of nesting"), which explains the problem in detail.
That workaround is no longer required for guests that include this bug
fix.
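The entry-side handoff, in outline (see the diff below for the full
sequence including the barriers):
	if (kvmhv_on_pseries()) {
		struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
		/* publish the L2's claim while its PMU SPRs are loaded */
		get_lppaca()->pmcregs_in_use = lp ? lp->pmcregs_in_use : 1;
	}
On the way out, pmcregs_in_use is set back to the L1's own value via
ppc_get_pmu_inuse().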
Fixes: 360cae313702 ("KVM: PPC: Book3S HV: Nested guest entry via hypercall")
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/pmc.h | 7 +++++++
arch/powerpc/kvm/book3s_hv.c | 20 ++++++++++++++++++++
2 files changed, 27 insertions(+)
diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
index c6bbe9778d3c..3c09109e708e 100644
--- a/arch/powerpc/include/asm/pmc.h
+++ b/arch/powerpc/include/asm/pmc.h
@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
#endif
}
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+static inline int ppc_get_pmu_inuse(void)
+{
+ return get_paca()->pmcregs_in_use;
+}
+#endif
+
extern void power4_enable_pmcs(void);
#else /* CONFIG_PPC64 */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index adac1a6431a0..c743020837e7 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -59,6 +59,7 @@
#include <asm/kvm_book3s.h>
#include <asm/mmu_context.h>
#include <asm/lppaca.h>
+#include <asm/pmc.h>
#include <asm/processor.h>
#include <asm/cputhreads.h>
#include <asm/page.h>
@@ -3864,6 +3865,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+#ifdef CONFIG_PPC_PSERIES
+ if (kvmhv_on_pseries()) {
+ barrier();
+ if (vcpu->arch.vpa.pinned_addr) {
+ struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
+ get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
+ } else {
+ get_lppaca()->pmcregs_in_use = 1;
+ }
+ barrier();
+ }
+#endif
kvmhv_load_guest_pmu(vcpu);
msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
@@ -3998,6 +4011,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
save_pmu |= nesting_enabled(vcpu->kvm);
kvmhv_save_guest_pmu(vcpu, save_pmu);
+#ifdef CONFIG_PPC_PSERIES
+ if (kvmhv_on_pseries()) {
+ barrier();
+ get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
+ barrier();
+ }
+#endif
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
--
2.23.0
* [PATCH v1 06/55] powerpc/64s: Remove WORT SPR from POWER9/10
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (4 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 05/55] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 07/55] KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host Nicholas Piggin
` (48 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
This register is not architected and is not implemented in POWER9 or
POWER10; it just reads back zeroes for compatibility.
-78 cycles (9255) POWER9 virt-mode NULL hcall
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 3 ---
arch/powerpc/platforms/powernv/idle.c | 2 --
2 files changed, 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index c743020837e7..905bf29940ea 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3740,7 +3740,6 @@ static void load_spr_state(struct kvm_vcpu *vcpu)
mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
mtspr(SPRN_BESCR, vcpu->arch.bescr);
- mtspr(SPRN_WORT, vcpu->arch.wort);
mtspr(SPRN_TIDR, vcpu->arch.tid);
mtspr(SPRN_AMR, vcpu->arch.amr);
mtspr(SPRN_UAMOR, vcpu->arch.uamor);
@@ -3767,7 +3766,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
vcpu->arch.bescr = mfspr(SPRN_BESCR);
- vcpu->arch.wort = mfspr(SPRN_WORT);
vcpu->arch.tid = mfspr(SPRN_TIDR);
vcpu->arch.amr = mfspr(SPRN_AMR);
vcpu->arch.uamor = mfspr(SPRN_UAMOR);
@@ -3799,7 +3797,6 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
mtspr(SPRN_PSPB, 0);
- mtspr(SPRN_WORT, 0);
mtspr(SPRN_UAMOR, 0);
mtspr(SPRN_DSCR, host_os_sprs->dscr);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 1e908536890b..df19e2ff9d3c 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -667,7 +667,6 @@ static unsigned long power9_idle_stop(unsigned long psscr)
sprs.purr = mfspr(SPRN_PURR);
sprs.spurr = mfspr(SPRN_SPURR);
sprs.dscr = mfspr(SPRN_DSCR);
- sprs.wort = mfspr(SPRN_WORT);
sprs.ciabr = mfspr(SPRN_CIABR);
sprs.mmcra = mfspr(SPRN_MMCRA);
@@ -785,7 +784,6 @@ static unsigned long power9_idle_stop(unsigned long psscr)
mtspr(SPRN_PURR, sprs.purr);
mtspr(SPRN_SPURR, sprs.spurr);
mtspr(SPRN_DSCR, sprs.dscr);
- mtspr(SPRN_WORT, sprs.wort);
mtspr(SPRN_CIABR, sprs.ciabr);
mtspr(SPRN_MMCRA, sprs.mmcra);
--
2.23.0
* [PATCH v1 07/55] KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (5 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 06/55] powerpc/64s: Remove WORT SPR from POWER9/10 Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 08/55] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Nicholas Piggin
` (47 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: Alexey Kardashevskiy, linuxppc-dev, Nicholas Piggin
The host Linux timer code arms the decrementer with the value
'decrementers_next_tb - current_tb' using set_dec(), which stores
val - 1 on Book3S-64; that is not quite the same as what KVM does to
re-arm the host decrementer when exiting the guest.
This shouldn't be a significant change, but it makes the logic match
and avoids this small extra difference being brought into the next
patch.
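For reference, set_dec() on Book3S-64 is roughly the following
(paraphrased from arch/powerpc/include/asm/time.h, not part of this
patch):
	static inline void set_dec(u64 val)
	{
		mtspr(SPRN_DEC, val - 1);
	}
so using set_dec() here keeps the KVM exit path consistent with how
the timer code itself arms the decrementer.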
Suggested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 905bf29940ea..7020cbbf3aa1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4019,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
- mtspr(SPRN_DEC, local_paca->kvm_hstate.dec_expires - mftb());
+ set_dec(local_paca->kvm_hstate.dec_expires - mftb());
/* We may have raced with new irq work */
if (test_irq_work_pending())
set_dec(1);
--
2.23.0
* [PATCH v1 08/55] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (6 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 07/55] KVM: PPC: Book3S HV P9: Use set_dec to set decrementer to host Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 09/55] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Nicholas Piggin
` (46 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
There is no need to save away the host DEC value: it is derived from
the host timer subsystem, which maintains the next timer time, so it
can be restored from there.
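The exit path can then re-derive the host decrementer from the timer
subsystem (usage sketch of the helper added below):
	next_timer = timer_get_next_tb(); /* per-cpu decrementers_next_tb */
	set_dec(next_timer - mftb());
instead of stashing a DEC snapshot in kvm_hstate on entry.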
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/time.h | 5 +++++
arch/powerpc/kernel/time.c | 1 +
arch/powerpc/kvm/book3s_hv.c | 14 +++++++-------
3 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 8c2c3dd4ddba..fd09b4797fd7 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -111,6 +111,11 @@ static inline unsigned long test_irq_work_pending(void)
DECLARE_PER_CPU(u64, decrementers_next_tb);
+static inline u64 timer_get_next_tb(void)
+{
+ return __this_cpu_read(decrementers_next_tb);
+}
+
/* Convert timebase ticks to nanoseconds */
unsigned long long tb_to_ns(unsigned long long tb_ticks);
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index e45ce427bffb..01df89918aa4 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -108,6 +108,7 @@ struct clock_event_device decrementer_clockevent = {
EXPORT_SYMBOL(decrementer_clockevent);
DEFINE_PER_CPU(u64, decrementers_next_tb);
+EXPORT_SYMBOL_GPL(decrementers_next_tb);
static DEFINE_PER_CPU(struct clock_event_device, decrementers);
#define XSEC_PER_SEC (1024*1024)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7020cbbf3aa1..82976f734bd1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3829,18 +3829,17 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
struct kvmppc_vcore *vc = vcpu->arch.vcore;
struct p9_host_os_sprs host_os_sprs;
s64 dec;
- u64 tb;
+ u64 tb, next_timer;
int trap, save_pmu;
WARN_ON_ONCE(vcpu->arch.ceded);
- dec = mfspr(SPRN_DEC);
tb = mftb();
- if (dec < 0)
+ next_timer = timer_get_next_tb();
+ if (tb >= next_timer)
return BOOK3S_INTERRUPT_HV_DECREMENTER;
- local_paca->kvm_hstate.dec_expires = dec + tb;
- if (local_paca->kvm_hstate.dec_expires < time_limit)
- time_limit = local_paca->kvm_hstate.dec_expires;
+ if (next_timer < time_limit)
+ time_limit = next_timer;
save_p9_host_os_sprs(&host_os_sprs);
@@ -4019,7 +4018,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
- set_dec(local_paca->kvm_hstate.dec_expires - mftb());
+ next_timer = timer_get_next_tb();
+ set_dec(next_timer - mftb());
/* We may have raced with new irq work */
if (test_irq_work_pending())
set_dec(1);
--
2.23.0
* [PATCH v1 09/55] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (7 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 08/55] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 10/55] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Nicholas Piggin
` (45 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: Alexey Kardashevskiy, linuxppc-dev, Nicholas Piggin
On processors that don't suppress the HDEC exceptions when LPCR[HDICE]=0,
this could help reduce needless guest exits due to leftover exceptions on
entering the guest.
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/time.h | 2 ++
arch/powerpc/kernel/time.c | 1 +
arch/powerpc/kvm/book3s_hv_p9_entry.c | 3 ++-
3 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index fd09b4797fd7..69b6be617772 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -18,6 +18,8 @@
#include <asm/vdso/timebase.h>
/* time.c */
+extern u64 decrementer_max;
+
extern unsigned long tb_ticks_per_jiffy;
extern unsigned long tb_ticks_per_usec;
extern unsigned long tb_ticks_per_sec;
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 01df89918aa4..72d872b49167 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -89,6 +89,7 @@ static struct clocksource clocksource_timebase = {
#define DECREMENTER_DEFAULT_MAX 0x7FFFFFFF
u64 decrementer_max = DECREMENTER_DEFAULT_MAX;
+EXPORT_SYMBOL_GPL(decrementer_max); /* for KVM HDEC */
static int decrementer_set_next_event(unsigned long evt,
struct clock_event_device *dev);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 961b3d70483c..0ff9ddb5e7ca 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -504,7 +504,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->tb_offset_applied = 0;
}
- mtspr(SPRN_HDEC, 0x7fffffff);
+ /* HDEC must be at least as large as DEC, so decrementer_max fits */
+ mtspr(SPRN_HDEC, decrementer_max);
save_clear_guest_mmu(kvm, vcpu);
switch_mmu_to_host(kvm, host_pidr);
--
2.23.0
* [PATCH v1 10/55] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (8 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 09/55] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer Nicholas Piggin
` (44 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
mftb is serialising (dispatch next-to-complete), so it is heavyweight
for an mfspr. Avoid reading it multiple times in the entry and exit
paths. A delay of a small number of cycles to timers is tolerable.
-118 cycles (9137) POWER9 virt-mode NULL hcall
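The pattern applied throughout is to read the timebase once and reuse
the value (condensed from the diff below):
	tb = mftb();
	hdec = time_limit - tb;
	...
	mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);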
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 4 ++--
arch/powerpc/kvm/book3s_hv_p9_entry.c | 5 +++--
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 82976f734bd1..6e6cfb10e9bb 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3896,7 +3896,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
*
* XXX: Another day's problem.
*/
- mtspr(SPRN_DEC, vcpu->arch.dec_expires - mftb());
+ mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
if (kvmhv_on_pseries()) {
/*
@@ -4019,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->in_guest = 0;
next_timer = timer_get_next_tb();
- set_dec(next_timer - mftb());
+ set_dec(next_timer - tb);
/* We may have raced with new irq work */
if (test_irq_work_pending())
set_dec(1);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 0ff9ddb5e7ca..bd8cf0a65ce8 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -203,7 +203,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
unsigned long host_dawr1;
unsigned long host_dawrx1;
- hdec = time_limit - mftb();
+ tb = mftb();
+ hdec = time_limit - tb;
if (hdec < 0)
return BOOK3S_INTERRUPT_HV_DECREMENTER;
@@ -215,7 +216,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.ceded = 0;
if (vc->tb_offset) {
- u64 new_tb = mftb() + vc->tb_offset;
+ u64 new_tb = tb + vc->tb_offset;
mtspr(SPRN_TBU40, new_tb);
tb = mftb();
if ((tb & 0xffffff) < (new_tb & 0xffffff))
--
2.23.0
* [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (9 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 10/55] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-08-05 7:22 ` Christophe Leroy
2021-07-26 3:49 ` [PATCH v1 12/55] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests Nicholas Piggin
` (43 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Rather than have KVM look up the host timer and fiddle with the
irq-work internal details, have the powerpc/time.c code provide a
function for KVM to re-arm the Linux timer code when exiting a
guest.
This implementation improves on the existing code by marking a
decrementer interrupt as soft-pending if a timer has expired, rather
than setting DEC to a negative value, which tended to cause host
timers to take two interrupts (first the hdec to exit the guest, then
the immediate dec).
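The core of the new helper, condensed from the diff below:
	if (now >= *next_tb)
		local_paca->irq_happened |= PACA_IRQ_DEC; /* soft-pending */
	else if (*next_tb - now <= decrementer_max)
		set_dec_or_work(*next_tb - now);
so an already-expired timer is replayed when interrupts are next
enabled, rather than immediately re-firing from a negative DEC.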
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/time.h | 16 +++-------
arch/powerpc/kernel/time.c | 52 +++++++++++++++++++++++++++------
arch/powerpc/kvm/book3s_hv.c | 7 ++---
3 files changed, 49 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 69b6be617772..924b2157882f 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low,
extern void secondary_cpu_time_init(void);
extern void __init time_init(void);
-#ifdef CONFIG_PPC64
-static inline unsigned long test_irq_work_pending(void)
-{
- unsigned long x;
-
- asm volatile("lbz %0,%1(13)"
- : "=r" (x)
- : "i" (offsetof(struct paca_struct, irq_work_pending)));
- return x;
-}
-#endif
-
DECLARE_PER_CPU(u64, decrementers_next_tb);
static inline u64 timer_get_next_tb(void)
@@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void)
return __this_cpu_read(decrementers_next_tb);
}
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+void timer_rearm_host_dec(u64 now);
+#endif
+
/* Convert timebase ticks to nanoseconds */
unsigned long long tb_to_ns(unsigned long long tb_ticks);
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 72d872b49167..016828b7401b 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -499,6 +499,16 @@ EXPORT_SYMBOL(profile_pc);
* 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
*/
#ifdef CONFIG_PPC64
+static inline unsigned long test_irq_work_pending(void)
+{
+ unsigned long x;
+
+ asm volatile("lbz %0,%1(13)"
+ : "=r" (x)
+ : "i" (offsetof(struct paca_struct, irq_work_pending)));
+ return x;
+}
+
static inline void set_irq_work_pending_flag(void)
{
asm volatile("stb %0,%1(13)" : :
@@ -542,13 +552,44 @@ void arch_irq_work_raise(void)
preempt_enable();
}
+static void set_dec_or_work(u64 val)
+{
+ set_dec(val);
+ /* We may have raced with new irq work */
+ if (unlikely(test_irq_work_pending()))
+ set_dec(1);
+}
+
#else /* CONFIG_IRQ_WORK */
#define test_irq_work_pending() 0
#define clear_irq_work_pending()
+static void set_dec_or_work(u64 val)
+{
+ set_dec(val);
+}
#endif /* CONFIG_IRQ_WORK */
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+void timer_rearm_host_dec(u64 now)
+{
+ u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+
+ WARN_ON_ONCE(!arch_irqs_disabled());
+ WARN_ON_ONCE(mfmsr() & MSR_EE);
+
+ if (now >= *next_tb) {
+ local_paca->irq_happened |= PACA_IRQ_DEC;
+ } else {
+ now = *next_tb - now;
+ if (now <= decrementer_max)
+ set_dec_or_work(now);
+ }
+}
+EXPORT_SYMBOL_GPL(timer_rearm_host_dec);
+#endif
+
/*
* timer_interrupt - gets called when the decrementer overflows,
* with interrupts disabled.
@@ -609,10 +650,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
} else {
now = *next_tb - now;
if (now <= decrementer_max)
- set_dec(now);
- /* We may have raced with new irq work */
- if (test_irq_work_pending())
- set_dec(1);
+ set_dec_or_work(now);
__this_cpu_inc(irq_stat.timer_irqs_others);
}
@@ -854,11 +892,7 @@ static int decrementer_set_next_event(unsigned long evt,
struct clock_event_device *dev)
{
__this_cpu_write(decrementers_next_tb, get_tb() + evt);
- set_dec(evt);
-
- /* We may have raced with new irq work */
- if (test_irq_work_pending())
- set_dec(1);
+ set_dec_or_work(evt);
return 0;
}
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6e6cfb10e9bb..0cef578930f9 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4018,11 +4018,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
- next_timer = timer_get_next_tb();
- set_dec(next_timer - tb);
- /* We may have raced with new irq work */
- if (test_irq_work_pending())
- set_dec(1);
+ timer_rearm_host_dec(tb);
+
mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
kvmhv_load_host_pmu();
--
2.23.0
* [PATCH v1 12/55] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (10 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 13/55] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime Nicholas Piggin
` (42 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
HV interrupts may be taken with the MMU enabled when radix guests are
running. Enable LPCR[HAIL] on ISA v3.1 processors for radix guests.
Make this depend on the host LPCR[HAIL] being enabled. Currently that is
always enabled, but having this test means any issue that might require
LPCR[HAIL] to be disabled in the host will not have to be duplicated in
KVM.
-1380 cycles on P10 NULL hcall entry+exit
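The enable condition, in outline (condensed from the diff below):
	if (cpu_has_feature(CPU_FTR_ARCH_31) &&
	    cpu_has_feature(CPU_FTR_HVMODE) &&
	    (kvm->arch.host_lpcr & LPCR_HAIL))
		lpcr |= LPCR_HAIL;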
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 29 +++++++++++++++++++++++++----
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0cef578930f9..e7f8cc04944b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5004,6 +5004,8 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
*/
int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
{
+ unsigned long lpcr, lpcr_mask;
+
if (nesting_enabled(kvm))
kvmhv_release_all_nested(kvm);
kvmppc_rmap_reset(kvm);
@@ -5013,8 +5015,13 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
kvm->arch.radix = 0;
spin_unlock(&kvm->mmu_lock);
kvmppc_free_radix(kvm);
- kvmppc_update_lpcr(kvm, LPCR_VPM1,
- LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+
+ lpcr = LPCR_VPM1;
+ lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+ if (cpu_has_feature(CPU_FTR_ARCH_31))
+ lpcr_mask |= LPCR_HAIL;
+ kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
+
return 0;
}
@@ -5024,6 +5031,7 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
*/
int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
{
+ unsigned long lpcr, lpcr_mask;
int err;
err = kvmppc_init_vm_radix(kvm);
@@ -5035,8 +5043,17 @@ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
kvm->arch.radix = 1;
spin_unlock(&kvm->mmu_lock);
kvmppc_free_hpt(&kvm->arch.hpt);
- kvmppc_update_lpcr(kvm, LPCR_UPRT | LPCR_GTSE | LPCR_HR,
- LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR);
+
+ lpcr = LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+ lpcr_mask = LPCR_VPM1 | LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ lpcr_mask |= LPCR_HAIL;
+ if (cpu_has_feature(CPU_FTR_HVMODE) &&
+ (kvm->arch.host_lpcr & LPCR_HAIL))
+ lpcr |= LPCR_HAIL;
+ }
+ kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
+
return 0;
}
@@ -5200,6 +5217,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
kvm->arch.mmu_ready = 1;
lpcr &= ~LPCR_VPM1;
lpcr |= LPCR_UPRT | LPCR_GTSE | LPCR_HR;
+ if (cpu_has_feature(CPU_FTR_HVMODE) &&
+ cpu_has_feature(CPU_FTR_ARCH_31) &&
+ (kvm->arch.host_lpcr & LPCR_HAIL))
+ lpcr |= LPCR_HAIL;
ret = kvmppc_init_vm_radix(kvm);
if (ret) {
kvmppc_free_lpid(kvm->arch.lpid);
--
2.23.0
* [PATCH v1 13/55] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (11 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 12/55] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Nicholas Piggin
` (41 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
This register controls supervisor SPR modifications, and as such is only
relevant for KVM. KVM always sets AMOR to ~0 on guest entry, and never
restores it coming back out to the host, so it can be kept constant and
avoid the mtSPR in KVM guest entry.
-21 cycles (9116) POWER9 virt-mode NULL hcall
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/cpu_setup_power.c | 8 ++++++++
arch/powerpc/kernel/dt_cpu_ftrs.c | 2 ++
arch/powerpc/kvm/book3s_hv_p9_entry.c | 2 --
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 --
arch/powerpc/mm/book3s64/radix_pgtable.c | 15 ---------------
arch/powerpc/platforms/powernv/idle.c | 8 +++-----
6 files changed, 13 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
index 3cca88ee96d7..a29dc8326622 100644
--- a/arch/powerpc/kernel/cpu_setup_power.c
+++ b/arch/powerpc/kernel/cpu_setup_power.c
@@ -137,6 +137,7 @@ void __setup_cpu_power7(unsigned long offset, struct cpu_spec *t)
return;
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
}
@@ -150,6 +151,7 @@ void __restore_cpu_power7(void)
return;
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR), LPCR_LPES1 >> LPCR_LPES_SH);
}
@@ -164,6 +166,7 @@ void __setup_cpu_power8(unsigned long offset, struct cpu_spec *t)
return;
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
init_HFSCR();
@@ -184,6 +187,7 @@ void __restore_cpu_power8(void)
return;
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA206(mfspr(SPRN_LPCR) | LPCR_PECEDH, 0); /* LPES = 0 */
init_HFSCR();
@@ -202,6 +206,7 @@ void __setup_cpu_power9(unsigned long offset, struct cpu_spec *t)
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -223,6 +228,7 @@ void __restore_cpu_power9(void)
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -242,6 +248,7 @@ void __setup_cpu_power10(unsigned long offset, struct cpu_spec *t)
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
@@ -264,6 +271,7 @@ void __restore_cpu_power10(void)
mtspr(SPRN_PSSCR, 0);
mtspr(SPRN_LPID, 0);
mtspr(SPRN_PID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_PCR, PCR_MASK);
init_LPCR_ISA300((mfspr(SPRN_LPCR) | LPCR_PECEDH | LPCR_PECE_HVEE |\
LPCR_HVICE | LPCR_HEIC) & ~(LPCR_UPRT | LPCR_HR), 0);
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index af95f337e54b..38ea20fadc4a 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -80,6 +80,7 @@ static void __restore_cpu_cpufeatures(void)
mtspr(SPRN_LPCR, system_registers.lpcr);
if (hv_mode) {
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_HFSCR, system_registers.hfscr);
mtspr(SPRN_PCR, system_registers.pcr);
}
@@ -216,6 +217,7 @@ static int __init feat_enable_hv(struct dt_cpu_feature *f)
}
mtspr(SPRN_LPID, 0);
+ mtspr(SPRN_AMOR, ~0);
lpcr = mfspr(SPRN_LPCR);
lpcr &= ~LPCR_LPES0; /* HV external interrupts */
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index bd8cf0a65ce8..a7f63082b4e3 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -286,8 +286,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2);
mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3);
- mtspr(SPRN_AMOR, ~0UL);
-
local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9;
/*
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 75079397c2a5..9021052f1579 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -772,10 +772,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
/* Restore AMR and UAMOR, set AMOR to all 1s */
ld r5,VCPU_AMR(r4)
ld r6,VCPU_UAMOR(r4)
- li r7,-1
mtspr SPRN_AMR,r5
mtspr SPRN_UAMOR,r6
- mtspr SPRN_AMOR,r7
/* Restore state of CTRL run bit; assume 1 on entry */
lwz r5,VCPU_CTRL(r4)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index e50ddf129c15..5aebd70ef66a 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -572,18 +572,6 @@ void __init radix__early_init_devtree(void)
return;
}
-static void radix_init_amor(void)
-{
- /*
- * In HV mode, we init AMOR (Authority Mask Override Register) so that
- * the hypervisor and guest can setup IAMR (Instruction Authority Mask
- * Register), enable key 0 and set it to 1.
- *
- * AMOR = 0b1100 .... 0000 (Mask for key 0 is 11)
- */
- mtspr(SPRN_AMOR, (3ul << 62));
-}
-
void __init radix__early_init_mmu(void)
{
unsigned long lpcr;
@@ -644,7 +632,6 @@ void __init radix__early_init_mmu(void)
lpcr = mfspr(SPRN_LPCR);
mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
radix_init_partition_table();
- radix_init_amor();
} else {
radix_init_pseries();
}
@@ -668,8 +655,6 @@ void radix__early_init_mmu_secondary(void)
set_ptcr_when_no_uv(__pa(partition_tb) |
(PATB_SIZE_SHIFT - 12));
-
- radix_init_amor();
}
radix__switch_mmu_context(NULL, &init_mm);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index df19e2ff9d3c..721ac4f7e2d1 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -306,8 +306,8 @@ struct p7_sprs {
/* per thread SPRs that get lost in shallow states */
u64 amr;
u64 iamr;
- u64 amor;
u64 uamor;
+ /* amor is restored to constant ~0 */
};
static unsigned long power7_idle_insn(unsigned long type)
@@ -378,7 +378,6 @@ static unsigned long power7_idle_insn(unsigned long type)
if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
sprs.amr = mfspr(SPRN_AMR);
sprs.iamr = mfspr(SPRN_IAMR);
- sprs.amor = mfspr(SPRN_AMOR);
sprs.uamor = mfspr(SPRN_UAMOR);
}
@@ -397,7 +396,7 @@ static unsigned long power7_idle_insn(unsigned long type)
*/
mtspr(SPRN_AMR, sprs.amr);
mtspr(SPRN_IAMR, sprs.iamr);
- mtspr(SPRN_AMOR, sprs.amor);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_UAMOR, sprs.uamor);
}
}
@@ -687,7 +686,6 @@ static unsigned long power9_idle_stop(unsigned long psscr)
sprs.amr = mfspr(SPRN_AMR);
sprs.iamr = mfspr(SPRN_IAMR);
- sprs.amor = mfspr(SPRN_AMOR);
sprs.uamor = mfspr(SPRN_UAMOR);
srr1 = isa300_idle_stop_mayloss(psscr); /* go idle */
@@ -708,7 +706,7 @@ static unsigned long power9_idle_stop(unsigned long psscr)
*/
mtspr(SPRN_AMR, sprs.amr);
mtspr(SPRN_IAMR, sprs.iamr);
- mtspr(SPRN_AMOR, sprs.amor);
+ mtspr(SPRN_AMOR, ~0);
mtspr(SPRN_UAMOR, sprs.uamor);
/*
--
2.23.0
* [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (12 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 13/55] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-08-06 7:34 ` Michael Ellerman
2021-07-26 3:49 ` [PATCH v1 15/55] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Nicholas Piggin
` (40 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Revert the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S
HV: Always save guest pmu for guest capable of nesting").
Nested capable guests running with the earlier commit ("KVM: PPC: Book3S
HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU
in-use status of their guests, which means the parent does not need to
unconditionally save the PMU for nested capable guests.
This will break the PMU for nested guests when running older nested
hypervisor kernels under a host with this change. It's not clear there
is an easy way to avoid that, so this could wait a release or so for
the earlier fix to filter into stable kernels.
-134 cycles (8982) POWER9 virt-mode NULL hcall
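After the revert, whether to save the guest PMU follows the VPA alone;
a minimal sketch of the logic that remains (matching the code left in
place by the diff below):

	struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
	int save_pmu = 1;	/* default when no VPA is registered */

	if (lp)
		save_pmu = lp->pmcregs_in_use;
	kvmhv_save_guest_pmu(vcpu, save_pmu);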
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e7f8cc04944b..ab89db561c85 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4003,8 +4003,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.vpa.dirty = 1;
save_pmu = lp->pmcregs_in_use;
}
- /* Must save pmu if this guest is capable of running nested guests */
- save_pmu |= nesting_enabled(vcpu->kvm);
kvmhv_save_guest_pmu(vcpu, save_pmu);
#ifdef CONFIG_PPC_PSERIES
--
2.23.0
* [PATCH v1 15/55] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (13 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Nicholas Piggin
` (39 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
KVM PMU management code looks for particular frozen/disabled bits in
the PMU registers so it knows whether it must clear them when coming
out of a guest or not. Setting these bits whenever the PMU is not in
use lets KVM make that optimisation without getting confused. Longer
term, the better approach might be to move guest/host PMU switching
into the perf subsystem.
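For illustration, the kind of check KVM can then rely on, similar in
spirit to the freeze_pmu() helper added later in this series, is
sketched below (a sketch, not code from this patch):

	/* Sketch: is the PMU already safely frozen/disabled? */
	static bool pmu_is_frozen(unsigned long mmcr0, unsigned long mmcra)
	{
		if (!(mmcr0 & MMCR0_FC))
			return false;
		if (mmcra & MMCRA_SAMPLE_ENABLE)
			return false;
		if (cpu_has_feature(CPU_FTR_ARCH_31) &&
		    (!(mmcr0 & MMCR0_PMCCEXT) ||
		     !(mmcra & MMCRA_BHRB_DISABLE)))
			return false;
		return true;
	}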
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/cpu_setup_power.c | 4 ++--
arch/powerpc/kernel/dt_cpu_ftrs.c | 6 +++---
arch/powerpc/kvm/book3s_hv.c | 5 +++++
3 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/cpu_setup_power.c b/arch/powerpc/kernel/cpu_setup_power.c
index a29dc8326622..3dc61e203f37 100644
--- a/arch/powerpc/kernel/cpu_setup_power.c
+++ b/arch/powerpc/kernel/cpu_setup_power.c
@@ -109,7 +109,7 @@ static void init_PMU_HV_ISA207(void)
static void init_PMU(void)
{
mtspr(SPRN_MMCRA, 0);
- mtspr(SPRN_MMCR0, 0);
+ mtspr(SPRN_MMCR0, MMCR0_FC);
mtspr(SPRN_MMCR1, 0);
mtspr(SPRN_MMCR2, 0);
}
@@ -123,7 +123,7 @@ static void init_PMU_ISA31(void)
{
mtspr(SPRN_MMCR3, 0);
mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
- mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
+ mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT);
}
/*
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 38ea20fadc4a..a6bb0ee179cd 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -353,7 +353,7 @@ static void init_pmu_power8(void)
}
mtspr(SPRN_MMCRA, 0);
- mtspr(SPRN_MMCR0, 0);
+ mtspr(SPRN_MMCR0, MMCR0_FC);
mtspr(SPRN_MMCR1, 0);
mtspr(SPRN_MMCR2, 0);
mtspr(SPRN_MMCRS, 0);
@@ -392,7 +392,7 @@ static void init_pmu_power9(void)
mtspr(SPRN_MMCRC, 0);
mtspr(SPRN_MMCRA, 0);
- mtspr(SPRN_MMCR0, 0);
+ mtspr(SPRN_MMCR0, MMCR0_FC);
mtspr(SPRN_MMCR1, 0);
mtspr(SPRN_MMCR2, 0);
}
@@ -428,7 +428,7 @@ static void init_pmu_power10(void)
mtspr(SPRN_MMCR3, 0);
mtspr(SPRN_MMCRA, MMCRA_BHRB_DISABLE);
- mtspr(SPRN_MMCR0, MMCR0_PMCCEXT);
+ mtspr(SPRN_MMCR0, MMCR0_FC | MMCR0_PMCCEXT);
}
static int __init feat_enable_pmu_power10(struct dt_cpu_feature *f)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index ab89db561c85..2eef708c4354 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2691,6 +2691,11 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
#endif
#endif
vcpu->arch.mmcr[0] = MMCR0_FC;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ vcpu->arch.mmcr[0] |= MMCR0_PMCCEXT;
+ vcpu->arch.mmcra = MMCRA_BHRB_DISABLE;
+ }
+
vcpu->arch.ctrl = CTRL_RUNLATCH;
/* default to host PVR, since we can't spoof it */
kvmppc_set_pvr_hv(vcpu, mfspr(SPRN_PVR));
--
2.23.0
* [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (14 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 15/55] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-08-06 7:33 ` Madhavan Srinivasan
2021-08-06 9:28 ` Athira Rajeev
2021-07-26 3:49 ` [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Nicholas Piggin
` (38 subsequent siblings)
54 siblings, 2 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
It can be useful in simulators (with very constrained environments)
to allow some PMCs to run from boot so they can be sampled directly
by a test harness, rather than having to run perf.
A previous change freezes the counters at boot by default, so provide
a boot-time option to un-freeze them (plus a bit more flexibility).
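For example (usage sketch; the MMCR1 value shown is an arbitrary
illustration, not a recommended setting):

	# Unfreeze the counters at boot, leave MMCR1 at 0:
	pmu=on
	# Unfreeze the counters and also program MMCR1:
	pmu=0x1234

In either form, the perf subsystem is disabled for that boot.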
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
.../admin-guide/kernel-parameters.txt | 7 ++++
arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
2 files changed, 42 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index bdb22006f713..96b7d0ebaa40 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4089,6 +4089,13 @@
Override pmtimer IOPort with a hex value.
e.g. pmtmr=0x508
+ pmu= [PPC] Manually enable the PMU.
+ Enable the PMU by clearing the FC (freeze counters) bit in MMCR0.
+ This option is implemented for Book3S processors.
+ If a number is given, then MMCR1 is set to that number,
+ otherwise (e.g., 'pmu=on'), it is left 0. The perf
+ subsystem is disabled if this option is used.
+
pm_debug_messages [SUSPEND,KNL]
Enable suspend/resume debug messages during boot up.
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 65795cadb475..e7cef4fe17d7 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
}
#ifdef CONFIG_PPC64
+static bool pmu_override = false;
+static unsigned long pmu_override_val;
+static void do_pmu_override(void *data)
+{
+ ppc_set_pmu_inuse(1);
+ if (pmu_override_val)
+ mtspr(SPRN_MMCR1, pmu_override_val);
+ mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
+}
+
static int __init init_ppc64_pmu(void)
{
+ if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) {
+ printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n");
+ on_each_cpu(do_pmu_override, NULL, 1);
+ return 0;
+ }
+
/* run through all the pmu drivers one at a time */
if (!init_power5_pmu())
return 0;
@@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void)
return init_generic_compat_pmu();
}
early_initcall(init_ppc64_pmu);
+
+static int __init pmu_setup(char *str)
+{
+ unsigned long val;
+
+ if (!early_cpu_has_feature(CPU_FTR_HVMODE))
+ return 0;
+
+ pmu_override = true;
+
+ if (kstrtoul(str, 0, &val))
+ val = 0;
+
+ pmu_override_val = val;
+
+ return 1;
+}
+__setup("pmu=", pmu_setup);
+
#endif
--
2.23.0
* [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (15 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-08-09 3:03 ` Athira Rajeev
2021-07-26 3:49 ` [PATCH v1 18/55] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Nicholas Piggin
` (37 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Implement the P9 path PMU save/restore code in C, and remove the
POWER9/10 code from the P7/8 path assembly.
-449 cycles (8533) POWER9 virt-mode NULL hcall
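One ordering detail the C implementation preserves, shown in isolation
(a sketch; the full context is in the diff below): MMCR0[FC] gates the
counters, so MMCRA is written first and MMCR0 last when starting the
guest PMU.

	/* Set MMCRA then MMCR0 last; MMCR0[FC] starts the counters */
	mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
	mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
	/* No isync necessary because we're starting counters */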
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/asm-prototypes.h | 5 -
arch/powerpc/kvm/book3s_hv.c | 205 ++++++++++++++++++++--
arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +----
4 files changed, 200 insertions(+), 66 deletions(-)
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 222823861a67..41b8a1e1144a 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -141,11 +141,6 @@ static inline void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 msr,
bool preserve_nv) { }
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-void kvmhv_save_host_pmu(void);
-void kvmhv_load_host_pmu(void);
-void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use);
-void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu);
-
void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu);
long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr);
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2eef708c4354..d20b579ddcdf 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3735,6 +3735,188 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
trace_kvmppc_run_core(vc, 1);
}
+/*
+ * Privileged (non-hypervisor) host registers to save.
+ */
+struct p9_host_os_sprs {
+ unsigned long dscr;
+ unsigned long tidr;
+ unsigned long iamr;
+ unsigned long amr;
+ unsigned long fscr;
+
+ unsigned int pmc1;
+ unsigned int pmc2;
+ unsigned int pmc3;
+ unsigned int pmc4;
+ unsigned int pmc5;
+ unsigned int pmc6;
+ unsigned long mmcr0;
+ unsigned long mmcr1;
+ unsigned long mmcr2;
+ unsigned long mmcr3;
+ unsigned long mmcra;
+ unsigned long siar;
+ unsigned long sier1;
+ unsigned long sier2;
+ unsigned long sier3;
+ unsigned long sdar;
+};
+
+static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
+{
+ if (!(mmcr0 & MMCR0_FC))
+ goto do_freeze;
+ if (mmcra & MMCRA_SAMPLE_ENABLE)
+ goto do_freeze;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ if (!(mmcr0 & MMCR0_PMCCEXT))
+ goto do_freeze;
+ if (!(mmcra & MMCRA_BHRB_DISABLE))
+ goto do_freeze;
+ }
+ return;
+
+do_freeze:
+ mmcr0 = MMCR0_FC;
+ mmcra = 0;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mmcr0 |= MMCR0_PMCCEXT;
+ mmcra = MMCRA_BHRB_DISABLE;
+ }
+
+ mtspr(SPRN_MMCR0, mmcr0);
+ mtspr(SPRN_MMCRA, mmcra);
+ isync();
+}
+
+static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
+{
+ if (ppc_get_pmu_inuse()) {
+ /*
+ * It might be better to put PMU handling (at least for the
+ * host) in the perf subsystem because it knows more about what
+ * is being used.
+ */
+
+ /* POWER9, POWER10 do not implement HPMC or SPMC */
+
+ host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0);
+ host_os_sprs->mmcra = mfspr(SPRN_MMCRA);
+
+ freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra);
+
+ host_os_sprs->pmc1 = mfspr(SPRN_PMC1);
+ host_os_sprs->pmc2 = mfspr(SPRN_PMC2);
+ host_os_sprs->pmc3 = mfspr(SPRN_PMC3);
+ host_os_sprs->pmc4 = mfspr(SPRN_PMC4);
+ host_os_sprs->pmc5 = mfspr(SPRN_PMC5);
+ host_os_sprs->pmc6 = mfspr(SPRN_PMC6);
+ host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1);
+ host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2);
+ host_os_sprs->sdar = mfspr(SPRN_SDAR);
+ host_os_sprs->siar = mfspr(SPRN_SIAR);
+ host_os_sprs->sier1 = mfspr(SPRN_SIER);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3);
+ host_os_sprs->sier2 = mfspr(SPRN_SIER2);
+ host_os_sprs->sier3 = mfspr(SPRN_SIER3);
+ }
+ }
+}
+
+static void load_p9_guest_pmu(struct kvm_vcpu *vcpu)
+{
+ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
+ mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
+ mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
+ mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
+ mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
+ mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
+ mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
+ mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
+ mtspr(SPRN_SDAR, vcpu->arch.sdar);
+ mtspr(SPRN_SIAR, vcpu->arch.siar);
+ mtspr(SPRN_SIER, vcpu->arch.sier[0]);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
+ mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
+ mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
+ }
+
+ /* Set MMCRA then MMCR0 last */
+ mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
+ mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
+ /* No isync necessary because we're starting counters */
+}
+
+static void save_p9_guest_pmu(struct kvm_vcpu *vcpu)
+{
+ struct lppaca *lp;
+ int save_pmu = 1;
+
+ lp = vcpu->arch.vpa.pinned_addr;
+ if (lp)
+ save_pmu = lp->pmcregs_in_use;
+
+ if (save_pmu) {
+ vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0);
+ vcpu->arch.mmcra = mfspr(SPRN_MMCRA);
+
+ freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra);
+
+ vcpu->arch.pmc[0] = mfspr(SPRN_PMC1);
+ vcpu->arch.pmc[1] = mfspr(SPRN_PMC2);
+ vcpu->arch.pmc[2] = mfspr(SPRN_PMC3);
+ vcpu->arch.pmc[3] = mfspr(SPRN_PMC4);
+ vcpu->arch.pmc[4] = mfspr(SPRN_PMC5);
+ vcpu->arch.pmc[5] = mfspr(SPRN_PMC6);
+ vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1);
+ vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2);
+ vcpu->arch.sdar = mfspr(SPRN_SDAR);
+ vcpu->arch.siar = mfspr(SPRN_SIAR);
+ vcpu->arch.sier[0] = mfspr(SPRN_SIER);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3);
+ vcpu->arch.sier[1] = mfspr(SPRN_SIER2);
+ vcpu->arch.sier[2] = mfspr(SPRN_SIER3);
+ }
+ } else {
+ freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
+ }
+}
+
+static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
+{
+ if (ppc_get_pmu_inuse()) {
+ mtspr(SPRN_PMC1, host_os_sprs->pmc1);
+ mtspr(SPRN_PMC2, host_os_sprs->pmc2);
+ mtspr(SPRN_PMC3, host_os_sprs->pmc3);
+ mtspr(SPRN_PMC4, host_os_sprs->pmc4);
+ mtspr(SPRN_PMC5, host_os_sprs->pmc5);
+ mtspr(SPRN_PMC6, host_os_sprs->pmc6);
+ mtspr(SPRN_MMCR1, host_os_sprs->mmcr1);
+ mtspr(SPRN_MMCR2, host_os_sprs->mmcr2);
+ mtspr(SPRN_SDAR, host_os_sprs->sdar);
+ mtspr(SPRN_SIAR, host_os_sprs->siar);
+ mtspr(SPRN_SIER, host_os_sprs->sier1);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mtspr(SPRN_MMCR3, host_os_sprs->mmcr3);
+ mtspr(SPRN_SIER2, host_os_sprs->sier2);
+ mtspr(SPRN_SIER3, host_os_sprs->sier3);
+ }
+
+ /* Set MMCRA then MMCR0 last */
+ mtspr(SPRN_MMCRA, host_os_sprs->mmcra);
+ mtspr(SPRN_MMCR0, host_os_sprs->mmcr0);
+ isync();
+ }
+}
+
static void load_spr_state(struct kvm_vcpu *vcpu)
{
mtspr(SPRN_DSCR, vcpu->arch.dscr);
@@ -3777,17 +3959,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
vcpu->arch.dscr = mfspr(SPRN_DSCR);
}
-/*
- * Privileged (non-hypervisor) host registers to save.
- */
-struct p9_host_os_sprs {
- unsigned long dscr;
- unsigned long tidr;
- unsigned long iamr;
- unsigned long amr;
- unsigned long fscr;
-};
-
static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
{
host_os_sprs->dscr = mfspr(SPRN_DSCR);
@@ -3835,7 +4006,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
struct p9_host_os_sprs host_os_sprs;
s64 dec;
u64 tb, next_timer;
- int trap, save_pmu;
+ int trap;
WARN_ON_ONCE(vcpu->arch.ceded);
@@ -3848,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
save_p9_host_os_sprs(&host_os_sprs);
- kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */
+ save_p9_host_pmu(&host_os_sprs);
kvmppc_subcore_enter_guest();
@@ -3878,7 +4049,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
barrier();
}
#endif
- kvmhv_load_guest_pmu(vcpu);
+ load_p9_guest_pmu(vcpu);
msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
load_fp_state(&vcpu->arch.fp);
@@ -4000,16 +4171,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
- save_pmu = 1;
if (vcpu->arch.vpa.pinned_addr) {
struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
u32 yield_count = be32_to_cpu(lp->yield_count) + 1;
lp->yield_count = cpu_to_be32(yield_count);
vcpu->arch.vpa.dirty = 1;
- save_pmu = lp->pmcregs_in_use;
}
- kvmhv_save_guest_pmu(vcpu, save_pmu);
+ save_p9_guest_pmu(vcpu);
#ifdef CONFIG_PPC_PSERIES
if (kvmhv_on_pseries()) {
barrier();
@@ -4025,7 +4194,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
- kvmhv_load_host_pmu();
+ load_p9_host_pmu(&host_os_sprs);
kvmppc_subcore_exit_guest();
diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S
index 4444f83cb133..59d89e4b154a 100644
--- a/arch/powerpc/kvm/book3s_hv_interrupts.S
+++ b/arch/powerpc/kvm/book3s_hv_interrupts.S
@@ -104,7 +104,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
mtlr r0
blr
-_GLOBAL(kvmhv_save_host_pmu)
+/*
+ * void kvmhv_save_host_pmu(void)
+ */
+kvmhv_save_host_pmu:
BEGIN_FTR_SECTION
/* Work around P8 PMAE bug */
li r3, -1
@@ -138,14 +141,6 @@ BEGIN_FTR_SECTION
std r8, HSTATE_MMCR2(r13)
std r9, HSTATE_SIER(r13)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-BEGIN_FTR_SECTION
- mfspr r5, SPRN_MMCR3
- mfspr r6, SPRN_SIER2
- mfspr r7, SPRN_SIER3
- std r5, HSTATE_MMCR3(r13)
- std r6, HSTATE_SIER2(r13)
- std r7, HSTATE_SIER3(r13)
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
mfspr r3, SPRN_PMC1
mfspr r5, SPRN_PMC2
mfspr r6, SPRN_PMC3
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 9021052f1579..551ce223b40c 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -2738,10 +2738,11 @@ kvmppc_msr_interrupt:
blr
/*
+ * void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu)
+ *
* Load up guest PMU state. R3 points to the vcpu struct.
*/
-_GLOBAL(kvmhv_load_guest_pmu)
-EXPORT_SYMBOL_GPL(kvmhv_load_guest_pmu)
+kvmhv_load_guest_pmu:
mr r4, r3
mflr r0
li r3, 1
@@ -2775,27 +2776,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG)
mtspr SPRN_MMCRA, r6
mtspr SPRN_SIAR, r7
mtspr SPRN_SDAR, r8
-BEGIN_FTR_SECTION
- ld r5, VCPU_MMCR + 24(r4)
- ld r6, VCPU_SIER + 8(r4)
- ld r7, VCPU_SIER + 16(r4)
- mtspr SPRN_MMCR3, r5
- mtspr SPRN_SIER2, r6
- mtspr SPRN_SIER3, r7
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
BEGIN_FTR_SECTION
ld r5, VCPU_MMCR + 16(r4)
ld r6, VCPU_SIER(r4)
mtspr SPRN_MMCR2, r5
mtspr SPRN_SIER, r6
-BEGIN_FTR_SECTION_NESTED(96)
lwz r7, VCPU_PMC + 24(r4)
lwz r8, VCPU_PMC + 28(r4)
ld r9, VCPU_MMCRS(r4)
mtspr SPRN_SPMC1, r7
mtspr SPRN_SPMC2, r8
mtspr SPRN_MMCRS, r9
-END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
mtspr SPRN_MMCR0, r3
isync
@@ -2803,10 +2794,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
blr
/*
+ * void kvmhv_load_host_pmu(void)
+ *
* Reload host PMU state saved in the PACA by kvmhv_save_host_pmu.
*/
-_GLOBAL(kvmhv_load_host_pmu)
-EXPORT_SYMBOL_GPL(kvmhv_load_host_pmu)
+kvmhv_load_host_pmu:
mflr r0
lbz r4, PACA_PMCINUSE(r13) /* is the host using the PMU? */
cmpwi r4, 0
@@ -2844,25 +2836,18 @@ BEGIN_FTR_SECTION
mtspr SPRN_MMCR2, r8
mtspr SPRN_SIER, r9
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-BEGIN_FTR_SECTION
- ld r5, HSTATE_MMCR3(r13)
- ld r6, HSTATE_SIER2(r13)
- ld r7, HSTATE_SIER3(r13)
- mtspr SPRN_MMCR3, r5
- mtspr SPRN_SIER2, r6
- mtspr SPRN_SIER3, r7
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
mtspr SPRN_MMCR0, r3
isync
mtlr r0
23: blr
/*
+ * void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use)
+ *
* Save guest PMU state into the vcpu struct.
* r3 = vcpu, r4 = full save flag (PMU in use flag set in VPA)
*/
-_GLOBAL(kvmhv_save_guest_pmu)
-EXPORT_SYMBOL_GPL(kvmhv_save_guest_pmu)
+kvmhv_save_guest_pmu:
mr r9, r3
mr r8, r4
BEGIN_FTR_SECTION
@@ -2911,14 +2896,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
BEGIN_FTR_SECTION
std r10, VCPU_MMCR + 16(r9)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-BEGIN_FTR_SECTION
- mfspr r5, SPRN_MMCR3
- mfspr r6, SPRN_SIER2
- mfspr r7, SPRN_SIER3
- std r5, VCPU_MMCR + 24(r9)
- std r6, VCPU_SIER + 8(r9)
- std r7, VCPU_SIER + 16(r9)
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
std r7, VCPU_SIAR(r9)
std r8, VCPU_SDAR(r9)
mfspr r3, SPRN_PMC1
@@ -2936,7 +2913,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
BEGIN_FTR_SECTION
mfspr r5, SPRN_SIER
std r5, VCPU_SIER(r9)
-BEGIN_FTR_SECTION_NESTED(96)
mfspr r6, SPRN_SPMC1
mfspr r7, SPRN_SPMC2
mfspr r8, SPRN_MMCRS
@@ -2945,7 +2921,6 @@ BEGIN_FTR_SECTION_NESTED(96)
std r8, VCPU_MMCRS(r9)
lis r4, 0x8000
mtspr SPRN_MMCRS, r4
-END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96)
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
22: blr
--
2.23.0
* [PATCH v1 18/55] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (16 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Nicholas Piggin
@ 2021-07-26 3:49 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 19/55] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Nicholas Piggin
` (36 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:49 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Rather than guest/host save/restore functions, implement context switch
functions that take care of details like the VPA update for nested
guests.
The reason to split these kinds of helpers into explicit save/load
functions is mainly to schedule SPR accesses nicely, but the PMU is a
special case where the load requires mtSPR (to stop the counters) and
has other difficulties, so there is less opportunity to schedule those
accesses nicely. The SPR accesses also have side-effects if the PMU is
running, and in later changes we keep the host PMU running as long as
possible so this code can be better profiled, which further complicates
scheduling.
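The net effect at the guest entry/exit path is a simple bracket around
the run, sketched here with the new function names from the diff
(surrounding code elided):

	switch_pmu_to_guest(vcpu, &host_os_sprs);
	/* ... load FP/VEC/VSX state, enter the guest, handle the exit ... */
	switch_pmu_to_host(vcpu, &host_os_sprs);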
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++-------------------
1 file changed, 28 insertions(+), 33 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index d20b579ddcdf..091b67ef6eba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3790,7 +3790,8 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
isync();
}
-static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
+static void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
{
if (ppc_get_pmu_inuse()) {
/*
@@ -3824,10 +3825,21 @@ static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
host_os_sprs->sier3 = mfspr(SPRN_SIER3);
}
}
-}
-static void load_p9_guest_pmu(struct kvm_vcpu *vcpu)
-{
+#ifdef CONFIG_PPC_PSERIES
+ if (kvmhv_on_pseries()) {
+ barrier();
+ if (vcpu->arch.vpa.pinned_addr) {
+ struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
+ get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
+ } else {
+ get_lppaca()->pmcregs_in_use = 1;
+ }
+ barrier();
+ }
+#endif
+
+ /* load guest */
mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
@@ -3852,7 +3864,8 @@ static void load_p9_guest_pmu(struct kvm_vcpu *vcpu)
/* No isync necessary because we're starting counters */
}
-static void save_p9_guest_pmu(struct kvm_vcpu *vcpu)
+static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
{
struct lppaca *lp;
int save_pmu = 1;
@@ -3887,10 +3900,15 @@ static void save_p9_guest_pmu(struct kvm_vcpu *vcpu)
} else {
freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
}
-}
-static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
-{
+#ifdef CONFIG_PPC_PSERIES
+ if (kvmhv_on_pseries()) {
+ barrier();
+ get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
+ barrier();
+ }
+#endif
+
if (ppc_get_pmu_inuse()) {
mtspr(SPRN_PMC1, host_os_sprs->pmc1);
mtspr(SPRN_PMC2, host_os_sprs->pmc2);
@@ -4019,8 +4037,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
save_p9_host_os_sprs(&host_os_sprs);
- save_p9_host_pmu(&host_os_sprs);
-
kvmppc_subcore_enter_guest();
vc->entry_exit_map = 1;
@@ -4037,19 +4053,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
-#ifdef CONFIG_PPC_PSERIES
- if (kvmhv_on_pseries()) {
- barrier();
- if (vcpu->arch.vpa.pinned_addr) {
- struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
- get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
- } else {
- get_lppaca()->pmcregs_in_use = 1;
- }
- barrier();
- }
-#endif
- load_p9_guest_pmu(vcpu);
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
load_fp_state(&vcpu->arch.fp);
@@ -4178,14 +4182,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.vpa.dirty = 1;
}
- save_p9_guest_pmu(vcpu);
-#ifdef CONFIG_PPC_PSERIES
- if (kvmhv_on_pseries()) {
- barrier();
- get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
- barrier();
- }
-#endif
+ switch_pmu_to_host(vcpu, &host_os_sprs);
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
@@ -4194,8 +4191,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
- load_p9_host_pmu(&host_os_sprs);
-
kvmppc_subcore_exit_guest();
return trap;
--
2.23.0
* [PATCH v1 19/55] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (17 preceding siblings ...)
2021-07-26 3:49 ` [PATCH v1 18/55] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 20/55] KVM: PPC: Book3S HV P9: Factor out yield_count increment Nicholas Piggin
` (35 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
The pmcregs_in_use field in the guest VPA cannot be trusted to reflect
what the guest is doing with PMU SPRs, so the PMU must always be managed
(stopped) when exiting the guest, and SPR values set when entering the
guest, to ensure the guest can't open a covert channel or otherwise
cause other guests or the host to misbehave.
So prevent guest access to the PMU with HFSCR[PM] if pmcregs_in_use is
clear, and avoid the PMU SPR access on every partition switch. Guests
that set pmcregs_in_use incorrectly or when first setting it and using
the PMU will take a hypervisor facility unavailable interrupt that will
bring in the PMU SPRs.
-774 cycles (7759) POWER9 virt-mode NULL hcall
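The demand-fault cycle, sketched end to end (this mirrors
kvmppc_pmu_unavailable() from the diff; the comments are illustrative):

	/*
	 * 1. Guest exits with VPA pmcregs_in_use clear -> HFSCR[PM] is
	 *    cleared, so PMU SPRs are not loaded on the next entry.
	 * 2. Guest touches a PMU SPR -> H_FAC_UNAVAIL with cause
	 *    FSCR_PM_LG.
	 * 3. The handler re-enables the facility, and the next entry
	 *    loads the guest PMU SPRs.
	 */
	static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu)
	{
		if (!(vcpu->arch.hfscr_permitted & HFSCR_PM))
			return EMULATE_FAIL;	/* never permitted */
		vcpu->arch.hfscr |= HFSCR_PM;
		return RESUME_GUEST;
	}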
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 +
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 133 +++++++++++++++++------
arch/powerpc/kvm/book3s_hv_nested.c | 38 +++----
4 files changed, 119 insertions(+), 54 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index eaf3a562bf1e..df6bed4b2a46 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -39,6 +39,7 @@ struct kvm_nested_guest {
pgd_t *shadow_pgtable; /* our page table for this guest */
u64 l1_gr_to_hr; /* L1's addr of part'n-scoped table */
u64 process_table; /* process table entry for this guest */
+ u64 hfscr; /* L1's HFSCR */
long refcnt; /* number of pointers to this struct */
struct mutex tlb_lock; /* serialize page faults and tlbies */
struct kvm_nested_guest *next;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 9f52f282b1aa..aee41edcfe6b 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -804,6 +804,7 @@ struct kvm_vcpu_arch {
struct kvmppc_vpa slb_shadow;
spinlock_t tbacct_lock;
+ u64 hfscr_permitted; /* A mask of permitted HFSCR facilities */
u64 busy_stolen;
u64 busy_preempt;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 091b67ef6eba..7c75f63648d6 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1421,6 +1421,23 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu)
return RESUME_GUEST;
}
+/*
+ * If the lppaca had pmcregs_in_use clear when we exited the guest, then
+ * HFSCR_PM is cleared for next entry. If the guest then tries to access
+ * the PMU SPRs, we get this facility unavailable interrupt. Putting HFSCR_PM
+ * back in the guest HFSCR will cause the next entry to load the PMU SPRs and
+ * allow the guest access to continue.
+ */
+static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu)
+{
+ if (!(vcpu->arch.hfscr_permitted & HFSCR_PM))
+ return EMULATE_FAIL;
+
+ vcpu->arch.hfscr |= HFSCR_PM;
+
+ return RESUME_GUEST;
+}
+
static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
struct task_struct *tsk)
{
@@ -1705,16 +1722,22 @@ XXX benchmark guest exits
* to emulate.
* Otherwise, we just generate a program interrupt to the guest.
*/
- case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: {
r = EMULATE_FAIL;
- if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) &&
- cpu_has_feature(CPU_FTR_ARCH_300))
- r = kvmppc_emulate_doorbell_instr(vcpu);
+ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ unsigned long cause = vcpu->arch.hfscr >> 56;
+
+ if (cause == FSCR_MSGP_LG)
+ r = kvmppc_emulate_doorbell_instr(vcpu);
+ if (cause == FSCR_PM_LG)
+ r = kvmppc_pmu_unavailable(vcpu);
+ }
if (r == EMULATE_FAIL) {
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
r = RESUME_GUEST;
}
break;
+ }
case BOOK3S_INTERRUPT_HV_RM_HARD:
r = RESUME_PASSTHROUGH;
@@ -2723,6 +2746,13 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
if (cpu_has_feature(CPU_FTR_TM_COMP))
vcpu->arch.hfscr |= HFSCR_TM;
+ vcpu->arch.hfscr_permitted = vcpu->arch.hfscr;
+
+ /*
+ * PM is demand-faulted so start with it clear.
+ */
+ vcpu->arch.hfscr &= ~HFSCR_PM;
+
kvmppc_mmu_book3s_hv_init(vcpu);
vcpu->arch.state = KVMPPC_VCPU_NOTREADY;
@@ -3793,6 +3823,14 @@ static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
static void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
+ struct lppaca *lp;
+ int load_pmu = 1;
+
+ lp = vcpu->arch.vpa.pinned_addr;
+ if (lp)
+ load_pmu = lp->pmcregs_in_use;
+
+ /* Save host */
if (ppc_get_pmu_inuse()) {
/*
* It might be better to put PMU handling (at least for the
@@ -3827,41 +3865,47 @@ static void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
}
#ifdef CONFIG_PPC_PSERIES
+ /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */
if (kvmhv_on_pseries()) {
barrier();
- if (vcpu->arch.vpa.pinned_addr) {
- struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
- get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
- } else {
- get_lppaca()->pmcregs_in_use = 1;
- }
+ get_lppaca()->pmcregs_in_use = load_pmu;
barrier();
}
#endif
- /* load guest */
- mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
- mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
- mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
- mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
- mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
- mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
- mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
- mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
- mtspr(SPRN_SDAR, vcpu->arch.sdar);
- mtspr(SPRN_SIAR, vcpu->arch.siar);
- mtspr(SPRN_SIER, vcpu->arch.sier[0]);
+ /*
+ * Load guest. If the VPA said the PMCs are not in use but the guest
+ * tried to access them anyway, HFSCR[PM] will be set by the HFAC
+ * fault so we can make forward progress.
+ */
+ if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) {
+ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
+ mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
+ mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
+ mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
+ mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
+ mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
+ mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
+ mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
+ mtspr(SPRN_SDAR, vcpu->arch.sdar);
+ mtspr(SPRN_SIAR, vcpu->arch.siar);
+ mtspr(SPRN_SIER, vcpu->arch.sier[0]);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
+ mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
+ mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
+ }
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
- mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
- mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
- }
+ /* Set MMCRA then MMCR0 last */
+ mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
+ mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
+ /* No isync necessary because we're starting counters */
- /* Set MMCRA then MMCR0 last */
- mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
- mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
- /* No isync necessary because we're starting counters */
+ if (!vcpu->arch.nested &&
+ (vcpu->arch.hfscr_permitted & HFSCR_PM))
+ vcpu->arch.hfscr |= HFSCR_PM;
+ }
}
static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
@@ -3897,9 +3941,32 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
vcpu->arch.sier[1] = mfspr(SPRN_SIER2);
vcpu->arch.sier[2] = mfspr(SPRN_SIER3);
}
- } else {
+
+ } else if (vcpu->arch.hfscr & HFSCR_PM) {
+ /*
+ * The guest accessed PMC SPRs without specifying they should
+ * be preserved, or it cleared pmcregs_in_use after the last
+ * access. Just ensure they are frozen.
+ */
freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
- }
+
+ /*
+ * Demand-fault PMU register access in the guest.
+ *
+ * This is used to grab the guest's VPA pmcregs_in_use value
+ * and reflect it into the host's VPA in the case of a nested
+ * hypervisor.
+ *
+ * It also avoids having to zero-out SPRs after each guest
+ * exit to avoid side-channels.
+ *
+ * This is cleared here when we exit the guest, so later HFSCR
+ * interrupt handling can add it back to run the guest with
+ * PM enabled next time.
+ */
+ if (!vcpu->arch.nested)
+ vcpu->arch.hfscr &= ~HFSCR_PM;
+ } /* otherwise the PMU should still be frozen */
#ifdef CONFIG_PPC_PSERIES
if (kvmhv_on_pseries()) {
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 983628ed4376..3ffc63ffebc5 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -104,16 +104,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
- /*
- * When loading the hypervisor-privileged registers to run L2,
- * we might have used bits from L1 state to restrict what the
- * L2 state is allowed to be. Since L1 is not allowed to read
- * the HV registers, do not include these modifications in the
- * return state.
- */
- hr->hfscr = ((~HFSCR_INTR_CAUSE & hr->hfscr) |
- (HFSCR_INTR_CAUSE & vcpu->arch.hfscr));
-
hr->dpdes = vc->dpdes;
hr->purr = vcpu->arch.purr;
hr->spurr = vcpu->arch.spurr;
@@ -137,14 +127,23 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
case BOOK3S_INTERRUPT_H_INST_STORAGE:
hr->asdr = vcpu->arch.fault_gpa;
break;
- case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
- {
- u8 cause = vcpu->arch.hfscr >> 56;
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: {
+ u64 cause = vcpu->arch.hfscr >> 56;
WARN_ON_ONCE(cause >= BITS_PER_LONG);
- if (!(hr->hfscr & (1UL << cause)))
+ /*
+ * When loading the hypervisor-privileged registers to run L2,
+ * we might have used bits from L1 state to restrict what the
+ * L2 state is allowed to be. Since L1 is not allowed to read
+ * the HV registers, do not include these modifications in the
+ * return state.
+ */
+ hr->hfscr &= ~HFSCR_INTR_CAUSE;
+ if (!(hr->hfscr & (1UL << cause))) {
+ hr->hfscr |= vcpu->arch.hfscr & HFSCR_INTR_CAUSE;
break;
+ }
/*
* We have disabled this facility, so it does not
@@ -152,10 +151,6 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
*/
vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST;
kvmppc_load_last_inst(vcpu, INST_GENERIC, &vcpu->arch.emul_inst);
-
- /* Don't leak the cause field */
- hr->hfscr &= ~HFSCR_INTR_CAUSE;
-
fallthrough;
}
case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
@@ -299,10 +294,10 @@ static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
(vc->lpcr & ~mask) | (*lpcr & mask));
/*
- * Don't let L1 enable features for L2 which we've disabled for L1,
- * but preserve the interrupt cause field.
+ * Don't let L1 enable features for L2 which we disallow for L1.
+ * Preserve the interrupt cause field.
*/
- vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | l1_hv->hfscr);
+ vcpu->arch.hfscr = l2_hv->hfscr & (HFSCR_INTR_CAUSE | vcpu->arch.hfscr_permitted);
/* Don't let data address watchpoint match in hypervisor state */
vcpu->arch.dawrx0 = l2_hv->dawrx0 & ~DAWRX_HYP;
@@ -389,6 +384,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
/* set L1 state to L2 state */
vcpu->arch.nested = l2;
vcpu->arch.nested_vcpu_id = l2_hv.vcpu_token;
+ l2->hfscr = l2_hv.hfscr;
vcpu->arch.regs = l2_regs;
/* Guest must always run with ME enabled, HV disabled. */
--
2.23.0
* [PATCH v1 20/55] KVM: PPC: Book3S HV P9: Factor out yield_count increment
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (18 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 19/55] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 21/55] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Nicholas Piggin
` (34 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
Factor duplicated code into a helper function.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7c75f63648d6..772f1e6c93e1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4081,6 +4081,16 @@ static inline bool hcall_is_xics(unsigned long req)
req == H_IPOLL || req == H_XIRR || req == H_XIRR_X;
}
+static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu)
+{
+ struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
+ if (lp) {
+ u32 yield_count = be32_to_cpu(lp->yield_count) + 1;
+ lp->yield_count = cpu_to_be32(yield_count);
+ vcpu->arch.vpa.dirty = 1;
+ }
+}
+
/*
* Guest entry for POWER9 and later CPUs.
*/
@@ -4109,12 +4119,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->entry_exit_map = 1;
vc->in_guest = 1;
- if (vcpu->arch.vpa.pinned_addr) {
- struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
- u32 yield_count = be32_to_cpu(lp->yield_count) + 1;
- lp->yield_count = cpu_to_be32(yield_count);
- vcpu->arch.vpa.dirty = 1;
- }
+ vcpu_vpa_increment_dispatch(vcpu);
if (cpu_has_feature(CPU_FTR_TM) ||
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
@@ -4242,12 +4247,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
- if (vcpu->arch.vpa.pinned_addr) {
- struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
- u32 yield_count = be32_to_cpu(lp->yield_count) + 1;
- lp->yield_count = cpu_to_be32(yield_count);
- vcpu->arch.vpa.dirty = 1;
- }
+ vcpu_vpa_increment_dispatch(vcpu);
switch_pmu_to_host(vcpu, &host_os_sprs);
--
2.23.0
* [PATCH v1 21/55] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (19 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 20/55] KVM: PPC: Book3S HV P9: Factor out yield_count increment Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 22/55] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Nicholas Piggin
` (33 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Processors that support KVM HV do not require read-modify-write of
the CTRL SPR to set/clear their thread's runlatch. Just write 1 or 0
to it.
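In C terms the simplification is (sketch of before/after; the assembly
changes below are analogous):

	/* Before: read-modify-write to clear the runlatch */
	mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1);
	/* After: a direct write is sufficient on KVM HV capable CPUs */
	mtspr(SPRN_CTRLT, 0);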
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 2 +-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++++++---------
2 files changed, 7 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 772f1e6c93e1..f212d5013622 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4024,7 +4024,7 @@ static void load_spr_state(struct kvm_vcpu *vcpu)
*/
if (!(vcpu->arch.ctrl & 1))
- mtspr(SPRN_CTRLT, mfspr(SPRN_CTRLF) & ~1);
+ mtspr(SPRN_CTRLT, 0);
}
static void store_spr_state(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 551ce223b40c..05be8648937d 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -775,12 +775,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
mtspr SPRN_AMR,r5
mtspr SPRN_UAMOR,r6
- /* Restore state of CTRL run bit; assume 1 on entry */
+ /* Restore state of CTRL run bit; the host currently has it set to 1 */
lwz r5,VCPU_CTRL(r4)
andi. r5,r5,1
bne 4f
- mfspr r6,SPRN_CTRLF
- clrrdi r6,r6,1
+ li r6,0
mtspr SPRN_CTRLT,r6
4:
/* Secondary threads wait for primary to have done partition switch */
@@ -1203,12 +1202,12 @@ guest_bypass:
stw r0, VCPU_CPU(r9)
stw r0, VCPU_THREAD_CPU(r9)
- /* Save guest CTRL register, set runlatch to 1 */
+ /* Save guest CTRL register, set runlatch to 1 if it was clear */
mfspr r6,SPRN_CTRLF
stw r6,VCPU_CTRL(r9)
andi. r0,r6,1
bne 4f
- ori r6,r6,1
+ li r6,1
mtspr SPRN_CTRLT,r6
4:
/*
@@ -2178,8 +2177,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
* Also clear the runlatch bit before napping.
*/
kvm_do_nap:
- mfspr r0, SPRN_CTRLF
- clrrdi r0, r0, 1
+ li r0,0
mtspr SPRN_CTRLT, r0
li r0,1
@@ -2198,8 +2196,7 @@ kvm_nap_sequence: /* desired LPCR value in r5 */
bl isa206_idle_insn_mayloss
- mfspr r0, SPRN_CTRLF
- ori r0, r0, 1
+ li r0,1
mtspr SPRN_CTRLT, r0
mtspr SPRN_SRR1, r3
--
2.23.0
* [PATCH v1 22/55] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (20 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 21/55] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Nicholas Piggin
` (32 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Move the SPR update into its relevant helper function. This will
help with SPR scheduling improvements in later changes.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index f212d5013622..2e966d62a583 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4057,6 +4057,8 @@ static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
+ mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
+
mtspr(SPRN_PSPB, 0);
mtspr(SPRN_UAMOR, 0);
@@ -4256,8 +4258,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
timer_rearm_host_dec(tb);
- mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
-
kvmppc_subcore_exit_guest();
return trap;
--
2.23.0
* [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (21 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 22/55] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 6:57 ` kernel test robot
2021-07-26 7:01 ` kernel test robot
2021-07-26 3:50 ` [PATCH v1 24/55] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Nicholas Piggin
` (31 subsequent siblings)
54 siblings, 2 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This reduces the number of mtmsrd instructions required to enable
facility bits when saving/restoring registers, by having the KVM code
set all required bits up front rather than using the individual
facility functions, each of which sets its particular MSR bit.
-42 cycles (7803) POWER9 virt-mode NULL hcall
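The up-front mask construction, as a standalone sketch (this mirrors
the code added in the diff below):

	unsigned long msr = 0;

	if (IS_ENABLED(CONFIG_PPC_FPU))
		msr |= MSR_FP;
	if (cpu_has_feature(CPU_FTR_ALTIVEC))
		msr |= MSR_VEC;
	if (cpu_has_feature(CPU_FTR_VSX))
		msr |= MSR_VSX;
	if (cpu_has_feature(CPU_FTR_TM) ||
	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
		msr |= MSR_TM;
	msr = msr_check_and_set(msr);	/* one mtmsrd enables them all */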
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/process.c | 24 +++++++++++
arch/powerpc/kvm/book3s_hv.c | 61 ++++++++++++++++++---------
arch/powerpc/kvm/book3s_hv_p9_entry.c | 1 +
3 files changed, 67 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 185beb290580..00b55b38a460 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -593,6 +593,30 @@ static void save_all(struct task_struct *tsk)
msr_check_and_clear(msr_all_available);
}
+void save_user_regs_kvm(void)
+{
+ unsigned long usermsr;
+
+ if (!current->thread.regs)
+ return;
+
+ usermsr = current->thread.regs->msr;
+
+ if (usermsr & MSR_FP)
+ save_fpu(current);
+
+ if (usermsr & MSR_VEC)
+ save_altivec(current);
+
+ if (usermsr & MSR_TM) {
+ current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
+ current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
+ current->thread.tm_texasr = mfspr(SPRN_TEXASR);
+ current->thread.regs->msr &= ~MSR_TM;
+ }
+}
+EXPORT_SYMBOL_GPL(save_user_regs_kvm);
+
void flush_all_to_thread(struct task_struct *tsk)
{
if (tsk->thread.regs) {
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2e966d62a583..dedcf3ddba3b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4103,6 +4103,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
struct p9_host_os_sprs host_os_sprs;
s64 dec;
u64 tb, next_timer;
+ unsigned long msr;
int trap;
WARN_ON_ONCE(vcpu->arch.ceded);
@@ -4114,8 +4115,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
if (next_timer < time_limit)
time_limit = next_timer;
+ vcpu->arch.ceded = 0;
+
save_p9_host_os_sprs(&host_os_sprs);
+ /* MSR bits may have been cleared by context switch */
+ msr = 0;
+ if (IS_ENABLED(CONFIG_PPC_FPU))
+ msr |= MSR_FP;
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ msr |= MSR_VEC;
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr |= MSR_VSX;
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ msr |= MSR_TM;
+ msr = msr_check_and_set(msr);
+
kvmppc_subcore_enter_guest();
vc->entry_exit_map = 1;
@@ -4124,12 +4140,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ msr = mfmsr(); /* TM restore can update msr */
+ }
switch_pmu_to_guest(vcpu, &host_os_sprs);
- msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
load_fp_state(&vcpu->arch.fp);
#ifdef CONFIG_ALTIVEC
load_vr_state(&vcpu->arch.vr);
@@ -4238,7 +4255,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
- msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
store_fp_state(&vcpu->arch.fp);
#ifdef CONFIG_ALTIVEC
store_vr_state(&vcpu->arch.vr);
@@ -4767,6 +4783,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
goto done;
}
+void save_user_regs_kvm(void);
+
static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
@@ -4776,19 +4794,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
unsigned long user_tar = 0;
unsigned int user_vrsave;
struct kvm *kvm;
+ unsigned long msr;
if (!vcpu->arch.sane) {
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return -EINVAL;
}
+ /* No need to go into the guest when all we'll do is come back out */
+ if (signal_pending(current)) {
+ run->exit_reason = KVM_EXIT_INTR;
+ return -EINTR;
+ }
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
/*
* Don't allow entry with a suspended transaction, because
* the guest entry/exit code will lose it.
- * If the guest has TM enabled, save away their TM-related SPRs
- * (they will get restored by the TM unavailable interrupt).
*/
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (cpu_has_feature(CPU_FTR_TM) && current->thread.regs &&
(current->thread.regs->msr & MSR_TM)) {
if (MSR_TM_ACTIVE(current->thread.regs->msr)) {
@@ -4796,12 +4819,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
run->fail_entry.hardware_entry_failure_reason = 0;
return -EINVAL;
}
- /* Enable TM so we can read the TM SPRs */
- mtmsr(mfmsr() | MSR_TM);
- current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
- current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
- current->thread.tm_texasr = mfspr(SPRN_TEXASR);
- current->thread.regs->msr &= ~MSR_TM;
}
#endif
@@ -4816,18 +4833,24 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
kvmppc_core_prepare_to_enter(vcpu);
- /* No need to go into the guest when all we'll do is come back out */
- if (signal_pending(current)) {
- run->exit_reason = KVM_EXIT_INTR;
- return -EINTR;
- }
-
kvm = vcpu->kvm;
atomic_inc(&kvm->arch.vcpus_running);
/* Order vcpus_running vs. mmu_ready, see kvmppc_alloc_reset_hpt */
smp_mb();
- flush_all_to_thread(current);
+ msr = 0;
+ if (IS_ENABLED(CONFIG_PPC_FPU))
+ msr |= MSR_FP;
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ msr |= MSR_VEC;
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr |= MSR_VSX;
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ msr |= MSR_TM;
+ msr = msr_check_and_set(msr);
+
+ save_user_regs_kvm();
/* Save userspace EBB and other register values */
if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index a7f63082b4e3..fb9cb34445ea 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -224,6 +224,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->tb_offset_applied = vc->tb_offset;
}
+ /* Could avoid mfmsr by passing around, but probably no big deal */
msr = mfmsr();
host_hfscr = mfspr(SPRN_HFSCR);
--
2.23.0
* [PATCH v1 24/55] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (22 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 25/55] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Nicholas Piggin
` (30 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Moving the mtmsrd after the host SPRs are saved and before the guest
SPRs start to be loaded can prevent an SPR scoreboard stall (because
the mtmsrd is L=1 type, which does not cause context synchronisation).
This also makes it more convenient to combine with the mtmsrd L=0
instruction that enables facilities just below, but that is not done
yet.
-12 cycles (7791) POWER9 virt-mode NULL hcall
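As a minimal sketch of the resulting ordering (using only functions
that appear in the hunks below; this is not a verbatim copy of the
final code):

        save_p9_host_os_sprs(&host_os_sprs);    /* mfSPR group */

        /* MSR[EE] is now disabled here, after the saves... */
        hard_irq_disable();
        if (lazy_irq_pending())
                return 0;

        /*
         * ...so the facility-enabling mtmsrd (L=1) sits between the
         * mfSPR group above and the guest mtSPR group that follows,
         * where it does not stall the SPR scoreboard on either side.
         */
        msr = msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);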
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index dedcf3ddba3b..7654235c1507 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4119,6 +4119,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
save_p9_host_os_sprs(&host_os_sprs);
+ /*
+ * This could be combined with MSR[RI] clearing, but that expands
+ * the unrecoverable window. It would be better to cover unrecoverable
+ * with KVM bad interrupt handling rather than use MSR[RI] at all.
+ *
+ * Much more difficult and less worthwhile to combine with IR/DR
+ * disable.
+ */
+ hard_irq_disable();
+ if (lazy_irq_pending())
+ return 0;
+
/* MSR bits may have been cleared by context switch */
msr = 0;
if (IS_ENABLED(CONFIG_PPC_FPU))
@@ -4618,6 +4630,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
struct kvmppc_vcore *vc;
struct kvm *kvm = vcpu->kvm;
struct kvm_nested_guest *nested = vcpu->arch.nested;
+ unsigned long flags;
trace_kvmppc_run_vcpu_enter(vcpu);
@@ -4661,11 +4674,11 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
if (kvm_is_radix(kvm))
kvmppc_prepare_radix_vcpu(vcpu, pcpu);
- local_irq_disable();
- hard_irq_disable();
+ /* flags save not required, but irq_pmu has no disable/enable API */
+ powerpc_local_irq_pmu_save(flags);
if (signal_pending(current))
goto sigpend;
- if (lazy_irq_pending() || need_resched() || !kvm->arch.mmu_ready)
+ if (need_resched() || !kvm->arch.mmu_ready)
goto out;
if (!nested) {
@@ -4720,7 +4733,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
guest_exit_irqoff();
- local_irq_enable();
+ powerpc_local_irq_pmu_restore(flags);
cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
@@ -4778,7 +4791,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
run->exit_reason = KVM_EXIT_INTR;
vcpu->arch.ret = -EINTR;
out:
- local_irq_enable();
+ powerpc_local_irq_pmu_restore(flags);
preempt_enable();
goto done;
}
--
2.23.0
* [PATCH v1 25/55] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (23 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 24/55] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Nicholas Piggin
` (29 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
This small cleanup makes it a bit easier to match up entry and exit
operations.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7654235c1507..4d757e4904c4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3045,6 +3045,13 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc)
kvmppc_ipi_thread(cpu);
}
+/* Old path does this in asm */
+static void kvmppc_stop_thread(struct kvm_vcpu *vcpu)
+{
+ vcpu->cpu = -1;
+ vcpu->arch.thread_cpu = -1;
+}
+
static void kvmppc_wait_for_nap(int n_threads)
{
int cpu = smp_processor_id();
@@ -4260,8 +4267,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
dec = (s32) dec;
tb = mftb();
vcpu->arch.dec_expires = dec + tb;
- vcpu->cpu = -1;
- vcpu->arch.thread_cpu = -1;
store_spr_state(vcpu);
@@ -4733,6 +4738,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
guest_exit_irqoff();
+ kvmppc_stop_thread(vcpu);
+
powerpc_local_irq_pmu_restore(flags);
cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
--
2.23.0
* [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (24 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 25/55] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-08-07 23:17 ` Michael Ellerman
2021-07-26 3:50 ` [PATCH v1 27/55] KVM: PPC: Book3S HV P9: Move TB updates Nicholas Piggin
` (28 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Change dec_expires to be relative to the guest timebase, and allow
it to be moved into the low-level P9 guest entry functions, to improve
SPR access scheduling.
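In sketch form, the convention this introduces (mirroring the helper
added in the hunk below): dec_expires is now stored in guest timebase
units, and host-side comparisons subtract the vcore's tb_offset via
the helper:

        /* dec_expires is guest-TB; convert before comparing to host TB */
        if (get_tb() < kvmppc_dec_expires_host_tb(vcpu)) {
                /* guest decrementer has not yet expired, in host time */
        }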
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_book3s.h | 6 +++
arch/powerpc/include/asm/kvm_host.h | 2 +-
arch/powerpc/kvm/book3s_hv.c | 58 +++++++++++++------------
arch/powerpc/kvm/book3s_hv_nested.c | 3 ++
arch/powerpc/kvm/book3s_hv_p9_entry.c | 10 ++++-
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 ------
6 files changed, 49 insertions(+), 44 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index caaa0f592d8e..15b573671f99 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -406,6 +406,12 @@ static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
return vcpu->arch.fault_dar;
}
+/* Expiry time of vcpu DEC relative to host TB */
+static inline u64 kvmppc_dec_expires_host_tb(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.dec_expires - vcpu->arch.vcore->tb_offset;
+}
+
static inline bool is_kvmppc_resume_guest(int r)
{
return (r == RESUME_GUEST || r == RESUME_GUEST_NV);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index aee41edcfe6b..f105eaeb4521 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -742,7 +742,7 @@ struct kvm_vcpu_arch {
struct hrtimer dec_timer;
u64 dec_jiffies;
- u64 dec_expires;
+ u64 dec_expires; /* Relative to guest timebase. */
unsigned long pending_exceptions;
u8 ceded;
u8 prodded;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d757e4904c4..027ae0b60e70 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2237,8 +2237,7 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
*val = get_reg_val(id, vcpu->arch.vcore->arch_compat);
break;
case KVM_REG_PPC_DEC_EXPIRY:
- *val = get_reg_val(id, vcpu->arch.dec_expires +
- vcpu->arch.vcore->tb_offset);
+ *val = get_reg_val(id, vcpu->arch.dec_expires);
break;
case KVM_REG_PPC_ONLINE:
*val = get_reg_val(id, vcpu->arch.online);
@@ -2490,8 +2489,7 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
r = kvmppc_set_arch_compat(vcpu, set_reg_val(id, *val));
break;
case KVM_REG_PPC_DEC_EXPIRY:
- vcpu->arch.dec_expires = set_reg_val(id, *val) -
- vcpu->arch.vcore->tb_offset;
+ vcpu->arch.dec_expires = set_reg_val(id, *val);
break;
case KVM_REG_PPC_ONLINE:
i = set_reg_val(id, *val);
@@ -2877,13 +2875,13 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu)
unsigned long dec_nsec, now;
now = get_tb();
- if (now > vcpu->arch.dec_expires) {
+ if (now > kvmppc_dec_expires_host_tb(vcpu)) {
/* decrementer has already gone negative */
kvmppc_core_queue_dec(vcpu);
kvmppc_core_prepare_to_enter(vcpu);
return;
}
- dec_nsec = tb_to_ns(vcpu->arch.dec_expires - now);
+ dec_nsec = tb_to_ns(kvmppc_dec_expires_host_tb(vcpu) - now);
hrtimer_start(&vcpu->arch.dec_timer, dec_nsec, HRTIMER_MODE_REL);
vcpu->arch.timer_running = 1;
}
@@ -3355,7 +3353,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
*/
spin_unlock(&vc->lock);
/* cancel pending dec exception if dec is positive */
- if (now < vcpu->arch.dec_expires &&
+ if (now < kvmppc_dec_expires_host_tb(vcpu) &&
kvmppc_core_pending_dec(vcpu))
kvmppc_core_dequeue_dec(vcpu);
@@ -4174,20 +4172,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
load_spr_state(vcpu);
- /*
- * When setting DEC, we must always deal with irq_work_raise via NMI vs
- * setting DEC. The problem occurs right as we switch into guest mode
- * if a NMI hits and sets pending work and sets DEC, then that will
- * apply to the guest and not bring us back to the host.
- *
- * irq_work_raise could check a flag (or possibly LPCR[HDICE] for
- * example) and set HDEC to 1? That wouldn't solve the nested hv
- * case which needs to abort the hcall or zero the time limit.
- *
- * XXX: Another day's problem.
- */
- mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
-
if (kvmhv_on_pseries()) {
/*
* We need to save and restore the guest visible part of the
@@ -4213,6 +4197,23 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
hvregs.vcpu_token = vcpu->vcpu_id;
}
hvregs.hdec_expiry = time_limit;
+
+ /*
+ * When setting DEC, we must always deal with irq_work_raise
+ * via NMI vs setting DEC. The problem occurs right as we
+ * switch into guest mode if a NMI hits and sets pending work
+ * and sets DEC, then that will apply to the guest and not
+ * bring us back to the host.
+ *
+ * irq_work_raise could check a flag (or possibly LPCR[HDICE]
+ * for example) and set HDEC to 1? That wouldn't solve the
+ * nested hv case which needs to abort the hcall or zero the
+ * time limit.
+ *
+ * XXX: Another day's problem.
+ */
+ mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb);
+
mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs),
@@ -4224,6 +4225,12 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
mtspr(SPRN_PSSCR_PR, host_psscr);
+ dec = mfspr(SPRN_DEC);
+ if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
+ dec = (s32) dec;
+ tb = mftb();
+ vcpu->arch.dec_expires = dec + (tb + vc->tb_offset);
+
/* H_CEDE has to be handled now, not later */
if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
kvmppc_get_gpr(vcpu, 3) == H_CEDE) {
@@ -4231,6 +4238,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
kvmppc_set_gpr(vcpu, 3, 0);
trap = 0;
}
+
} else {
kvmppc_xive_push_vcpu(vcpu);
trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr);
@@ -4262,12 +4270,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.slb_max = 0;
}
- dec = mfspr(SPRN_DEC);
- if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
- dec = (s32) dec;
- tb = mftb();
- vcpu->arch.dec_expires = dec + tb;
-
store_spr_state(vcpu);
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
@@ -4752,7 +4754,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
* by L2 and the L1 decrementer is provided in hdec_expires
*/
if (kvmppc_core_pending_dec(vcpu) &&
- ((get_tb() < vcpu->arch.dec_expires) ||
+ ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) ||
(trap == BOOK3S_INTERRUPT_SYSCALL &&
kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED)))
kvmppc_core_dequeue_dec(vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 3ffc63ffebc5..fad7bc8736ea 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -380,6 +380,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
/* convert TB values/offsets to host (L0) values */
hdec_exp = l2_hv.hdec_expiry - vc->tb_offset;
vc->tb_offset += l2_hv.tb_offset;
+ vcpu->arch.dec_expires += l2_hv.tb_offset;
/* set L1 state to L2 state */
vcpu->arch.nested = l2;
@@ -421,6 +422,8 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
if (l2_regs.msr & MSR_TS_MASK)
vcpu->arch.shregs.msr |= MSR_TS_S;
vc->tb_offset = saved_l1_hv.tb_offset;
+ /* XXX: is this always the same delta as saved_l1_hv.tb_offset? */
+ vcpu->arch.dec_expires -= l2_hv.tb_offset;
restore_hv_regs(vcpu, &saved_l1_hv);
vcpu->arch.purr += delta_purr;
vcpu->arch.spurr += delta_spurr;
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index fb9cb34445ea..814b0dfd590f 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -188,7 +188,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
struct kvm *kvm = vcpu->kvm;
struct kvm_nested_guest *nested = vcpu->arch.nested;
struct kvmppc_vcore *vc = vcpu->arch.vcore;
- s64 hdec;
+ s64 hdec, dec;
u64 tb, purr, spurr;
u64 *exsave;
bool ri_set;
@@ -317,6 +317,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
*/
mtspr(SPRN_HDEC, hdec);
+ mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
+
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
tm_return_to_guest:
#endif
@@ -461,6 +463,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
+ dec = mfspr(SPRN_DEC);
+ if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
+ dec = (s32) dec;
+ tb = mftb();
+ vcpu->arch.dec_expires = dec + tb;
+
/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
mtspr(SPRN_PSSCR, host_psscr |
(local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 05be8648937d..16cb3240df52 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -808,10 +808,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
* Set the decrementer to the guest decrementer.
*/
ld r8,VCPU_DEC_EXPIRES(r4)
- /* r8 is a host timebase value here, convert to guest TB */
- ld r5,HSTATE_KVM_VCORE(r13)
- ld r6,VCORE_TB_OFFSET_APPL(r5)
- add r8,r8,r6
mftb r7
subf r3,r7,r8
mtspr SPRN_DEC,r3
@@ -1186,9 +1182,6 @@ guest_bypass:
mftb r6
extsw r5,r5
16: add r5,r5,r6
- /* r5 is a guest timebase value here, convert to host TB */
- ld r4,VCORE_TB_OFFSET_APPL(r3)
- subf r5,r4,r5
std r5,VCPU_DEC_EXPIRES(r9)
/* Increment exit count, poke other threads to exit */
@@ -2153,10 +2146,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
67:
/* save expiry time of guest decrementer */
add r3, r3, r5
- ld r4, HSTATE_KVM_VCPU(r13)
- ld r5, HSTATE_KVM_VCORE(r13)
- ld r6, VCORE_TB_OFFSET_APPL(r5)
- subf r3, r6, r3 /* convert to host TB value */
std r3, VCPU_DEC_EXPIRES(r4)
#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
@@ -2253,9 +2242,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_TM)
/* Restore guest decrementer */
ld r3, VCPU_DEC_EXPIRES(r4)
- ld r5, HSTATE_KVM_VCORE(r13)
- ld r6, VCORE_TB_OFFSET_APPL(r5)
- add r3, r3, r6 /* convert host TB to guest TB value */
mftb r7
subf r3, r7, r3
mtspr SPRN_DEC, r3
--
2.23.0
* [PATCH v1 27/55] KVM: PPC: Book3S HV P9: Move TB updates
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (25 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 28/55] KVM: PPC: Book3S HV P9: Optimise timebase reads Nicholas Piggin
` (27 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Move the TB updates to sit between saving and loading the guest and
host SPRs, to improve scheduling by keeping issue-NTC (next-to-complete)
operations together as much as possible.
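As background for the hunks below, a sketch of the TBU40 update being
moved: mtspr(SPRN_TBU40) writes only the upper 40 bits of the timebase,
so if the low 24 bits carried past the written value while the update
was in flight, the upper bits must be bumped by one unit (0x1000000):

        u64 new_tb = tb + offset;       /* offset: e.g. vc->tb_offset */
        mtspr(SPRN_TBU40, new_tb);      /* sets bits 0..39 only */
        if ((mftb() & 0xffffff) < (new_tb & 0xffffff))
                mtspr(SPRN_TBU40, new_tb + 0x1000000); /* low bits wrapped */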
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 36 +++++++++++++--------------
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 814b0dfd590f..e7793bb806eb 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -215,15 +215,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.ceded = 0;
- if (vc->tb_offset) {
- u64 new_tb = tb + vc->tb_offset;
- mtspr(SPRN_TBU40, new_tb);
- tb = mftb();
- if ((tb & 0xffffff) < (new_tb & 0xffffff))
- mtspr(SPRN_TBU40, new_tb + 0x1000000);
- vc->tb_offset_applied = vc->tb_offset;
- }
-
/* Could avoid mfmsr by passing around, but probably no big deal */
msr = mfmsr();
@@ -238,6 +229,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
host_dawrx1 = mfspr(SPRN_DAWRX1);
}
+ if (vc->tb_offset) {
+ u64 new_tb = tb + vc->tb_offset;
+ mtspr(SPRN_TBU40, new_tb);
+ tb = mftb();
+ if ((tb & 0xffffff) < (new_tb & 0xffffff))
+ mtspr(SPRN_TBU40, new_tb + 0x1000000);
+ vc->tb_offset_applied = vc->tb_offset;
+ }
+
if (vc->pcr)
mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
mtspr(SPRN_DPDES, vc->dpdes);
@@ -469,6 +469,15 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
tb = mftb();
vcpu->arch.dec_expires = dec + tb;
+ if (vc->tb_offset_applied) {
+ u64 new_tb = tb - vc->tb_offset_applied;
+ mtspr(SPRN_TBU40, new_tb);
+ tb = mftb();
+ if ((tb & 0xffffff) < (new_tb & 0xffffff))
+ mtspr(SPRN_TBU40, new_tb + 0x1000000);
+ vc->tb_offset_applied = 0;
+ }
+
/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
mtspr(SPRN_PSSCR, host_psscr |
(local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
@@ -503,15 +512,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
if (vc->pcr)
mtspr(SPRN_PCR, PCR_MASK);
- if (vc->tb_offset_applied) {
- u64 new_tb = mftb() - vc->tb_offset_applied;
- mtspr(SPRN_TBU40, new_tb);
- tb = mftb();
- if ((tb & 0xffffff) < (new_tb & 0xffffff))
- mtspr(SPRN_TBU40, new_tb + 0x1000000);
- vc->tb_offset_applied = 0;
- }
-
/* HDEC must be at least as large as DEC, so decrementer_max fits */
mtspr(SPRN_HDEC, decrementer_max);
--
2.23.0
* [PATCH v1 28/55] KVM: PPC: Book3S HV P9: Optimise timebase reads
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (26 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 27/55] KVM: PPC: Book3S HV P9: Move TB updates Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 29/55] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Nicholas Piggin
` (26 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Reduce the number of mfTB instructions executed by passing the current
timebase around the entry and exit code rather than reading it multiple
times.
-213 cycles (7578) POWER9 virt-mode NULL hcall
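The pattern, sketched (the real call sites are in the hunks below):
the caller reads the timebase once and passes a pointer down, and
callees update it only at points where the timebase genuinely moves on:

        u64 tb = mftb();
        /* ... irqs off, vcore accounting, DTL entry ... */
        trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr, &tb);
        /*
         * Callees do *tb = mftb() only after a tb_offset is applied or
         * after returning from the guest, so everyone else reuses tb.
         */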
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +-
arch/powerpc/kvm/book3s_hv.c | 88 +++++++++++++-----------
arch/powerpc/kvm/book3s_hv_p9_entry.c | 33 +++++----
3 files changed, 65 insertions(+), 58 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index df6bed4b2a46..52e2b7a352c7 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -154,7 +154,7 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
return radix;
}
-int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr);
+int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb);
#define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */
#endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 027ae0b60e70..fa44bbca75e4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -276,22 +276,22 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
* they should never fail.)
*/
-static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc)
+static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb)
{
unsigned long flags;
spin_lock_irqsave(&vc->stoltb_lock, flags);
- vc->preempt_tb = mftb();
+ vc->preempt_tb = tb;
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
}
-static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc)
+static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb)
{
unsigned long flags;
spin_lock_irqsave(&vc->stoltb_lock, flags);
if (vc->preempt_tb != TB_NIL) {
- vc->stolen_tb += mftb() - vc->preempt_tb;
+ vc->stolen_tb += tb - vc->preempt_tb;
vc->preempt_tb = TB_NIL;
}
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
@@ -301,6 +301,7 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
unsigned long flags;
+ u64 now = mftb();
/*
* We can test vc->runner without taking the vcore lock,
@@ -309,12 +310,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)
* ever sets it to NULL.
*/
if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING)
- kvmppc_core_end_stolen(vc);
+ kvmppc_core_end_stolen(vc, now);
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST &&
vcpu->arch.busy_preempt != TB_NIL) {
- vcpu->arch.busy_stolen += mftb() - vcpu->arch.busy_preempt;
+ vcpu->arch.busy_stolen += now - vcpu->arch.busy_preempt;
vcpu->arch.busy_preempt = TB_NIL;
}
spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
@@ -324,13 +325,14 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
unsigned long flags;
+ u64 now = mftb();
if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING)
- kvmppc_core_start_stolen(vc);
+ kvmppc_core_start_stolen(vc, now);
spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
if (vcpu->arch.state == KVMPPC_VCPU_BUSY_IN_HOST)
- vcpu->arch.busy_preempt = mftb();
+ vcpu->arch.busy_preempt = now;
spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
}
@@ -685,7 +687,7 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now)
}
static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
- struct kvmppc_vcore *vc)
+ struct kvmppc_vcore *vc, u64 tb)
{
struct dtl_entry *dt;
struct lppaca *vpa;
@@ -696,7 +698,7 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
dt = vcpu->arch.dtl_ptr;
vpa = vcpu->arch.vpa.pinned_addr;
- now = mftb();
+ now = tb;
core_stolen = vcore_stolen_time(vc, now);
stolen = core_stolen - vcpu->arch.stolen_logged;
vcpu->arch.stolen_logged = core_stolen;
@@ -2889,14 +2891,14 @@ static void kvmppc_set_timer(struct kvm_vcpu *vcpu)
extern int __kvmppc_vcore_entry(void);
static void kvmppc_remove_runnable(struct kvmppc_vcore *vc,
- struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *vcpu, u64 tb)
{
u64 now;
if (vcpu->arch.state != KVMPPC_VCPU_RUNNABLE)
return;
spin_lock_irq(&vcpu->arch.tbacct_lock);
- now = mftb();
+ now = tb;
vcpu->arch.busy_stolen += vcore_stolen_time(vc, now) -
vcpu->arch.stolen_logged;
vcpu->arch.busy_preempt = now;
@@ -3147,14 +3149,14 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc)
}
/* Start accumulating stolen time */
- kvmppc_core_start_stolen(vc);
+ kvmppc_core_start_stolen(vc, mftb());
}
static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc)
{
struct preempted_vcore_list *lp;
- kvmppc_core_end_stolen(vc);
+ kvmppc_core_end_stolen(vc, mftb());
if (!list_empty(&vc->preempt_list)) {
lp = &per_cpu(preempted_vcores, vc->pcpu);
spin_lock(&lp->lock);
@@ -3281,7 +3283,7 @@ static void prepare_threads(struct kvmppc_vcore *vc)
vcpu->arch.ret = RESUME_GUEST;
else
continue;
- kvmppc_remove_runnable(vc, vcpu);
+ kvmppc_remove_runnable(vc, vcpu, mftb());
wake_up(&vcpu->arch.cpu_run);
}
}
@@ -3300,7 +3302,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
list_del_init(&pvc->preempt_list);
if (pvc->runner == NULL) {
pvc->vcore_state = VCORE_INACTIVE;
- kvmppc_core_end_stolen(pvc);
+ kvmppc_core_end_stolen(pvc, mftb());
}
spin_unlock(&pvc->lock);
continue;
@@ -3309,7 +3311,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
spin_unlock(&pvc->lock);
continue;
}
- kvmppc_core_end_stolen(pvc);
+ kvmppc_core_end_stolen(pvc, mftb());
pvc->vcore_state = VCORE_PIGGYBACK;
if (cip->total_threads >= target_threads)
break;
@@ -3376,7 +3378,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
else
++still_running;
} else {
- kvmppc_remove_runnable(vc, vcpu);
+ kvmppc_remove_runnable(vc, vcpu, mftb());
wake_up(&vcpu->arch.cpu_run);
}
}
@@ -3385,7 +3387,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
kvmppc_vcore_preempt(vc);
} else if (vc->runner) {
vc->vcore_state = VCORE_PREEMPT;
- kvmppc_core_start_stolen(vc);
+ kvmppc_core_start_stolen(vc, mftb());
} else {
vc->vcore_state = VCORE_INACTIVE;
}
@@ -3516,7 +3518,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
((vc->num_threads > threads_per_subcore) || !on_primary_thread())) {
for_each_runnable_thread(i, vcpu, vc) {
vcpu->arch.ret = -EBUSY;
- kvmppc_remove_runnable(vc, vcpu);
+ kvmppc_remove_runnable(vc, vcpu, mftb());
wake_up(&vcpu->arch.cpu_run);
}
goto out;
@@ -3648,7 +3650,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
pvc->pcpu = pcpu + thr;
for_each_runnable_thread(i, vcpu, pvc) {
kvmppc_start_thread(vcpu, pvc);
- kvmppc_create_dtl_entry(vcpu, pvc);
+ kvmppc_create_dtl_entry(vcpu, pvc, mftb());
trace_kvm_guest_enter(vcpu);
if (!vcpu->arch.ptid)
thr0_done = true;
@@ -4102,20 +4104,17 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu)
* Guest entry for POWER9 and later CPUs.
*/
static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
- unsigned long lpcr)
+ unsigned long lpcr, u64 *tb)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
struct p9_host_os_sprs host_os_sprs;
s64 dec;
- u64 tb, next_timer;
+ u64 next_timer;
unsigned long msr;
int trap;
- WARN_ON_ONCE(vcpu->arch.ceded);
-
- tb = mftb();
next_timer = timer_get_next_tb();
- if (tb >= next_timer)
+ if (*tb >= next_timer)
return BOOK3S_INTERRUPT_HV_DECREMENTER;
if (next_timer < time_limit)
time_limit = next_timer;
@@ -4212,7 +4211,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
*
* XXX: Another day's problem.
*/
- mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - tb);
+ mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb);
mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
@@ -4228,8 +4227,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
- tb = mftb();
- vcpu->arch.dec_expires = dec + (tb + vc->tb_offset);
+ *tb = mftb();
+ vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset);
/* H_CEDE has to be handled now, not later */
if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
@@ -4241,7 +4240,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
} else {
kvmppc_xive_push_vcpu(vcpu);
- trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr);
+ trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
!(vcpu->arch.shregs.msr & MSR_PR)) {
unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -4272,6 +4271,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
store_spr_state(vcpu);
+ timer_rearm_host_dec(*tb);
+
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
store_fp_state(&vcpu->arch.fp);
@@ -4291,8 +4292,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
- timer_rearm_host_dec(tb);
-
kvmppc_subcore_exit_guest();
return trap;
@@ -4534,7 +4533,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
if ((vc->vcore_state == VCORE_PIGGYBACK ||
vc->vcore_state == VCORE_RUNNING) &&
!VCORE_IS_EXITING(vc)) {
- kvmppc_create_dtl_entry(vcpu, vc);
+ kvmppc_create_dtl_entry(vcpu, vc, mftb());
kvmppc_start_thread(vcpu, vc);
trace_kvm_guest_enter(vcpu);
} else if (vc->vcore_state == VCORE_SLEEPING) {
@@ -4569,7 +4568,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
for_each_runnable_thread(i, v, vc) {
kvmppc_core_prepare_to_enter(v);
if (signal_pending(v->arch.run_task)) {
- kvmppc_remove_runnable(vc, v);
+ kvmppc_remove_runnable(vc, v, mftb());
v->stat.signal_exits++;
v->run->exit_reason = KVM_EXIT_INTR;
v->arch.ret = -EINTR;
@@ -4610,7 +4609,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
kvmppc_vcore_end_preempt(vc);
if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) {
- kvmppc_remove_runnable(vc, vcpu);
+ kvmppc_remove_runnable(vc, vcpu, mftb());
vcpu->stat.signal_exits++;
run->exit_reason = KVM_EXIT_INTR;
vcpu->arch.ret = -EINTR;
@@ -4638,6 +4637,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
struct kvm *kvm = vcpu->kvm;
struct kvm_nested_guest *nested = vcpu->arch.nested;
unsigned long flags;
+ u64 tb;
trace_kvmppc_run_vcpu_enter(vcpu);
@@ -4648,7 +4648,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
vc = vcpu->arch.vcore;
vcpu->arch.ceded = 0;
vcpu->arch.run_task = current;
- vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
vcpu->arch.busy_preempt = TB_NIL;
vcpu->arch.last_inst = KVM_INST_FETCH_FAILED;
@@ -4673,7 +4672,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
kvmppc_update_vpas(vcpu);
init_vcore_to_run(vc);
- vc->preempt_tb = TB_NIL;
preempt_disable();
pcpu = smp_processor_id();
@@ -4683,6 +4681,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
/* flags save not required, but irq_pmu has no disable/enable API */
powerpc_local_irq_pmu_save(flags);
+
if (signal_pending(current))
goto sigpend;
if (need_resched() || !kvm->arch.mmu_ready)
@@ -4705,12 +4704,17 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
goto out;
}
+ tb = mftb();
+
+ vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb);
+ vc->preempt_tb = TB_NIL;
+
kvmppc_clear_host_core(pcpu);
local_paca->kvm_hstate.napping = 0;
local_paca->kvm_hstate.kvm_split_mode = NULL;
kvmppc_start_thread(vcpu, vc);
- kvmppc_create_dtl_entry(vcpu, vc);
+ kvmppc_create_dtl_entry(vcpu, vc, tb);
trace_kvm_guest_enter(vcpu);
vc->vcore_state = VCORE_RUNNING;
@@ -4725,7 +4729,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
/* Tell lockdep that we're about to enable interrupts */
trace_hardirqs_on();
- trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr);
+ trap = kvmhv_p9_guest_entry(vcpu, time_limit, lpcr, &tb);
vcpu->arch.trap = trap;
trace_hardirqs_off();
@@ -4754,7 +4758,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
* by L2 and the L1 decrementer is provided in hdec_expires
*/
if (kvmppc_core_pending_dec(vcpu) &&
- ((get_tb() < kvmppc_dec_expires_host_tb(vcpu)) ||
+ ((tb < kvmppc_dec_expires_host_tb(vcpu)) ||
(trap == BOOK3S_INTERRUPT_SYSCALL &&
kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED)))
kvmppc_core_dequeue_dec(vcpu);
@@ -4790,7 +4794,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
trace_kvmppc_run_core(vc, 1);
done:
- kvmppc_remove_runnable(vc, vcpu);
+ kvmppc_remove_runnable(vc, vcpu, tb);
trace_kvmppc_run_vcpu_exit(vcpu);
return vcpu->arch.ret;
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index e7793bb806eb..2bd96d8256d1 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -183,13 +183,13 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
}
}
-int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr)
+int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
{
struct kvm *kvm = vcpu->kvm;
struct kvm_nested_guest *nested = vcpu->arch.nested;
struct kvmppc_vcore *vc = vcpu->arch.vcore;
s64 hdec, dec;
- u64 tb, purr, spurr;
+ u64 purr, spurr;
u64 *exsave;
bool ri_set;
int trap;
@@ -203,8 +203,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
unsigned long host_dawr1;
unsigned long host_dawrx1;
- tb = mftb();
- hdec = time_limit - tb;
+ hdec = time_limit - *tb;
if (hdec < 0)
return BOOK3S_INTERRUPT_HV_DECREMENTER;
@@ -230,11 +229,13 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
}
if (vc->tb_offset) {
- u64 new_tb = tb + vc->tb_offset;
+ u64 new_tb = *tb + vc->tb_offset;
mtspr(SPRN_TBU40, new_tb);
- tb = mftb();
- if ((tb & 0xffffff) < (new_tb & 0xffffff))
- mtspr(SPRN_TBU40, new_tb + 0x1000000);
+ if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
+ new_tb += 0x1000000;
+ mtspr(SPRN_TBU40, new_tb);
+ }
+ *tb = new_tb;
vc->tb_offset_applied = vc->tb_offset;
}
@@ -317,7 +318,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
*/
mtspr(SPRN_HDEC, hdec);
- mtspr(SPRN_DEC, vcpu->arch.dec_expires - tb);
+ mtspr(SPRN_DEC, vcpu->arch.dec_expires - *tb);
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
tm_return_to_guest:
@@ -466,15 +467,17 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
- tb = mftb();
- vcpu->arch.dec_expires = dec + tb;
+ *tb = mftb();
+ vcpu->arch.dec_expires = dec + *tb;
if (vc->tb_offset_applied) {
- u64 new_tb = tb - vc->tb_offset_applied;
+ u64 new_tb = *tb - vc->tb_offset_applied;
mtspr(SPRN_TBU40, new_tb);
- tb = mftb();
- if ((tb & 0xffffff) < (new_tb & 0xffffff))
- mtspr(SPRN_TBU40, new_tb + 0x1000000);
+ if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
+ new_tb += 0x1000000;
+ mtspr(SPRN_TBU40, new_tb);
+ }
+ *tb = new_tb;
vc->tb_offset_applied = 0;
}
--
2.23.0
* [PATCH v1 29/55] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (27 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 28/55] KVM: PPC: Book3S HV P9: Optimise timebase reads Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Nicholas Piggin
` (25 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Avoid interleaving mfSPR and mtSPR.
-151 cycles (7427) POWER9 virt-mode NULL hcall
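A simplified sketch of the change (the PURR case from the hunks below):
do the reads up front and defer the write-backs into a later group of
writes, instead of alternating mfSPR and mtSPR per register:

        /* before: read, then immediately write back -- interleaved */
        purr = mfspr(SPRN_PURR);
        mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr +
                         purr - vcpu->arch.purr);

        /* after: accumulate on the read side, write later with the
         * other mtSPRs */
        purr = mfspr(SPRN_PURR);
        local_paca->kvm_hstate.host_purr += purr - vcpu->arch.purr;
        /* ... other mfSPRs ... */
        mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr);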
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 8 ++++----
arch/powerpc/kvm/book3s_hv_p9_entry.c | 19 +++++++++++--------
2 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index fa44bbca75e4..0d97138e6fa4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4271,10 +4271,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
store_spr_state(vcpu);
- timer_rearm_host_dec(*tb);
-
- restore_p9_host_os_sprs(vcpu, &host_os_sprs);
-
store_fp_state(&vcpu->arch.fp);
#ifdef CONFIG_ALTIVEC
store_vr_state(&vcpu->arch.vr);
@@ -4289,6 +4285,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
switch_pmu_to_host(vcpu, &host_os_sprs);
+ timer_rearm_host_dec(*tb);
+
+ restore_p9_host_os_sprs(vcpu, &host_os_sprs);
+
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 2bd96d8256d1..bd0021cd3a67 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -228,6 +228,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
host_dawrx1 = mfspr(SPRN_DAWRX1);
}
+ local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
+ local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
+
if (vc->tb_offset) {
u64 new_tb = *tb + vc->tb_offset;
mtspr(SPRN_TBU40, new_tb);
@@ -244,8 +247,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_DPDES, vc->dpdes);
mtspr(SPRN_VTB, vc->vtb);
- local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
- local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
mtspr(SPRN_PURR, vcpu->arch.purr);
mtspr(SPRN_SPURR, vcpu->arch.spurr);
@@ -448,10 +449,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
/* Advance host PURR/SPURR by the amount used by guest */
purr = mfspr(SPRN_PURR);
spurr = mfspr(SPRN_SPURR);
- mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr +
- purr - vcpu->arch.purr);
- mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr +
- spurr - vcpu->arch.spurr);
+ local_paca->kvm_hstate.host_purr += purr - vcpu->arch.purr;
+ local_paca->kvm_hstate.host_spurr += spurr - vcpu->arch.spurr;
vcpu->arch.purr = purr;
vcpu->arch.spurr = spurr;
@@ -464,6 +463,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
+ vc->dpdes = mfspr(SPRN_DPDES);
+ vc->vtb = mfspr(SPRN_VTB);
+
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
@@ -481,6 +483,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->tb_offset_applied = 0;
}
+ mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr);
+ mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr);
+
/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
mtspr(SPRN_PSSCR, host_psscr |
(local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
@@ -509,8 +514,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
if (cpu_has_feature(CPU_FTR_ARCH_31))
asm volatile(PPC_CP_ABORT);
- vc->dpdes = mfspr(SPRN_DPDES);
- vc->vtb = mfspr(SPRN_VTB);
mtspr(SPRN_DPDES, 0);
if (vc->pcr)
mtspr(SPRN_PCR, PCR_MASK);
--
2.23.0
* [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (28 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 29/55] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-08-06 20:45 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around Nicholas Piggin
` (24 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Keep better track of the current SPR values in places where they are
to be loaded with a new context, to reduce expensive mtSPR operations.
-73 cycles (7354) POWER9 virt-mode NULL hcall
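The recurring test-before-write pattern could be captured in a helper;
the macro below is hypothetical (the patch open-codes each comparison):

        /* Hypothetical helper, not in the patch: skip the expensive
         * mtSPR when the register already holds the wanted value. */
        #define mtspr_if_changed(spr, cur, want)        \
        do {                                            \
                if ((cur) != (want))                    \
                        mtspr((spr), (want));           \
        } while (0)

        mtspr_if_changed(SPRN_DSCR, host_os_sprs->dscr, vcpu->arch.dscr);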
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 64 ++++++++++++++++++++++--------------
1 file changed, 39 insertions(+), 25 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0d97138e6fa4..56429b53f4dc 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4009,19 +4009,28 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
}
}
-static void load_spr_state(struct kvm_vcpu *vcpu)
+static void load_spr_state(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
{
- mtspr(SPRN_DSCR, vcpu->arch.dscr);
- mtspr(SPRN_IAMR, vcpu->arch.iamr);
- mtspr(SPRN_PSPB, vcpu->arch.pspb);
- mtspr(SPRN_FSCR, vcpu->arch.fscr);
mtspr(SPRN_TAR, vcpu->arch.tar);
mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
mtspr(SPRN_BESCR, vcpu->arch.bescr);
- mtspr(SPRN_TIDR, vcpu->arch.tid);
- mtspr(SPRN_AMR, vcpu->arch.amr);
- mtspr(SPRN_UAMOR, vcpu->arch.uamor);
+
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ mtspr(SPRN_TIDR, vcpu->arch.tid);
+ if (host_os_sprs->iamr != vcpu->arch.iamr)
+ mtspr(SPRN_IAMR, vcpu->arch.iamr);
+ if (host_os_sprs->amr != vcpu->arch.amr)
+ mtspr(SPRN_AMR, vcpu->arch.amr);
+ if (vcpu->arch.uamor != 0)
+ mtspr(SPRN_UAMOR, vcpu->arch.uamor);
+ if (host_os_sprs->fscr != vcpu->arch.fscr)
+ mtspr(SPRN_FSCR, vcpu->arch.fscr);
+ if (host_os_sprs->dscr != vcpu->arch.dscr)
+ mtspr(SPRN_DSCR, vcpu->arch.dscr);
+ if (vcpu->arch.pspb != 0)
+ mtspr(SPRN_PSPB, vcpu->arch.pspb);
/*
* DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI]
@@ -4036,28 +4045,31 @@ static void load_spr_state(struct kvm_vcpu *vcpu)
static void store_spr_state(struct kvm_vcpu *vcpu)
{
- vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
-
- vcpu->arch.iamr = mfspr(SPRN_IAMR);
- vcpu->arch.pspb = mfspr(SPRN_PSPB);
- vcpu->arch.fscr = mfspr(SPRN_FSCR);
vcpu->arch.tar = mfspr(SPRN_TAR);
vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
vcpu->arch.bescr = mfspr(SPRN_BESCR);
- vcpu->arch.tid = mfspr(SPRN_TIDR);
+
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ vcpu->arch.tid = mfspr(SPRN_TIDR);
+ vcpu->arch.iamr = mfspr(SPRN_IAMR);
vcpu->arch.amr = mfspr(SPRN_AMR);
vcpu->arch.uamor = mfspr(SPRN_UAMOR);
+ vcpu->arch.fscr = mfspr(SPRN_FSCR);
vcpu->arch.dscr = mfspr(SPRN_DSCR);
+ vcpu->arch.pspb = mfspr(SPRN_PSPB);
+
+ vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
}
static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
{
- host_os_sprs->dscr = mfspr(SPRN_DSCR);
- host_os_sprs->tidr = mfspr(SPRN_TIDR);
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ host_os_sprs->tidr = mfspr(SPRN_TIDR);
host_os_sprs->iamr = mfspr(SPRN_IAMR);
host_os_sprs->amr = mfspr(SPRN_AMR);
host_os_sprs->fscr = mfspr(SPRN_FSCR);
+ host_os_sprs->dscr = mfspr(SPRN_DSCR);
}
/* vcpu guest regs must already be saved */
@@ -4066,18 +4078,20 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
{
mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
- mtspr(SPRN_PSPB, 0);
- mtspr(SPRN_UAMOR, 0);
-
- mtspr(SPRN_DSCR, host_os_sprs->dscr);
- mtspr(SPRN_TIDR, host_os_sprs->tidr);
- mtspr(SPRN_IAMR, host_os_sprs->iamr);
-
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ mtspr(SPRN_TIDR, host_os_sprs->tidr);
+ if (host_os_sprs->iamr != vcpu->arch.iamr)
+ mtspr(SPRN_IAMR, host_os_sprs->iamr);
+ if (vcpu->arch.uamor != 0)
+ mtspr(SPRN_UAMOR, 0);
if (host_os_sprs->amr != vcpu->arch.amr)
mtspr(SPRN_AMR, host_os_sprs->amr);
-
if (host_os_sprs->fscr != vcpu->arch.fscr)
mtspr(SPRN_FSCR, host_os_sprs->fscr);
+ if (host_os_sprs->dscr != vcpu->arch.dscr)
+ mtspr(SPRN_DSCR, host_os_sprs->dscr);
+ if (vcpu->arch.pspb != 0)
+ mtspr(SPRN_PSPB, 0);
/* Save guest CTRL register, set runlatch to 1 */
if (!(vcpu->arch.ctrl & 1))
@@ -4169,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
#endif
mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
- load_spr_state(vcpu);
+ load_spr_state(vcpu, &host_os_sprs);
if (kvmhv_on_pseries()) {
/*
--
2.23.0
* [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (29 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-08-06 20:46 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Nicholas Piggin
` (23 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This juggles SPR switching on the entry and exit sides to be more
symmetric, which makes the next refactoring patch possible with no
functional change.
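The resulting shape, in outline (an ordering sketch only, using the
function names from the hunks below):

        /* entry: */
        load_spr_state(vcpu, &host_os_sprs);    /* + FP/VR/TM restore */
        switch_pmu_to_guest(vcpu, &host_os_sprs);
        /* ... guest runs ... */
        /* exit, the mirror image: */
        switch_pmu_to_host(vcpu, &host_os_sprs);
        store_spr_state(vcpu);                  /* + FP/VR/TM save */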
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 56429b53f4dc..c2c72875fca9 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4175,7 +4175,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
msr = mfmsr(); /* TM restore can update msr */
}
- switch_pmu_to_guest(vcpu, &host_os_sprs);
+ load_spr_state(vcpu, &host_os_sprs);
load_fp_state(&vcpu->arch.fp);
#ifdef CONFIG_ALTIVEC
@@ -4183,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
#endif
mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
- load_spr_state(vcpu, &host_os_sprs);
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
if (kvmhv_on_pseries()) {
/*
@@ -4283,6 +4283,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.slb_max = 0;
}
+ switch_pmu_to_host(vcpu, &host_os_sprs);
+
store_spr_state(vcpu);
store_fp_state(&vcpu->arch.fp);
@@ -4297,8 +4299,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
- switch_pmu_to_host(vcpu, &host_os_sprs);
-
timer_rearm_host_dec(*tb);
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
--
2.23.0
* [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (30 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-08-06 20:49 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 33/55] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in Nicholas Piggin
` (22 subsequent siblings)
54 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This should make no functional difference, but it makes the caller
easier to read.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 65 +++++++++++++++++++++++-------------
1 file changed, 41 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index c2c72875fca9..45211458ac05 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4062,6 +4062,44 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
}
+/* Returns true if current MSR and/or guest MSR may have changed */
+static bool load_vcpu_state(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ bool ret = false;
+
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
+ kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ ret = true;
+ }
+
+ load_spr_state(vcpu, host_os_sprs);
+
+ load_fp_state(&vcpu->arch.fp);
+#ifdef CONFIG_ALTIVEC
+ load_vr_state(&vcpu->arch.vr);
+#endif
+ mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
+
+ return ret;
+}
+
+static void store_vcpu_state(struct kvm_vcpu *vcpu)
+{
+ store_spr_state(vcpu);
+
+ store_fp_state(&vcpu->arch.fp);
+#ifdef CONFIG_ALTIVEC
+ store_vr_state(&vcpu->arch.vr);
+#endif
+ vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
+
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+}
+
static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
{
if (!cpu_has_feature(CPU_FTR_ARCH_31))
@@ -4169,19 +4207,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
- kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
- msr = mfmsr(); /* TM restore can update msr */
- }
-
- load_spr_state(vcpu, &host_os_sprs);
-
- load_fp_state(&vcpu->arch.fp);
-#ifdef CONFIG_ALTIVEC
- load_vr_state(&vcpu->arch.vr);
-#endif
- mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
+ if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
+ msr = mfmsr(); /* MSR may have been updated */
switch_pmu_to_guest(vcpu, &host_os_sprs);
@@ -4285,17 +4312,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
switch_pmu_to_host(vcpu, &host_os_sprs);
- store_spr_state(vcpu);
-
- store_fp_state(&vcpu->arch.fp);
-#ifdef CONFIG_ALTIVEC
- store_vr_state(&vcpu->arch.vr);
-#endif
- vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
-
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
- kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ store_vcpu_state(vcpu);
vcpu_vpa_increment_dispatch(vcpu);
--
2.23.0
* [PATCH v1 33/55] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (31 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 34/55] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function Nicholas Piggin
` (21 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Move the P9 guest/host register switching functions to the built-in
P9 entry code, and export them for the nested path to use as well.
This allows more flexibility in scheduling these supervisor-privileged
SPR accesses with the HV-privileged and PR SPR accesses in the
low-level entry code.
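The new book3s_hv.h (created by this patch; its full contents are not
shown in this excerpt) would carry roughly the interface sketched
below. The exact prototypes are an assumption based on the functions
and the struct being moved:

        /* arch/powerpc/kvm/book3s_hv.h -- sketch, see diffstat below */
        struct p9_host_os_sprs {
                unsigned long dscr;
                unsigned long tidr;
                unsigned long iamr;
                unsigned long amr;
                unsigned long fscr;
                /* ... PMU state: pmc1-6, mmcr0-3, mmcra, siar/sdar, sier1-3 */
        };

        void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
                                 struct p9_host_os_sprs *host_os_sprs);
        void switch_pmu_to_host(struct kvm_vcpu *vcpu,
                                 struct p9_host_os_sprs *host_os_sprs);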
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 365 +-------------------------
arch/powerpc/kvm/book3s_hv.h | 39 +++
arch/powerpc/kvm/book3s_hv_p9_entry.c | 345 ++++++++++++++++++++++++
3 files changed, 385 insertions(+), 364 deletions(-)
create mode 100644 arch/powerpc/kvm/book3s_hv.h
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 45211458ac05..977712eb74e0 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -80,6 +80,7 @@
#include <asm/plpar_wrappers.h>
#include "book3s.h"
+#include "book3s_hv.h"
#define CREATE_TRACE_POINTS
#include "trace_hv.h"
@@ -3772,370 +3773,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
trace_kvmppc_run_core(vc, 1);
}
-/*
- * Privileged (non-hypervisor) host registers to save.
- */
-struct p9_host_os_sprs {
- unsigned long dscr;
- unsigned long tidr;
- unsigned long iamr;
- unsigned long amr;
- unsigned long fscr;
-
- unsigned int pmc1;
- unsigned int pmc2;
- unsigned int pmc3;
- unsigned int pmc4;
- unsigned int pmc5;
- unsigned int pmc6;
- unsigned long mmcr0;
- unsigned long mmcr1;
- unsigned long mmcr2;
- unsigned long mmcr3;
- unsigned long mmcra;
- unsigned long siar;
- unsigned long sier1;
- unsigned long sier2;
- unsigned long sier3;
- unsigned long sdar;
-};
-
-static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
-{
- if (!(mmcr0 & MMCR0_FC))
- goto do_freeze;
- if (mmcra & MMCRA_SAMPLE_ENABLE)
- goto do_freeze;
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- if (!(mmcr0 & MMCR0_PMCCEXT))
- goto do_freeze;
- if (!(mmcra & MMCRA_BHRB_DISABLE))
- goto do_freeze;
- }
- return;
-
-do_freeze:
- mmcr0 = MMCR0_FC;
- mmcra = 0;
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- mmcr0 |= MMCR0_PMCCEXT;
- mmcra = MMCRA_BHRB_DISABLE;
- }
-
- mtspr(SPRN_MMCR0, mmcr0);
- mtspr(SPRN_MMCRA, mmcra);
- isync();
-}
-
-static void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
- struct p9_host_os_sprs *host_os_sprs)
-{
- struct lppaca *lp;
- int load_pmu = 1;
-
- lp = vcpu->arch.vpa.pinned_addr;
- if (lp)
- load_pmu = lp->pmcregs_in_use;
-
- /* Save host */
- if (ppc_get_pmu_inuse()) {
- /*
- * It might be better to put PMU handling (at least for the
- * host) in the perf subsystem because it knows more about what
- * is being used.
- */
-
- /* POWER9, POWER10 do not implement HPMC or SPMC */
-
- host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0);
- host_os_sprs->mmcra = mfspr(SPRN_MMCRA);
-
- freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra);
-
- host_os_sprs->pmc1 = mfspr(SPRN_PMC1);
- host_os_sprs->pmc2 = mfspr(SPRN_PMC2);
- host_os_sprs->pmc3 = mfspr(SPRN_PMC3);
- host_os_sprs->pmc4 = mfspr(SPRN_PMC4);
- host_os_sprs->pmc5 = mfspr(SPRN_PMC5);
- host_os_sprs->pmc6 = mfspr(SPRN_PMC6);
- host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1);
- host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2);
- host_os_sprs->sdar = mfspr(SPRN_SDAR);
- host_os_sprs->siar = mfspr(SPRN_SIAR);
- host_os_sprs->sier1 = mfspr(SPRN_SIER);
-
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3);
- host_os_sprs->sier2 = mfspr(SPRN_SIER2);
- host_os_sprs->sier3 = mfspr(SPRN_SIER3);
- }
- }
-
-#ifdef CONFIG_PPC_PSERIES
- /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */
- if (kvmhv_on_pseries()) {
- barrier();
- get_lppaca()->pmcregs_in_use = load_pmu;
- barrier();
- }
-#endif
-
- /*
- * Load guest. If the VPA said the PMCs are not in use but the guest
- * tried to access them anyway, HFSCR[PM] will be set by the HFAC
- * fault so we can make forward progress.
- */
- if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) {
- mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
- mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
- mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
- mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
- mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
- mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
- mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
- mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
- mtspr(SPRN_SDAR, vcpu->arch.sdar);
- mtspr(SPRN_SIAR, vcpu->arch.siar);
- mtspr(SPRN_SIER, vcpu->arch.sier[0]);
-
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
- mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
- mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
- }
-
- /* Set MMCRA then MMCR0 last */
- mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
- mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
- /* No isync necessary because we're starting counters */
-
- if (!vcpu->arch.nested &&
- (vcpu->arch.hfscr_permitted & HFSCR_PM))
- vcpu->arch.hfscr |= HFSCR_PM;
- }
-}
-
-static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
- struct p9_host_os_sprs *host_os_sprs)
-{
- struct lppaca *lp;
- int save_pmu = 1;
-
- lp = vcpu->arch.vpa.pinned_addr;
- if (lp)
- save_pmu = lp->pmcregs_in_use;
-
- if (save_pmu) {
- vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0);
- vcpu->arch.mmcra = mfspr(SPRN_MMCRA);
-
- freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra);
-
- vcpu->arch.pmc[0] = mfspr(SPRN_PMC1);
- vcpu->arch.pmc[1] = mfspr(SPRN_PMC2);
- vcpu->arch.pmc[2] = mfspr(SPRN_PMC3);
- vcpu->arch.pmc[3] = mfspr(SPRN_PMC4);
- vcpu->arch.pmc[4] = mfspr(SPRN_PMC5);
- vcpu->arch.pmc[5] = mfspr(SPRN_PMC6);
- vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1);
- vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2);
- vcpu->arch.sdar = mfspr(SPRN_SDAR);
- vcpu->arch.siar = mfspr(SPRN_SIAR);
- vcpu->arch.sier[0] = mfspr(SPRN_SIER);
-
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3);
- vcpu->arch.sier[1] = mfspr(SPRN_SIER2);
- vcpu->arch.sier[2] = mfspr(SPRN_SIER3);
- }
-
- } else if (vcpu->arch.hfscr & HFSCR_PM) {
- /*
- * The guest accessed PMC SPRs without specifying they should
- * be preserved, or it cleared pmcregs_in_use after the last
- * access. Just ensure they are frozen.
- */
- freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
-
- /*
- * Demand-fault PMU register access in the guest.
- *
- * This is used to grab the guest's VPA pmcregs_in_use value
- * and reflect it into the host's VPA in the case of a nested
- * hypervisor.
- *
- * It also avoids having to zero-out SPRs after each guest
- * exit to avoid side-channels.
- *
- * This is cleared here when we exit the guest, so later HFSCR
- * interrupt handling can add it back to run the guest with
- * PM enabled next time.
- */
- if (!vcpu->arch.nested)
- vcpu->arch.hfscr &= ~HFSCR_PM;
- } /* otherwise the PMU should still be frozen */
-
-#ifdef CONFIG_PPC_PSERIES
- if (kvmhv_on_pseries()) {
- barrier();
- get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
- barrier();
- }
-#endif
-
- if (ppc_get_pmu_inuse()) {
- mtspr(SPRN_PMC1, host_os_sprs->pmc1);
- mtspr(SPRN_PMC2, host_os_sprs->pmc2);
- mtspr(SPRN_PMC3, host_os_sprs->pmc3);
- mtspr(SPRN_PMC4, host_os_sprs->pmc4);
- mtspr(SPRN_PMC5, host_os_sprs->pmc5);
- mtspr(SPRN_PMC6, host_os_sprs->pmc6);
- mtspr(SPRN_MMCR1, host_os_sprs->mmcr1);
- mtspr(SPRN_MMCR2, host_os_sprs->mmcr2);
- mtspr(SPRN_SDAR, host_os_sprs->sdar);
- mtspr(SPRN_SIAR, host_os_sprs->siar);
- mtspr(SPRN_SIER, host_os_sprs->sier1);
-
- if (cpu_has_feature(CPU_FTR_ARCH_31)) {
- mtspr(SPRN_MMCR3, host_os_sprs->mmcr3);
- mtspr(SPRN_SIER2, host_os_sprs->sier2);
- mtspr(SPRN_SIER3, host_os_sprs->sier3);
- }
-
- /* Set MMCRA then MMCR0 last */
- mtspr(SPRN_MMCRA, host_os_sprs->mmcra);
- mtspr(SPRN_MMCR0, host_os_sprs->mmcr0);
- isync();
- }
-}
-
-static void load_spr_state(struct kvm_vcpu *vcpu,
- struct p9_host_os_sprs *host_os_sprs)
-{
- mtspr(SPRN_TAR, vcpu->arch.tar);
- mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
- mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
- mtspr(SPRN_BESCR, vcpu->arch.bescr);
-
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- mtspr(SPRN_TIDR, vcpu->arch.tid);
- if (host_os_sprs->iamr != vcpu->arch.iamr)
- mtspr(SPRN_IAMR, vcpu->arch.iamr);
- if (host_os_sprs->amr != vcpu->arch.amr)
- mtspr(SPRN_AMR, vcpu->arch.amr);
- if (vcpu->arch.uamor != 0)
- mtspr(SPRN_UAMOR, vcpu->arch.uamor);
- if (host_os_sprs->fscr != vcpu->arch.fscr)
- mtspr(SPRN_FSCR, vcpu->arch.fscr);
- if (host_os_sprs->dscr != vcpu->arch.dscr)
- mtspr(SPRN_DSCR, vcpu->arch.dscr);
- if (vcpu->arch.pspb != 0)
- mtspr(SPRN_PSPB, vcpu->arch.pspb);
-
- /*
- * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI]
- * clear (or hstate set appropriately to catch those registers
- * being clobbered if we take a MCE or SRESET), so those are done
- * later.
- */
-
- if (!(vcpu->arch.ctrl & 1))
- mtspr(SPRN_CTRLT, 0);
-}
-
-static void store_spr_state(struct kvm_vcpu *vcpu)
-{
- vcpu->arch.tar = mfspr(SPRN_TAR);
- vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
- vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
- vcpu->arch.bescr = mfspr(SPRN_BESCR);
-
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- vcpu->arch.tid = mfspr(SPRN_TIDR);
- vcpu->arch.iamr = mfspr(SPRN_IAMR);
- vcpu->arch.amr = mfspr(SPRN_AMR);
- vcpu->arch.uamor = mfspr(SPRN_UAMOR);
- vcpu->arch.fscr = mfspr(SPRN_FSCR);
- vcpu->arch.dscr = mfspr(SPRN_DSCR);
- vcpu->arch.pspb = mfspr(SPRN_PSPB);
-
- vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
-}
-
-/* Returns true if current MSR and/or guest MSR may have changed */
-static bool load_vcpu_state(struct kvm_vcpu *vcpu,
- struct p9_host_os_sprs *host_os_sprs)
-{
- bool ret = false;
-
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
- kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
- ret = true;
- }
-
- load_spr_state(vcpu, host_os_sprs);
-
- load_fp_state(&vcpu->arch.fp);
-#ifdef CONFIG_ALTIVEC
- load_vr_state(&vcpu->arch.vr);
-#endif
- mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
-
- return ret;
-}
-
-static void store_vcpu_state(struct kvm_vcpu *vcpu)
-{
- store_spr_state(vcpu);
-
- store_fp_state(&vcpu->arch.fp);
-#ifdef CONFIG_ALTIVEC
- store_vr_state(&vcpu->arch.vr);
-#endif
- vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
-
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
- kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
-}
-
-static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
-{
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- host_os_sprs->tidr = mfspr(SPRN_TIDR);
- host_os_sprs->iamr = mfspr(SPRN_IAMR);
- host_os_sprs->amr = mfspr(SPRN_AMR);
- host_os_sprs->fscr = mfspr(SPRN_FSCR);
- host_os_sprs->dscr = mfspr(SPRN_DSCR);
-}
-
-/* vcpu guest regs must already be saved */
-static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
- struct p9_host_os_sprs *host_os_sprs)
-{
- mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
-
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- mtspr(SPRN_TIDR, host_os_sprs->tidr);
- if (host_os_sprs->iamr != vcpu->arch.iamr)
- mtspr(SPRN_IAMR, host_os_sprs->iamr);
- if (vcpu->arch.uamor != 0)
- mtspr(SPRN_UAMOR, 0);
- if (host_os_sprs->amr != vcpu->arch.amr)
- mtspr(SPRN_AMR, host_os_sprs->amr);
- if (host_os_sprs->fscr != vcpu->arch.fscr)
- mtspr(SPRN_FSCR, host_os_sprs->fscr);
- if (host_os_sprs->dscr != vcpu->arch.dscr)
- mtspr(SPRN_DSCR, host_os_sprs->dscr);
- if (vcpu->arch.pspb != 0)
- mtspr(SPRN_PSPB, 0);
-
- /* Save guest CTRL register, set runlatch to 1 */
- if (!(vcpu->arch.ctrl & 1))
- mtspr(SPRN_CTRLT, 1);
-}
-
static inline bool hcall_is_xics(unsigned long req)
{
return req == H_EOI || req == H_CPPR || req == H_IPI ||
diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h
new file mode 100644
index 000000000000..a9065a380547
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv.h
@@ -0,0 +1,39 @@
+
+/*
+ * Privileged (non-hypervisor) host registers to save.
+ */
+struct p9_host_os_sprs {
+ unsigned long dscr;
+ unsigned long tidr;
+ unsigned long iamr;
+ unsigned long amr;
+ unsigned long fscr;
+
+ unsigned int pmc1;
+ unsigned int pmc2;
+ unsigned int pmc3;
+ unsigned int pmc4;
+ unsigned int pmc5;
+ unsigned int pmc6;
+ unsigned long mmcr0;
+ unsigned long mmcr1;
+ unsigned long mmcr2;
+ unsigned long mmcr3;
+ unsigned long mmcra;
+ unsigned long siar;
+ unsigned long sier1;
+ unsigned long sier2;
+ unsigned long sier3;
+ unsigned long sdar;
+};
+
+bool load_vcpu_state(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs);
+void store_vcpu_state(struct kvm_vcpu *vcpu);
+void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs);
+void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs);
+void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs);
+void switch_pmu_to_host(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index bd0021cd3a67..5a34f0199bfe 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -4,8 +4,353 @@
#include <asm/asm-prototypes.h>
#include <asm/dbell.h>
#include <asm/kvm_ppc.h>
+#include <asm/pmc.h>
#include <asm/ppc-opcode.h>
+#include "book3s_hv.h"
+
+static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
+{
+ if (!(mmcr0 & MMCR0_FC))
+ goto do_freeze;
+ if (mmcra & MMCRA_SAMPLE_ENABLE)
+ goto do_freeze;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ if (!(mmcr0 & MMCR0_PMCCEXT))
+ goto do_freeze;
+ if (!(mmcra & MMCRA_BHRB_DISABLE))
+ goto do_freeze;
+ }
+ return;
+
+do_freeze:
+ mmcr0 = MMCR0_FC;
+ mmcra = 0;
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mmcr0 |= MMCR0_PMCCEXT;
+ mmcra = MMCRA_BHRB_DISABLE;
+ }
+
+ mtspr(SPRN_MMCR0, mmcr0);
+ mtspr(SPRN_MMCRA, mmcra);
+ isync();
+}
+
+void switch_pmu_to_guest(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ struct lppaca *lp;
+ int load_pmu = 1;
+
+ lp = vcpu->arch.vpa.pinned_addr;
+ if (lp)
+ load_pmu = lp->pmcregs_in_use;
+
+ /* Save host */
+ if (ppc_get_pmu_inuse()) {
+ /*
+ * It might be better to put PMU handling (at least for the
+ * host) in the perf subsystem because it knows more about what
+ * is being used.
+ */
+
+ /* POWER9, POWER10 do not implement HPMC or SPMC */
+
+ host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0);
+ host_os_sprs->mmcra = mfspr(SPRN_MMCRA);
+
+ freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra);
+
+ host_os_sprs->pmc1 = mfspr(SPRN_PMC1);
+ host_os_sprs->pmc2 = mfspr(SPRN_PMC2);
+ host_os_sprs->pmc3 = mfspr(SPRN_PMC3);
+ host_os_sprs->pmc4 = mfspr(SPRN_PMC4);
+ host_os_sprs->pmc5 = mfspr(SPRN_PMC5);
+ host_os_sprs->pmc6 = mfspr(SPRN_PMC6);
+ host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1);
+ host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2);
+ host_os_sprs->sdar = mfspr(SPRN_SDAR);
+ host_os_sprs->siar = mfspr(SPRN_SIAR);
+ host_os_sprs->sier1 = mfspr(SPRN_SIER);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3);
+ host_os_sprs->sier2 = mfspr(SPRN_SIER2);
+ host_os_sprs->sier3 = mfspr(SPRN_SIER3);
+ }
+ }
+
+#ifdef CONFIG_PPC_PSERIES
+ /* After saving PMU, before loading guest PMU, flip pmcregs_in_use */
+ if (kvmhv_on_pseries()) {
+ barrier();
+ get_lppaca()->pmcregs_in_use = load_pmu;
+ barrier();
+ }
+#endif
+
+ /*
+ * Load guest. If the VPA said the PMCs are not in use but the guest
+ * tried to access them anyway, HFSCR[PM] will be set by the HFAC
+ * fault so we can make forward progress.
+ */
+ if (load_pmu || (vcpu->arch.hfscr & HFSCR_PM)) {
+ mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
+ mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
+ mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
+ mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
+ mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
+ mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
+ mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
+ mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
+ mtspr(SPRN_SDAR, vcpu->arch.sdar);
+ mtspr(SPRN_SIAR, vcpu->arch.siar);
+ mtspr(SPRN_SIER, vcpu->arch.sier[0]);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
+ mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
+ mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
+ }
+
+ /* Set MMCRA then MMCR0 last */
+ mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
+ mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
+ /* No isync necessary because we're starting counters */
+
+ if (!vcpu->arch.nested &&
+ (vcpu->arch.hfscr_permitted & HFSCR_PM))
+ vcpu->arch.hfscr |= HFSCR_PM;
+ }
+}
+EXPORT_SYMBOL_GPL(switch_pmu_to_guest);
+
+void switch_pmu_to_host(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ struct lppaca *lp;
+ int save_pmu = 1;
+
+ lp = vcpu->arch.vpa.pinned_addr;
+ if (lp)
+ save_pmu = lp->pmcregs_in_use;
+
+ if (save_pmu) {
+ vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0);
+ vcpu->arch.mmcra = mfspr(SPRN_MMCRA);
+
+ freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra);
+
+ vcpu->arch.pmc[0] = mfspr(SPRN_PMC1);
+ vcpu->arch.pmc[1] = mfspr(SPRN_PMC2);
+ vcpu->arch.pmc[2] = mfspr(SPRN_PMC3);
+ vcpu->arch.pmc[3] = mfspr(SPRN_PMC4);
+ vcpu->arch.pmc[4] = mfspr(SPRN_PMC5);
+ vcpu->arch.pmc[5] = mfspr(SPRN_PMC6);
+ vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1);
+ vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2);
+ vcpu->arch.sdar = mfspr(SPRN_SDAR);
+ vcpu->arch.siar = mfspr(SPRN_SIAR);
+ vcpu->arch.sier[0] = mfspr(SPRN_SIER);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3);
+ vcpu->arch.sier[1] = mfspr(SPRN_SIER2);
+ vcpu->arch.sier[2] = mfspr(SPRN_SIER3);
+ }
+
+ } else if (vcpu->arch.hfscr & HFSCR_PM) {
+ /*
+ * The guest accessed PMC SPRs without specifying they should
+ * be preserved, or it cleared pmcregs_in_use after the last
+ * access. Just ensure they are frozen.
+ */
+ freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
+
+ /*
+ * Demand-fault PMU register access in the guest.
+ *
+ * This is used to grab the guest's VPA pmcregs_in_use value
+ * and reflect it into the host's VPA in the case of a nested
+ * hypervisor.
+ *
+ * It also avoids having to zero-out SPRs after each guest
+ * exit to avoid side-channels.
+ *
+ * This is cleared here when we exit the guest, so later HFSCR
+ * interrupt handling can add it back to run the guest with
+ * PM enabled next time.
+ */
+ if (!vcpu->arch.nested)
+ vcpu->arch.hfscr &= ~HFSCR_PM;
+ } /* otherwise the PMU should still be frozen */
+
+#ifdef CONFIG_PPC_PSERIES
+ if (kvmhv_on_pseries()) {
+ barrier();
+ get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
+ barrier();
+ }
+#endif
+
+ if (ppc_get_pmu_inuse()) {
+ mtspr(SPRN_PMC1, host_os_sprs->pmc1);
+ mtspr(SPRN_PMC2, host_os_sprs->pmc2);
+ mtspr(SPRN_PMC3, host_os_sprs->pmc3);
+ mtspr(SPRN_PMC4, host_os_sprs->pmc4);
+ mtspr(SPRN_PMC5, host_os_sprs->pmc5);
+ mtspr(SPRN_PMC6, host_os_sprs->pmc6);
+ mtspr(SPRN_MMCR1, host_os_sprs->mmcr1);
+ mtspr(SPRN_MMCR2, host_os_sprs->mmcr2);
+ mtspr(SPRN_SDAR, host_os_sprs->sdar);
+ mtspr(SPRN_SIAR, host_os_sprs->siar);
+ mtspr(SPRN_SIER, host_os_sprs->sier1);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31)) {
+ mtspr(SPRN_MMCR3, host_os_sprs->mmcr3);
+ mtspr(SPRN_SIER2, host_os_sprs->sier2);
+ mtspr(SPRN_SIER3, host_os_sprs->sier3);
+ }
+
+ /* Set MMCRA then MMCR0 last */
+ mtspr(SPRN_MMCRA, host_os_sprs->mmcra);
+ mtspr(SPRN_MMCR0, host_os_sprs->mmcr0);
+ isync();
+ }
+}
+EXPORT_SYMBOL_GPL(switch_pmu_to_host);
+
+static void load_spr_state(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ mtspr(SPRN_TAR, vcpu->arch.tar);
+ mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
+ mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
+ mtspr(SPRN_BESCR, vcpu->arch.bescr);
+
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ mtspr(SPRN_TIDR, vcpu->arch.tid);
+ if (host_os_sprs->iamr != vcpu->arch.iamr)
+ mtspr(SPRN_IAMR, vcpu->arch.iamr);
+ if (host_os_sprs->amr != vcpu->arch.amr)
+ mtspr(SPRN_AMR, vcpu->arch.amr);
+ if (vcpu->arch.uamor != 0)
+ mtspr(SPRN_UAMOR, vcpu->arch.uamor);
+ if (host_os_sprs->fscr != vcpu->arch.fscr)
+ mtspr(SPRN_FSCR, vcpu->arch.fscr);
+ if (host_os_sprs->dscr != vcpu->arch.dscr)
+ mtspr(SPRN_DSCR, vcpu->arch.dscr);
+ if (vcpu->arch.pspb != 0)
+ mtspr(SPRN_PSPB, vcpu->arch.pspb);
+
+ /*
+ * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI]
+ * clear (or hstate set appropriately to catch those registers
+ * being clobbered if we take a MCE or SRESET), so those are done
+ * later.
+ */
+
+ if (!(vcpu->arch.ctrl & 1))
+ mtspr(SPRN_CTRLT, 0);
+}
+
+static void store_spr_state(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.tar = mfspr(SPRN_TAR);
+ vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
+ vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
+ vcpu->arch.bescr = mfspr(SPRN_BESCR);
+
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ vcpu->arch.tid = mfspr(SPRN_TIDR);
+ vcpu->arch.iamr = mfspr(SPRN_IAMR);
+ vcpu->arch.amr = mfspr(SPRN_AMR);
+ vcpu->arch.uamor = mfspr(SPRN_UAMOR);
+ vcpu->arch.fscr = mfspr(SPRN_FSCR);
+ vcpu->arch.dscr = mfspr(SPRN_DSCR);
+ vcpu->arch.pspb = mfspr(SPRN_PSPB);
+
+ vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
+}
+
+/* Returns true if current MSR and/or guest MSR may have changed */
+bool load_vcpu_state(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ bool ret = false;
+
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
+ kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ ret = true;
+ }
+
+ load_spr_state(vcpu, host_os_sprs);
+
+ load_fp_state(&vcpu->arch.fp);
+#ifdef CONFIG_ALTIVEC
+ load_vr_state(&vcpu->arch.vr);
+#endif
+ mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(load_vcpu_state);
+
+void store_vcpu_state(struct kvm_vcpu *vcpu)
+{
+ store_spr_state(vcpu);
+
+ store_fp_state(&vcpu->arch.fp);
+#ifdef CONFIG_ALTIVEC
+ store_vr_state(&vcpu->arch.vr);
+#endif
+ vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
+
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+}
+EXPORT_SYMBOL_GPL(store_vcpu_state);
+
+void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
+{
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ host_os_sprs->tidr = mfspr(SPRN_TIDR);
+ host_os_sprs->iamr = mfspr(SPRN_IAMR);
+ host_os_sprs->amr = mfspr(SPRN_AMR);
+ host_os_sprs->fscr = mfspr(SPRN_FSCR);
+ host_os_sprs->dscr = mfspr(SPRN_DSCR);
+}
+EXPORT_SYMBOL_GPL(save_p9_host_os_sprs);
+
+/* vcpu guest regs must already be saved */
+void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
+ struct p9_host_os_sprs *host_os_sprs)
+{
+ mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
+
+ if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ mtspr(SPRN_TIDR, host_os_sprs->tidr);
+ if (host_os_sprs->iamr != vcpu->arch.iamr)
+ mtspr(SPRN_IAMR, host_os_sprs->iamr);
+ if (vcpu->arch.uamor != 0)
+ mtspr(SPRN_UAMOR, 0);
+ if (host_os_sprs->amr != vcpu->arch.amr)
+ mtspr(SPRN_AMR, host_os_sprs->amr);
+ if (host_os_sprs->fscr != vcpu->arch.fscr)
+ mtspr(SPRN_FSCR, host_os_sprs->fscr);
+ if (host_os_sprs->dscr != vcpu->arch.dscr)
+ mtspr(SPRN_DSCR, host_os_sprs->dscr);
+ if (vcpu->arch.pspb != 0)
+ mtspr(SPRN_PSPB, 0);
+
+ /* Save guest CTRL register, set runlatch to 1 */
+ if (!(vcpu->arch.ctrl & 1))
+ mtspr(SPRN_CTRLT, 1);
+}
+EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs);
+
#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next)
{
--
2.23.0
* [PATCH v1 34/55] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This is just refactoring.
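[Editorial illustration, not part of the patch: the refactoring leaves
kvmhv_p9_guest_entry() with roughly the following shape, with the
H_ENTER_NESTED hcall path split into its own function.]

	if (kvmhv_on_pseries()) {
		/* L1 guest kernel: ask the L0 hypervisor to run the vcpu */
		trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
	} else {
		/* bare-metal HV: direct low-level entry */
		trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
	}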
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 125 +++++++++++++++++++----------------
1 file changed, 67 insertions(+), 58 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 977712eb74e0..cb66c9534dbf 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3789,6 +3789,72 @@ static void vcpu_vpa_increment_dispatch(struct kvm_vcpu *vcpu)
}
}
+/* call our hypervisor to load up HV regs and go */
+static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
+{
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ unsigned long host_psscr;
+ struct hv_guest_state hvregs;
+ int trap;
+ s64 dec;
+
+ /*
+ * We need to save and restore the guest visible part of the
+ * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor
+ * doesn't do this for us. Note only required if pseries since
+ * this is done in kvmhv_vcpu_entry_p9() below otherwise.
+ */
+ host_psscr = mfspr(SPRN_PSSCR_PR);
+ mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
+ kvmhv_save_hv_regs(vcpu, &hvregs);
+ hvregs.lpcr = lpcr;
+ vcpu->arch.regs.msr = vcpu->arch.shregs.msr;
+ hvregs.version = HV_GUEST_STATE_VERSION;
+ if (vcpu->arch.nested) {
+ hvregs.lpid = vcpu->arch.nested->shadow_lpid;
+ hvregs.vcpu_token = vcpu->arch.nested_vcpu_id;
+ } else {
+ hvregs.lpid = vcpu->kvm->arch.lpid;
+ hvregs.vcpu_token = vcpu->vcpu_id;
+ }
+ hvregs.hdec_expiry = time_limit;
+
+ /*
+ * When setting DEC, we must always deal with irq_work_raise
+ * via NMI vs setting DEC. The problem occurs right as we
+ * switch into guest mode if a NMI hits and sets pending work
+ * and sets DEC, then that will apply to the guest and not
+ * bring us back to the host.
+ *
+ * irq_work_raise could check a flag (or possibly LPCR[HDICE]
+ * for example) and set HDEC to 1? That wouldn't solve the
+ * nested hv case which needs to abort the hcall or zero the
+ * time limit.
+ *
+ * XXX: Another day's problem.
+ */
+ mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb);
+
+ mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
+ mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
+ trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs),
+ __pa(&vcpu->arch.regs));
+ kvmhv_restore_hv_return_state(vcpu, &hvregs);
+ vcpu->arch.shregs.msr = vcpu->arch.regs.msr;
+ vcpu->arch.shregs.dar = mfspr(SPRN_DAR);
+ vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR);
+ vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
+ mtspr(SPRN_PSSCR_PR, host_psscr);
+
+ dec = mfspr(SPRN_DEC);
+ if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
+ dec = (s32) dec;
+ *tb = mftb();
+ vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset);
+
+ return trap;
+}
+
/*
* Guest entry for POWER9 and later CPUs.
*/
@@ -3797,7 +3863,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
struct p9_host_os_sprs host_os_sprs;
- s64 dec;
u64 next_timer;
unsigned long msr;
int trap;
@@ -3850,63 +3915,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
switch_pmu_to_guest(vcpu, &host_os_sprs);
if (kvmhv_on_pseries()) {
- /*
- * We need to save and restore the guest visible part of the
- * psscr (i.e. using SPRN_PSSCR_PR) since the hypervisor
- * doesn't do this for us. Note only required if pseries since
- * this is done in kvmhv_vcpu_entry_p9() below otherwise.
- */
- unsigned long host_psscr;
- /* call our hypervisor to load up HV regs and go */
- struct hv_guest_state hvregs;
-
- host_psscr = mfspr(SPRN_PSSCR_PR);
- mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
- kvmhv_save_hv_regs(vcpu, &hvregs);
- hvregs.lpcr = lpcr;
- vcpu->arch.regs.msr = vcpu->arch.shregs.msr;
- hvregs.version = HV_GUEST_STATE_VERSION;
- if (vcpu->arch.nested) {
- hvregs.lpid = vcpu->arch.nested->shadow_lpid;
- hvregs.vcpu_token = vcpu->arch.nested_vcpu_id;
- } else {
- hvregs.lpid = vcpu->kvm->arch.lpid;
- hvregs.vcpu_token = vcpu->vcpu_id;
- }
- hvregs.hdec_expiry = time_limit;
-
- /*
- * When setting DEC, we must always deal with irq_work_raise
- * via NMI vs setting DEC. The problem occurs right as we
- * switch into guest mode if a NMI hits and sets pending work
- * and sets DEC, then that will apply to the guest and not
- * bring us back to the host.
- *
- * irq_work_raise could check a flag (or possibly LPCR[HDICE]
- * for example) and set HDEC to 1? That wouldn't solve the
- * nested hv case which needs to abort the hcall or zero the
- * time limit.
- *
- * XXX: Another day's problem.
- */
- mtspr(SPRN_DEC, kvmppc_dec_expires_host_tb(vcpu) - *tb);
-
- mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
- mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
- trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs),
- __pa(&vcpu->arch.regs));
- kvmhv_restore_hv_return_state(vcpu, &hvregs);
- vcpu->arch.shregs.msr = vcpu->arch.regs.msr;
- vcpu->arch.shregs.dar = mfspr(SPRN_DAR);
- vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR);
- vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
- mtspr(SPRN_PSSCR_PR, host_psscr);
-
- dec = mfspr(SPRN_DEC);
- if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
- dec = (s32) dec;
- *tb = mftb();
- vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset);
+ trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
/* H_CEDE has to be handled now, not later */
if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
--
2.23.0
* [PATCH v1 35/55] KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low level entry
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Move register saving and loading from kvmhv_p9_guest_entry() into the HV
and nested entry handlers.
Accesses are scheduled to reduce mtSPR / mfSPR interleaving which
reduces SPR scoreboard stalls.
XXX +212 cycles here somewhere (7566), investigate POWER9 virt-mode NULL hcall
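[Editorial aside -- an illustration of the scheduling point, not code
from the patch; the SPRs and struct names here are arbitrary. Grouping
the reads together and then the writes together lets the independent
mfSPRs issue back to back, instead of each access queueing behind the
one just ahead of it on the SPR scoreboard:]

	/* interleaved: reads and writes alternate, maximising stalls */
	host->tar  = mfspr(SPRN_TAR);
	mtspr(SPRN_TAR, vcpu->arch.tar);
	host->dscr = mfspr(SPRN_DSCR);
	mtspr(SPRN_DSCR, vcpu->arch.dscr);

	/* scheduled: all reads batched, then all writes batched */
	host->tar  = mfspr(SPRN_TAR);
	host->dscr = mfspr(SPRN_DSCR);
	mtspr(SPRN_TAR, vcpu->arch.tar);
	mtspr(SPRN_DSCR, vcpu->arch.dscr);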
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 79 ++++++++++------------
arch/powerpc/kvm/book3s_hv_p9_entry.c | 96 ++++++++++++++++++++-------
2 files changed, 109 insertions(+), 66 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index cb66c9534dbf..8c1c93ebd669 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3794,9 +3794,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
unsigned long host_psscr;
+ unsigned long msr;
struct hv_guest_state hvregs;
- int trap;
+ struct p9_host_os_sprs host_os_sprs;
s64 dec;
+ int trap;
+
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
+
+ save_p9_host_os_sprs(&host_os_sprs);
/*
* We need to save and restore the guest visible part of the
@@ -3805,6 +3811,27 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
* this is done in kvmhv_vcpu_entry_p9() below otherwise.
*/
host_psscr = mfspr(SPRN_PSSCR_PR);
+
+ hard_irq_disable();
+ if (lazy_irq_pending())
+ return 0;
+
+ /* MSR bits may have been cleared by context switch */
+ msr = 0;
+ if (IS_ENABLED(CONFIG_PPC_FPU))
+ msr |= MSR_FP;
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ msr |= MSR_VEC;
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr |= MSR_VSX;
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ msr |= MSR_TM;
+ msr = msr_check_and_set(msr);
+
+ if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
+ msr = mfmsr(); /* TM restore can update msr */
+
mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
kvmhv_save_hv_regs(vcpu, &hvregs);
hvregs.lpcr = lpcr;
@@ -3846,12 +3873,20 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
mtspr(SPRN_PSSCR_PR, host_psscr);
+ store_vcpu_state(vcpu);
+
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
*tb = mftb();
vcpu->arch.dec_expires = dec + (*tb + vc->tb_offset);
+ timer_rearm_host_dec(*tb);
+
+ restore_p9_host_os_sprs(vcpu, &host_os_sprs);
+
+ switch_pmu_to_host(vcpu, &host_os_sprs);
+
return trap;
}
@@ -3862,9 +3897,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
unsigned long lpcr, u64 *tb)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
- struct p9_host_os_sprs host_os_sprs;
u64 next_timer;
- unsigned long msr;
int trap;
next_timer = timer_get_next_tb();
@@ -3875,33 +3908,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.ceded = 0;
- save_p9_host_os_sprs(&host_os_sprs);
-
- /*
- * This could be combined with MSR[RI] clearing, but that expands
- * the unrecoverable window. It would be better to cover unrecoverable
- * with KVM bad interrupt handling rather than use MSR[RI] at all.
- *
- * Much more difficult and less worthwhile to combine with IR/DR
- * disable.
- */
- hard_irq_disable();
- if (lazy_irq_pending())
- return 0;
-
- /* MSR bits may have been cleared by context switch */
- msr = 0;
- if (IS_ENABLED(CONFIG_PPC_FPU))
- msr |= MSR_FP;
- if (cpu_has_feature(CPU_FTR_ALTIVEC))
- msr |= MSR_VEC;
- if (cpu_has_feature(CPU_FTR_VSX))
- msr |= MSR_VSX;
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
- msr |= MSR_TM;
- msr = msr_check_and_set(msr);
-
kvmppc_subcore_enter_guest();
vc->entry_exit_map = 1;
@@ -3909,11 +3915,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
- if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
- msr = mfmsr(); /* MSR may have been updated */
-
- switch_pmu_to_guest(vcpu, &host_os_sprs);
-
if (kvmhv_on_pseries()) {
trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
@@ -3956,16 +3957,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.slb_max = 0;
}
- switch_pmu_to_host(vcpu, &host_os_sprs);
-
- store_vcpu_state(vcpu);
-
vcpu_vpa_increment_dispatch(vcpu);
- timer_rearm_host_dec(*tb);
-
- restore_p9_host_os_sprs(vcpu, &host_os_sprs);
-
vc->entry_exit_map = 0x101;
vc->in_guest = 0;
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 5a34f0199bfe..ea531f76f116 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -530,6 +530,7 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
{
+ struct p9_host_os_sprs host_os_sprs;
struct kvm *kvm = vcpu->kvm;
struct kvm_nested_guest *nested = vcpu->arch.nested;
struct kvmppc_vcore *vc = vcpu->arch.vcore;
@@ -559,9 +560,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.ceded = 0;
- /* Could avoid mfmsr by passing around, but probably no big deal */
- msr = mfmsr();
-
host_hfscr = mfspr(SPRN_HFSCR);
host_ciabr = mfspr(SPRN_CIABR);
host_dawr0 = mfspr(SPRN_DAWR0);
@@ -576,6 +574,41 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
+
+ save_p9_host_os_sprs(&host_os_sprs);
+
+ /*
+ * This could be combined with MSR[RI] clearing, but that expands
+ * the unrecoverable window. It would be better to cover unrecoverable
+ * with KVM bad interrupt handling rather than use MSR[RI] at all.
+ *
+ * Much more difficult and less worthwhile to combine with IR/DR
+ * disable.
+ */
+ hard_irq_disable();
+ if (lazy_irq_pending()) {
+ trap = 0;
+ goto out;
+ }
+
+ /* MSR bits may have been cleared by context switch */
+ msr = 0;
+ if (IS_ENABLED(CONFIG_PPC_FPU))
+ msr |= MSR_FP;
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ msr |= MSR_VEC;
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr |= MSR_VSX;
+ if (cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ msr |= MSR_TM;
+ msr = msr_check_and_set(msr);
+ /* Save MSR for restore. This is after hard disable, so EE is clear. */
+
+ if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
+ msr = mfmsr(); /* MSR may have been updated */
+
if (vc->tb_offset) {
u64 new_tb = *tb + vc->tb_offset;
mtspr(SPRN_TBU40, new_tb);
@@ -634,6 +667,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_SPRG2, vcpu->arch.shregs.sprg2);
mtspr(SPRN_SPRG3, vcpu->arch.shregs.sprg3);
+ /*
+ * It might be preferable to load_vcpu_state here, in order to get the
+ * GPR/FP register loads executing in parallel with the previous mtSPR
+ * instructions, but for now that can't be done because the TM handling
+ * in load_vcpu_state can change some SPRs and vcpu state (nip, msr).
+ * But TM could be split out if this would be a significant benefit.
+ */
+
local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_HV_P9;
/*
@@ -811,6 +852,20 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->dpdes = mfspr(SPRN_DPDES);
vc->vtb = mfspr(SPRN_VTB);
+ save_clear_guest_mmu(kvm, vcpu);
+ switch_mmu_to_host(kvm, host_pidr);
+
+ /*
+ * If we are in real mode, only switch MMU on after the MMU is
+ * switched to host, to avoid the P9_RADIX_PREFETCH_BUG.
+ */
+ if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+ vcpu->arch.shregs.msr & MSR_TS_MASK)
+ msr |= MSR_TS_S;
+ __mtmsrd(msr, 0);
+
+ store_vcpu_state(vcpu);
+
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
@@ -843,6 +898,19 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_DAWRX1, host_dawrx1);
}
+ mtspr(SPRN_DPDES, 0);
+ if (vc->pcr)
+ mtspr(SPRN_PCR, PCR_MASK);
+
+ /* HDEC must be at least as large as DEC, so decrementer_max fits */
+ mtspr(SPRN_HDEC, decrementer_max);
+
+ timer_rearm_host_dec(*tb);
+
+ restore_p9_host_os_sprs(vcpu, &host_os_sprs);
+
+ local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE;
+
if (kvm_is_radix(kvm)) {
/*
* Since this is radix, do a eieio; tlbsync; ptesync sequence
@@ -859,26 +927,8 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
if (cpu_has_feature(CPU_FTR_ARCH_31))
asm volatile(PPC_CP_ABORT);
- mtspr(SPRN_DPDES, 0);
- if (vc->pcr)
- mtspr(SPRN_PCR, PCR_MASK);
-
- /* HDEC must be at least as large as DEC, so decrementer_max fits */
- mtspr(SPRN_HDEC, decrementer_max);
-
- save_clear_guest_mmu(kvm, vcpu);
- switch_mmu_to_host(kvm, host_pidr);
- local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE;
-
- /*
- * If we are in real mode, only switch MMU on after the MMU is
- * switched to host, to avoid the P9_RADIX_PREFETCH_BUG.
- */
- if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
- vcpu->arch.shregs.msr & MSR_TS_MASK)
- msr |= MSR_TS_S;
-
- __mtmsrd(msr, 0);
+out:
+ switch_pmu_to_host(vcpu, &host_os_sprs);
end_timing(vcpu);
--
2.23.0
* [PATCH v1 36/55] KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
If TM is not active, only the TM SPRs (TEXASR, TFHAR, TFIAR) need to be
saved and restored, avoiding the full checkpointed-state switch.
-348 cycles (7218) POWER9 virt-mode NULL hcall
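[Editorial reference -- the active/inactive distinction is a check of
the MSR transactional-state (TS) field. A minimal standalone sketch of
the test, mirroring what the kernel's MSR_TM_ACTIVE() macro checks; the
names and definitions below are illustrative, not the kernel's:]

	#include <stdbool.h>
	#include <stdint.h>

	#define MSR_TS_S	(1ULL << 33)	/* transaction suspended */
	#define MSR_TS_T	(1ULL << 34)	/* transaction running */

	bool msr_tm_active(uint64_t msr)
	{
		/* active = a transaction is either running or suspended */
		return (msr & (MSR_TS_S | MSR_TS_T)) != 0;
	}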
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index ea531f76f116..2e7498817b2e 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -281,8 +281,15 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu,
if (cpu_has_feature(CPU_FTR_TM) ||
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
- kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
- ret = true;
+ unsigned long guest_msr = vcpu->arch.shregs.msr;
+ if (MSR_TM_ACTIVE(guest_msr)) {
+ kvmppc_restore_tm_hv(vcpu, guest_msr, true);
+ ret = true;
+ } else {
+ mtspr(SPRN_TEXASR, vcpu->arch.texasr);
+ mtspr(SPRN_TFHAR, vcpu->arch.tfhar);
+ mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
+ }
}
load_spr_state(vcpu, host_os_sprs);
@@ -308,8 +315,16 @@ void store_vcpu_state(struct kvm_vcpu *vcpu)
vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
- kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
+ unsigned long guest_msr = vcpu->arch.shregs.msr;
+ if (MSR_TM_ACTIVE(guest_msr)) {
+ kvmppc_save_tm_hv(vcpu, guest_msr, true);
+ } else {
+ vcpu->arch.texasr = mfspr(SPRN_TEXASR);
+ vcpu->arch.tfhar = mfspr(SPRN_TFHAR);
+ vcpu->arch.tfiar = mfspr(SPRN_TFIAR);
+ }
+ }
}
EXPORT_SYMBOL_GPL(store_vcpu_state);
--
2.23.0
* [PATCH v1 37/55] KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This moves PMU switch to guest as late as possible in entry, and switch
back to host as early as possible at exit. This helps the host get the
most perf coverage of KVM entry/exit code as possible.
This is slightly suboptimal from an SPR scheduling point of view when the
PMU is enabled, but when perf is disabled there is no real difference.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 6 ++----
arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++----
2 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8c1c93ebd669..e7dfc33e2b38 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3800,8 +3800,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
s64 dec;
int trap;
- switch_pmu_to_guest(vcpu, &host_os_sprs);
-
save_p9_host_os_sprs(&host_os_sprs);
/*
@@ -3864,9 +3862,11 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
trap = plpar_hcall_norets(H_ENTER_NESTED, __pa(&hvregs),
__pa(&vcpu->arch.regs));
kvmhv_restore_hv_return_state(vcpu, &hvregs);
+ switch_pmu_to_host(vcpu, &host_os_sprs);
vcpu->arch.shregs.msr = vcpu->arch.regs.msr;
vcpu->arch.shregs.dar = mfspr(SPRN_DAR);
vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR);
@@ -3885,8 +3885,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
- switch_pmu_to_host(vcpu, &host_os_sprs);
-
return trap;
}
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 2e7498817b2e..737d4eaf74bc 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -589,8 +589,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
local_paca->kvm_hstate.host_spurr = mfspr(SPRN_SPURR);
- switch_pmu_to_guest(vcpu, &host_os_sprs);
-
save_p9_host_os_sprs(&host_os_sprs);
/*
@@ -732,7 +730,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
accumulate_time(vcpu, &vcpu->arch.guest_time);
+ switch_pmu_to_guest(vcpu, &host_os_sprs);
kvmppc_p9_enter_guest(vcpu);
+ switch_pmu_to_host(vcpu, &host_os_sprs);
accumulate_time(vcpu, &vcpu->arch.rm_intr);
@@ -943,8 +943,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
asm volatile(PPC_CP_ABORT);
out:
- switch_pmu_to_host(vcpu, &host_os_sprs);
-
end_timing(vcpu);
return trap;
--
2.23.0
* [PATCH v1 38/55] KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors that require it
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Use CPU_FTR_P9_RADIX_PREFETCH_BUG for this, to test for DD2.1 and below
processors.
-43 cycles (7178) POWER9 virt-mode NULL hcall
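[Editorial aside -- the workaround being restricted here is a sentinel
("canary") pattern. A generic sketch of the idea, detached from KVM and
with invented names: seed the register with a value the hardware would
never write, and if it is still there after the interrupt, the register
was not updated, so retry rather than act on a stale value.]

	#define CANARY	0x7fffUL	/* value hardware never writes */

	regs->hdsisr = CANARY;		/* seed before entering the guest */
	trap = enter_guest(regs);
	if (trap == TRAP_HDSI && regs->hdsisr == CANARY) {
		/* stale register: re-enter and let the fault occur again */
		return RESUME_GUEST;
	}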
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 3 ++-
arch/powerpc/kvm/book3s_hv_p9_entry.c | 6 ++++--
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e7dfc33e2b38..47ccea5ffba2 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1598,7 +1598,8 @@ XXX benchmark guest exits
unsigned long vsid;
long err;
- if (vcpu->arch.fault_dsisr == HDSISR_CANARY) {
+ if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG) &&
+ unlikely(vcpu->arch.fault_dsisr == HDSISR_CANARY)) {
r = RESUME_GUEST; /* Just retry if it's the canary */
break;
}
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 737d4eaf74bc..d83b5d4d02c1 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -671,9 +671,11 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
* HDSI which should correctly update the HDSISR the second time HDSI
* entry.
*
- * Just do this on all p9 processors for now.
+ * The "radix prefetch bug" feature test can be used to test for this
+ * bug, as it also exists for DD2.1 and below.
*/
- mtspr(SPRN_HDSISR, HDSISR_CANARY);
+ if (cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG))
+ mtspr(SPRN_HDSISR, HDSISR_CANARY);
mtspr(SPRN_SPRG0, vcpu->arch.shregs.sprg0);
mtspr(SPRN_SPRG1, vcpu->arch.shregs.sprg1);
--
2.23.0
* [PATCH v1 39/55] KVM: PPC: Book3S HV P9: More SPR speed improvements
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This avoids more scoreboard stalls and reduces mtSPRs.
-193 cycles (6985) POWER9 virt-mode NULL hcall
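[Editorial note -- the recurring pattern in this diff: the host value
was already read once at entry, so the compare is nearly free, while
the mtSPR it avoids is expensive. Distilled from the hunks below:]

	/* host_ciabr was saved by one mfspr at entry; skip the costly
	 * mtspr in the common case where guest and host values match */
	if (vcpu->arch.ciabr != host_ciabr)
		mtspr(SPRN_CIABR, vcpu->arch.ciabr);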
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 73 ++++++++++++++++-----------
1 file changed, 43 insertions(+), 30 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index d83b5d4d02c1..c4e93167d120 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -633,24 +633,29 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->tb_offset_applied = vc->tb_offset;
}
- if (vc->pcr)
- mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
- mtspr(SPRN_DPDES, vc->dpdes);
mtspr(SPRN_VTB, vc->vtb);
-
mtspr(SPRN_PURR, vcpu->arch.purr);
mtspr(SPRN_SPURR, vcpu->arch.spurr);
+ if (vc->pcr)
+ mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
+ if (vc->dpdes)
+ mtspr(SPRN_DPDES, vc->dpdes);
+
if (dawr_enabled()) {
- mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
- mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
+ if (vcpu->arch.dawr0 != host_dawr0)
+ mtspr(SPRN_DAWR0, vcpu->arch.dawr0);
+ if (vcpu->arch.dawrx0 != host_dawrx0)
+ mtspr(SPRN_DAWRX0, vcpu->arch.dawrx0);
if (cpu_has_feature(CPU_FTR_DAWR1)) {
- mtspr(SPRN_DAWR1, vcpu->arch.dawr1);
- mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1);
+ if (vcpu->arch.dawr1 != host_dawr1)
+ mtspr(SPRN_DAWR1, vcpu->arch.dawr1);
+ if (vcpu->arch.dawrx1 != host_dawrx1)
+ mtspr(SPRN_DAWRX1, vcpu->arch.dawrx1);
}
}
- mtspr(SPRN_CIABR, vcpu->arch.ciabr);
- mtspr(SPRN_IC, vcpu->arch.ic);
+ if (vcpu->arch.ciabr != host_ciabr)
+ mtspr(SPRN_CIABR, vcpu->arch.ciabr);
mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC |
(local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
@@ -869,20 +874,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->dpdes = mfspr(SPRN_DPDES);
vc->vtb = mfspr(SPRN_VTB);
- save_clear_guest_mmu(kvm, vcpu);
- switch_mmu_to_host(kvm, host_pidr);
-
- /*
- * If we are in real mode, only switch MMU on after the MMU is
- * switched to host, to avoid the P9_RADIX_PREFETCH_BUG.
- */
- if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
- vcpu->arch.shregs.msr & MSR_TS_MASK)
- msr |= MSR_TS_S;
- __mtmsrd(msr, 0);
-
- store_vcpu_state(vcpu);
-
dec = mfspr(SPRN_DEC);
if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
dec = (s32) dec;
@@ -900,6 +891,22 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vc->tb_offset_applied = 0;
}
+ save_clear_guest_mmu(kvm, vcpu);
+ switch_mmu_to_host(kvm, host_pidr);
+
+ /*
+ * Enable MSR bits here so the facilities needed to save guest
+ * register state are available. This also turns the MMU back on if
+ * we were in real mode, so it must only be done after the MMU
+ * context has been switched back to the host, to avoid the
+ * P9_RADIX_PREFETCH_BUG or translating via the hash guest context.
+ */
+ if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+ vcpu->arch.shregs.msr & MSR_TS_MASK)
+ msr |= MSR_TS_S;
+ __mtmsrd(msr, 0);
+
+ store_vcpu_state(vcpu);
+
mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr);
mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr);
@@ -907,15 +914,21 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_PSSCR, host_psscr |
(local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
mtspr(SPRN_HFSCR, host_hfscr);
- mtspr(SPRN_CIABR, host_ciabr);
- mtspr(SPRN_DAWR0, host_dawr0);
- mtspr(SPRN_DAWRX0, host_dawrx0);
+ if (vcpu->arch.ciabr != host_ciabr)
+ mtspr(SPRN_CIABR, host_ciabr);
+ if (vcpu->arch.dawr0 != host_dawr0)
+ mtspr(SPRN_DAWR0, host_dawr0);
+ if (vcpu->arch.dawrx0 != host_dawrx0)
+ mtspr(SPRN_DAWRX0, host_dawrx0);
if (cpu_has_feature(CPU_FTR_DAWR1)) {
- mtspr(SPRN_DAWR1, host_dawr1);
- mtspr(SPRN_DAWRX1, host_dawrx1);
+ if (vcpu->arch.dawr1 != host_dawr1)
+ mtspr(SPRN_DAWR1, host_dawr1);
+ if (vcpu->arch.dawrx1 != host_dawrx1)
+ mtspr(SPRN_DAWRX1, host_dawrx1);
}
- mtspr(SPRN_DPDES, 0);
+ if (vc->dpdes)
+ mtspr(SPRN_DPDES, 0);
if (vc->pcr)
mtspr(SPRN_PCR, PCR_MASK);
--
2.23.0
* [PATCH v1 40/55] KVM: PPC: Book3S HV P9: Demand fault EBB facility registers
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
Use HFSCR facility disabling to implement demand faulting for EBB, with
a hysteresis counter similar to the load_fp etc counters in context
switching that implement the equivalent demand faulting for userspace
facilities.
This speeds up guest entry/exit by avoiding the register save/restore
when a guest is not frequently using them. When a guest does use them
often, there will be some additional demand fault overhead, but these
are not commonly used facilities.
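[Editorial model -- a standalone sketch of the hysteresis scheme; the
type and field names are invented, the real state being HFSCR[EBB] and
the vcpu->arch.load_ebb counter in the diff below:]

	#include <stdbool.h>
	#include <stdint.h>

	struct facility {
		bool	enabled;	/* models HFSCR[EBB] */
		uint8_t	uses;		/* models vcpu->arch.load_ebb */
	};

	/* facility-unavailable interrupt: the guest touched an EBB SPR */
	void on_facility_fault(struct facility *f)
	{
		f->enabled = true;	/* resume save/restore on this vcpu */
	}

	/* guest exit taken while the facility is enabled */
	void on_guest_exit(struct facility *f)
	{
		/* after 256 exits the u8 wraps to zero and the facility is
		 * switched back off, so a guest that stopped using it soon
		 * stops paying the save/restore cost, while an active guest
		 * pays roughly one extra fault per 256 exits */
		if (++f->uses == 0)
			f->enabled = false;
	}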
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 16 +++++++++++++--
arch/powerpc/kvm/book3s_hv_p9_entry.c | 28 +++++++++++++++++++++------
3 files changed, 37 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index f105eaeb4521..1c00c4a565f5 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -580,6 +580,7 @@ struct kvm_vcpu_arch {
ulong cfar;
ulong ppr;
u32 pspb;
+ u8 load_ebb;
ulong fscr;
ulong shadow_fscr;
ulong ebbhr;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 47ccea5ffba2..dd8199a423cf 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1441,6 +1441,16 @@ static int kvmppc_pmu_unavailable(struct kvm_vcpu *vcpu)
return RESUME_GUEST;
}
+static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu)
+{
+ if (!(vcpu->arch.hfscr_permitted & HFSCR_EBB))
+ return EMULATE_FAIL;
+
+ vcpu->arch.hfscr |= HFSCR_EBB;
+
+ return RESUME_GUEST;
+}
+
static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
struct task_struct *tsk)
{
@@ -1735,6 +1745,8 @@ XXX benchmark guest exits
r = kvmppc_emulate_doorbell_instr(vcpu);
if (cause == FSCR_PM_LG)
r = kvmppc_pmu_unavailable(vcpu);
+ if (cause == FSCR_EBB_LG)
+ r = kvmppc_ebb_unavailable(vcpu);
}
if (r == EMULATE_FAIL) {
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
@@ -2751,9 +2763,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
vcpu->arch.hfscr_permitted = vcpu->arch.hfscr;
/*
- * PM is demand-faulted so start with it clear.
+ * PM and EBB are demand-faulted so start with them clear.
*/
- vcpu->arch.hfscr &= ~HFSCR_PM;
+ vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB);
kvmppc_mmu_book3s_hv_init(vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index c4e93167d120..f68a3d107d04 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -224,9 +224,12 @@ static void load_spr_state(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
mtspr(SPRN_TAR, vcpu->arch.tar);
- mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
- mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
- mtspr(SPRN_BESCR, vcpu->arch.bescr);
+
+ if (vcpu->arch.hfscr & HFSCR_EBB) {
+ mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
+ mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
+ mtspr(SPRN_BESCR, vcpu->arch.bescr);
+ }
if (!cpu_has_feature(CPU_FTR_ARCH_31))
mtspr(SPRN_TIDR, vcpu->arch.tid);
@@ -257,9 +260,22 @@ static void load_spr_state(struct kvm_vcpu *vcpu,
static void store_spr_state(struct kvm_vcpu *vcpu)
{
vcpu->arch.tar = mfspr(SPRN_TAR);
- vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
- vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
- vcpu->arch.bescr = mfspr(SPRN_BESCR);
+
+ if (vcpu->arch.hfscr & HFSCR_EBB) {
+ vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
+ vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
+ vcpu->arch.bescr = mfspr(SPRN_BESCR);
+ /*
+ * This is like load_fp in context switching, turn off the
+ * facility after it wraps the u8 to try avoiding saving
+ * and restoring the registers each partition switch.
+ */
+ if (!vcpu->arch.nested) {
+ vcpu->arch.load_ebb++;
+ if (!vcpu->arch.load_ebb)
+ vcpu->arch.hfscr &= ~HFSCR_EBB;
+ }
+ }
if (!cpu_has_feature(CPU_FTR_ARCH_31))
vcpu->arch.tid = mfspr(SPRN_TIDR);
--
2.23.0
* [PATCH v1 41/55] KVM: PPC: Book3S HV P9: Demand fault TM facility registers
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin, Fabiano Rosas
Use HFSCR facility disabling to implement demand faulting for TM, with
a hysteresis counter similar to the load_fp etc counters in context
switching that implement the equivalent demand faulting for userspace
facilities.
This speeds up guest entry/exit by avoiding the register save/restore
when a guest is not frequently using them. When a guest does use them
often, there will be some additional demand fault overhead, but these
are not commonly used facilities.
-304 cycles (6681) POWER9 virt-mode NULL hcall with the previous patch
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 26 ++++++++++++++++++++------
arch/powerpc/kvm/book3s_hv_p9_entry.c | 25 +++++++++++++++++--------
3 files changed, 38 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c00c4a565f5..74ee3a5b110e 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -581,6 +581,7 @@ struct kvm_vcpu_arch {
ulong ppr;
u32 pspb;
u8 load_ebb;
+ u8 load_tm;
ulong fscr;
ulong shadow_fscr;
ulong ebbhr;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index dd8199a423cf..5b2114c00c43 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1451,6 +1451,16 @@ static int kvmppc_ebb_unavailable(struct kvm_vcpu *vcpu)
return RESUME_GUEST;
}
+static int kvmppc_tm_unavailable(struct kvm_vcpu *vcpu)
+{
+ if (!(vcpu->arch.hfscr_permitted & HFSCR_TM))
+ return EMULATE_FAIL;
+
+ vcpu->arch.hfscr |= HFSCR_TM;
+
+ return RESUME_GUEST;
+}
+
static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
struct task_struct *tsk)
{
@@ -1747,6 +1757,8 @@ XXX benchmark guest exits
r = kvmppc_pmu_unavailable(vcpu);
if (cause == FSCR_EBB_LG)
r = kvmppc_ebb_unavailable(vcpu);
+ if (cause == FSCR_TM_LG)
+ r = kvmppc_tm_unavailable(vcpu);
}
if (r == EMULATE_FAIL) {
kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
@@ -2763,9 +2775,9 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
vcpu->arch.hfscr_permitted = vcpu->arch.hfscr;
/*
- * PM and EBB are demand-faulted so start with them clear.
+ * PM, EBB and TM are demand-faulted so start with them clear.
*/
- vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB);
+ vcpu->arch.hfscr &= ~(HFSCR_PM | HFSCR_EBB | HFSCR_TM);
kvmppc_mmu_book3s_hv_init(vcpu);
@@ -3835,8 +3847,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
msr |= MSR_VEC;
if (cpu_has_feature(CPU_FTR_VSX))
msr |= MSR_VSX;
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM))
msr |= MSR_TM;
msr = msr_check_and_set(msr);
@@ -4552,8 +4565,9 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
msr |= MSR_VEC;
if (cpu_has_feature(CPU_FTR_VSX))
msr |= MSR_VSX;
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM))
msr |= MSR_TM;
msr = msr_check_and_set(msr);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index f68a3d107d04..db5eb83e26d1 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -295,10 +295,11 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu,
{
bool ret = false;
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM)) {
unsigned long guest_msr = vcpu->arch.shregs.msr;
- if (MSR_TM_ACTIVE(guest_msr)) {
+ if (MSR_TM_ACTIVE(guest_msr) || local_paca->kvm_hstate.fake_suspend) {
kvmppc_restore_tm_hv(vcpu, guest_msr, true);
ret = true;
} else {
@@ -330,15 +331,22 @@ void store_vcpu_state(struct kvm_vcpu *vcpu)
#endif
vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM)) {
unsigned long guest_msr = vcpu->arch.shregs.msr;
- if (MSR_TM_ACTIVE(guest_msr)) {
+ if (MSR_TM_ACTIVE(guest_msr) || local_paca->kvm_hstate.fake_suspend) {
kvmppc_save_tm_hv(vcpu, guest_msr, true);
} else {
vcpu->arch.texasr = mfspr(SPRN_TEXASR);
vcpu->arch.tfhar = mfspr(SPRN_TFHAR);
vcpu->arch.tfiar = mfspr(SPRN_TFIAR);
+
+ if (!vcpu->arch.nested) {
+ vcpu->arch.load_tm++; /* see load_ebb comment */
+ if (!vcpu->arch.load_tm)
+ vcpu->arch.hfscr &= ~HFSCR_TM;
+ }
}
}
}
@@ -629,8 +637,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
msr |= MSR_VEC;
if (cpu_has_feature(CPU_FTR_VSX))
msr |= MSR_VSX;
- if (cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM))
msr |= MSR_TM;
msr = msr_check_and_set(msr);
/* Save MSR for restore. This is after hard disable, so EE is clear. */
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 42/55] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (40 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 41/55] KVM: PPC: Book3S HV P9: Demand fault TM " Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 43/55] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Nicholas Piggin
` (12 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Linux implements SPR save/restore, including storage space for registers
in the task struct, for process context switching. Make use of this
similarly to the way we make use of the context switching fp/vec
save/restore.
This improves code reuse, allows some stack space to be saved, and helps
with avoiding VRSAVE updates if they are not required.
-61 cycles (6620) POWER9 virt-mode NULL hcall
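The win comes from the usual compare-before-move pattern: the host values
now live in current->thread (kept up to date by save_sprs()), so each
exit-time mtspr can be skipped when the guest and host values already
match. A minimal sketch of the idea, where the mtspr_* wrappers are
hypothetical stand-ins for the real mtspr calls:

  /* Host SPR values are cached in the task struct, so restoring them
   * is a cheap compare rather than an unconditional, expensive mtspr. */
  struct sprs { unsigned long fscr, dscr, tar; };

  void mtspr_fscr(unsigned long val);     /* hypothetical wrappers */
  void mtspr_dscr(unsigned long val);
  void mtspr_tar(unsigned long val);

  static void restore_host_sprs(const struct sprs *guest,
                                const struct sprs *host)
  {
          if (guest->fscr != host->fscr)
                  mtspr_fscr(host->fscr);
          if (guest->dscr != host->dscr)
                  mtspr_dscr(host->dscr);
          if (guest->tar != host->tar)
                  mtspr_tar(host->tar);
  }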
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/switch_to.h | 2 +
arch/powerpc/kernel/process.c | 6 ++
arch/powerpc/kvm/book3s_hv.c | 21 +-----
arch/powerpc/kvm/book3s_hv.h | 3 -
arch/powerpc/kvm/book3s_hv_p9_entry.c | 93 +++++++++++++++++++--------
5 files changed, 74 insertions(+), 51 deletions(-)
diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
index 9d1fbd8be1c7..de17c45314bc 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@ -112,6 +112,8 @@ static inline void clear_task_ebb(struct task_struct *t)
#endif
}
+void kvmppc_save_current_sprs(void);
+
extern int set_thread_tidr(struct task_struct *t);
#endif /* _ASM_POWERPC_SWITCH_TO_H */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 00b55b38a460..d54baa3e20d2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1180,6 +1180,12 @@ static inline void save_sprs(struct thread_struct *t)
#endif
}
+void kvmppc_save_current_sprs(void)
+{
+ save_sprs(&current->thread);
+}
+EXPORT_SYMBOL_GPL(kvmppc_save_current_sprs);
+
static inline void restore_sprs(struct thread_struct *old_thread,
struct thread_struct *new_thread)
{
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5b2114c00c43..c0a04ce39e00 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4510,9 +4510,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
struct kvm_run *run = vcpu->run;
int r;
int srcu_idx;
- unsigned long ebb_regs[3] = {}; /* shut up GCC */
- unsigned long user_tar = 0;
- unsigned int user_vrsave;
struct kvm *kvm;
unsigned long msr;
@@ -4573,14 +4570,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
save_user_regs_kvm();
- /* Save userspace EBB and other register values */
- if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
- ebb_regs[0] = mfspr(SPRN_EBBHR);
- ebb_regs[1] = mfspr(SPRN_EBBRR);
- ebb_regs[2] = mfspr(SPRN_BESCR);
- user_tar = mfspr(SPRN_TAR);
- }
- user_vrsave = mfspr(SPRN_VRSAVE);
+ kvmppc_save_current_sprs();
vcpu->arch.waitp = &vcpu->arch.vcore->wait;
vcpu->arch.pgdir = kvm->mm->pgd;
@@ -4621,15 +4611,6 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
}
} while (is_kvmppc_resume_guest(r));
- /* Restore userspace EBB and other register values */
- if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
- mtspr(SPRN_EBBHR, ebb_regs[0]);
- mtspr(SPRN_EBBRR, ebb_regs[1]);
- mtspr(SPRN_BESCR, ebb_regs[2]);
- mtspr(SPRN_TAR, user_tar);
- }
- mtspr(SPRN_VRSAVE, user_vrsave);
-
vcpu->arch.state = KVMPPC_VCPU_NOTREADY;
atomic_dec(&kvm->arch.vcpus_running);
diff --git a/arch/powerpc/kvm/book3s_hv.h b/arch/powerpc/kvm/book3s_hv.h
index a9065a380547..04884e271862 100644
--- a/arch/powerpc/kvm/book3s_hv.h
+++ b/arch/powerpc/kvm/book3s_hv.h
@@ -3,11 +3,8 @@
* Privileged (non-hypervisor) host registers to save.
*/
struct p9_host_os_sprs {
- unsigned long dscr;
- unsigned long tidr;
unsigned long iamr;
unsigned long amr;
- unsigned long fscr;
unsigned int pmc1;
unsigned int pmc2;
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index db5eb83e26d1..5fca0a09425d 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -223,15 +223,26 @@ EXPORT_SYMBOL_GPL(switch_pmu_to_host);
static void load_spr_state(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
+ /* TAR is very fast */
mtspr(SPRN_TAR, vcpu->arch.tar);
+#ifdef CONFIG_ALTIVEC
+ if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
+ current->thread.vrsave != vcpu->arch.vrsave)
+ mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
+#endif
+
if (vcpu->arch.hfscr & HFSCR_EBB) {
- mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
- mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
- mtspr(SPRN_BESCR, vcpu->arch.bescr);
+ if (current->thread.ebbhr != vcpu->arch.ebbhr)
+ mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
+ if (current->thread.ebbrr != vcpu->arch.ebbrr)
+ mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
+ if (current->thread.bescr != vcpu->arch.bescr)
+ mtspr(SPRN_BESCR, vcpu->arch.bescr);
}
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
+ if (!cpu_has_feature(CPU_FTR_ARCH_31) &&
+ current->thread.tidr != vcpu->arch.tid)
mtspr(SPRN_TIDR, vcpu->arch.tid);
if (host_os_sprs->iamr != vcpu->arch.iamr)
mtspr(SPRN_IAMR, vcpu->arch.iamr);
@@ -239,9 +250,9 @@ static void load_spr_state(struct kvm_vcpu *vcpu,
mtspr(SPRN_AMR, vcpu->arch.amr);
if (vcpu->arch.uamor != 0)
mtspr(SPRN_UAMOR, vcpu->arch.uamor);
- if (host_os_sprs->fscr != vcpu->arch.fscr)
+ if (current->thread.fscr != vcpu->arch.fscr)
mtspr(SPRN_FSCR, vcpu->arch.fscr);
- if (host_os_sprs->dscr != vcpu->arch.dscr)
+ if (current->thread.dscr != vcpu->arch.dscr)
mtspr(SPRN_DSCR, vcpu->arch.dscr);
if (vcpu->arch.pspb != 0)
mtspr(SPRN_PSPB, vcpu->arch.pspb);
@@ -261,20 +272,15 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
{
vcpu->arch.tar = mfspr(SPRN_TAR);
+#ifdef CONFIG_ALTIVEC
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
+#endif
+
if (vcpu->arch.hfscr & HFSCR_EBB) {
vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
vcpu->arch.bescr = mfspr(SPRN_BESCR);
- /*
- * This is like load_fp in context switching, turn off the
- * facility after it wraps the u8 to try avoiding saving
- * and restoring the registers each partition switch.
- */
- if (!vcpu->arch.nested) {
- vcpu->arch.load_ebb++;
- if (!vcpu->arch.load_ebb)
- vcpu->arch.hfscr &= ~HFSCR_EBB;
- }
}
if (!cpu_has_feature(CPU_FTR_ARCH_31))
@@ -315,7 +321,6 @@ bool load_vcpu_state(struct kvm_vcpu *vcpu,
#ifdef CONFIG_ALTIVEC
load_vr_state(&vcpu->arch.vr);
#endif
- mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
return ret;
}
@@ -329,7 +334,6 @@ void store_vcpu_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_ALTIVEC
store_vr_state(&vcpu->arch.vr);
#endif
- vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
if ((cpu_has_feature(CPU_FTR_TM) ||
cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
@@ -354,12 +358,8 @@ EXPORT_SYMBOL_GPL(store_vcpu_state);
void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
{
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- host_os_sprs->tidr = mfspr(SPRN_TIDR);
host_os_sprs->iamr = mfspr(SPRN_IAMR);
host_os_sprs->amr = mfspr(SPRN_AMR);
- host_os_sprs->fscr = mfspr(SPRN_FSCR);
- host_os_sprs->dscr = mfspr(SPRN_DSCR);
}
EXPORT_SYMBOL_GPL(save_p9_host_os_sprs);
@@ -367,26 +367,63 @@ EXPORT_SYMBOL_GPL(save_p9_host_os_sprs);
void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
struct p9_host_os_sprs *host_os_sprs)
{
+ /*
+ * current->thread.xxx registers must all be restored to host
+ * values before a potential context switch, otherwise the context
+ * switch itself will overwrite current->thread.xxx with the values
+ * from the guest SPRs.
+ */
+
mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
- if (!cpu_has_feature(CPU_FTR_ARCH_31))
- mtspr(SPRN_TIDR, host_os_sprs->tidr);
+ if (!cpu_has_feature(CPU_FTR_ARCH_31) &&
+ current->thread.tidr != vcpu->arch.tid)
+ mtspr(SPRN_TIDR, current->thread.tidr);
if (host_os_sprs->iamr != vcpu->arch.iamr)
mtspr(SPRN_IAMR, host_os_sprs->iamr);
if (vcpu->arch.uamor != 0)
mtspr(SPRN_UAMOR, 0);
if (host_os_sprs->amr != vcpu->arch.amr)
mtspr(SPRN_AMR, host_os_sprs->amr);
- if (host_os_sprs->fscr != vcpu->arch.fscr)
- mtspr(SPRN_FSCR, host_os_sprs->fscr);
- if (host_os_sprs->dscr != vcpu->arch.dscr)
- mtspr(SPRN_DSCR, host_os_sprs->dscr);
+ if (current->thread.fscr != vcpu->arch.fscr)
+ mtspr(SPRN_FSCR, current->thread.fscr);
+ if (current->thread.dscr != vcpu->arch.dscr)
+ mtspr(SPRN_DSCR, current->thread.dscr);
if (vcpu->arch.pspb != 0)
mtspr(SPRN_PSPB, 0);
/* Save guest CTRL register, set runlatch to 1 */
if (!(vcpu->arch.ctrl & 1))
mtspr(SPRN_CTRLT, 1);
+
+#ifdef CONFIG_ALTIVEC
+ if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
+ vcpu->arch.vrsave != current->thread.vrsave)
+ mtspr(SPRN_VRSAVE, current->thread.vrsave);
+#endif
+ if (vcpu->arch.hfscr & HFSCR_EBB) {
+ if (vcpu->arch.bescr != current->thread.bescr)
+ mtspr(SPRN_BESCR, current->thread.bescr);
+ if (vcpu->arch.ebbhr != current->thread.ebbhr)
+ mtspr(SPRN_EBBHR, current->thread.ebbhr);
+ if (vcpu->arch.ebbrr != current->thread.ebbrr)
+ mtspr(SPRN_EBBRR, current->thread.ebbrr);
+
+ if (!vcpu->arch.nested) {
+ /*
+ * This is like load_fp in context switching, turn off
+ * the facility after it wraps the u8 to try avoiding
+ * saving and restoring the registers each partition
+ * switch.
+ */
+ vcpu->arch.load_ebb++;
+ if (!vcpu->arch.load_ebb)
+ vcpu->arch.hfscr &= ~HFSCR_EBB;
+ }
+ }
+
+ if (vcpu->arch.tar != current->thread.tar)
+ mtspr(SPRN_TAR, current->thread.tar);
}
EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs);
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 43/55] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (41 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 42/55] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 44/55] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Nicholas Piggin
` (11 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Tighten up partition switching code synchronisation and comments.
In particular, hwsync ; isync is required after the last access that is
performed in the context of a partition, before the partition is
switched away from.
-301 cycles (6319) POWER9 virt-mode NULL hcall
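Condensed, the ordering contract being documented is as follows; this is
a sketch only, with hypothetical mtspr_* wrappers standing in for the
real mtspr calls:

  void mtspr_lpid(unsigned long val);     /* hypothetical wrappers */
  void mtspr_lpcr(unsigned long val);
  void mtspr_pid(unsigned long val);

  static inline void switch_partition(unsigned long lpid,
                                      unsigned long lpcr,
                                      unsigned long pid)
  {
          /*
           * hwsync drains prior stores so the not-my-LPAR tlbie logic
           * cannot overlook them; isync ensures no prior instruction is
           * still executing in the old partition context.
           */
          asm volatile("hwsync" ::: "memory");
          asm volatile("isync");
          mtspr_lpid(lpid);
          mtspr_lpcr(lpcr);
          mtspr_pid(pid);
          /*
           * No trailing isync: the hrfid (guest entry) or mtmsrd (host
           * return) that follows is itself context synchronising.
           */
  }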
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 4 +++
arch/powerpc/kvm/book3s_hv_p9_entry.c | 40 +++++++++++++++++++-------
2 files changed, 33 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index b5905ae4377c..c5508744e14c 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -54,6 +54,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
preempt_disable();
+ asm volatile("hwsync" ::: "memory");
+ isync();
/* switch the lpid first to avoid running host with unallocated pid */
old_lpid = mfspr(SPRN_LPID);
if (old_lpid != lpid)
@@ -70,6 +72,8 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
else
ret = copy_to_user_nofault((void __user *)to, from, n);
+ asm volatile("hwsync" ::: "memory");
+ isync();
/* switch the pid first to avoid running host with unallocated pid */
if (quadrant == 1 && pid != old_pid)
mtspr(SPRN_PID, old_pid);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 5fca0a09425d..0aad2bf29d6e 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -521,17 +521,19 @@ static void switch_mmu_to_guest_radix(struct kvm *kvm, struct kvm_vcpu *vcpu, u6
lpid = nested ? nested->shadow_lpid : kvm->arch.lpid;
/*
- * All the isync()s are overkill but trivially follow the ISA
- * requirements. Some can likely be replaced with justification
- * comment for why they are not needed.
+ * Prior memory accesses to host PID Q3 must be completed before we
+ * start switching, and stores must be drained to avoid not-my-LPAR
+ * logic (see switch_mmu_to_host).
*/
+ asm volatile("hwsync" ::: "memory");
isync();
mtspr(SPRN_LPID, lpid);
- isync();
mtspr(SPRN_LPCR, lpcr);
- isync();
mtspr(SPRN_PID, vcpu->arch.pid);
- isync();
+ /*
+ * isync not required here because we are HRFID'ing to guest before
+ * any guest context access, which is context synchronising.
+ */
}
static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr)
@@ -541,25 +543,41 @@ static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64
lpid = kvm->arch.lpid;
+ /*
+ * See switch_mmu_to_guest_radix. ptesync should not be required here
+ * even if the host is in HPT mode because speculative accesses would
+ * not cause RC updates (we are in real mode).
+ */
+ asm volatile("hwsync" ::: "memory");
+ isync();
mtspr(SPRN_LPID, lpid);
mtspr(SPRN_LPCR, lpcr);
mtspr(SPRN_PID, vcpu->arch.pid);
for (i = 0; i < vcpu->arch.slb_max; i++)
mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv);
-
- isync();
+ /*
+ * isync not required here, see switch_mmu_to_guest_radix.
+ */
}
static void switch_mmu_to_host(struct kvm *kvm, u32 pid)
{
+ /*
+ * The guest has exited, so guest MMU context is no longer being
+ * non-speculatively accessed, but a hwsync is needed before the
+ * mtLPIDR / mtPIDR switch, in order to ensure all stores are drained,
+ * so the not-my-LPAR tlbie logic does not overlook them.
+ */
+ asm volatile("hwsync" ::: "memory");
isync();
mtspr(SPRN_PID, pid);
- isync();
mtspr(SPRN_LPID, kvm->arch.host_lpid);
- isync();
mtspr(SPRN_LPCR, kvm->arch.host_lpcr);
- isync();
+ /*
+ * isync is not required after the switch, because mtmsrd with L=0
+ * is performed after this switch, which is context synchronising.
+ */
if (!radix_enabled())
slb_restore_bolted_realmode();
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 44/55] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (42 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 43/55] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 45/55] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Nicholas Piggin
` (10 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Some of the DAWR SPR accesses are already predicated on dawr_enabled();
apply this to the remainder of the accesses.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 34 ++++++++++++++++-----------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 0aad2bf29d6e..976687c3709a 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -656,13 +656,16 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
host_hfscr = mfspr(SPRN_HFSCR);
host_ciabr = mfspr(SPRN_CIABR);
- host_dawr0 = mfspr(SPRN_DAWR0);
- host_dawrx0 = mfspr(SPRN_DAWRX0);
host_psscr = mfspr(SPRN_PSSCR);
host_pidr = mfspr(SPRN_PID);
- if (cpu_has_feature(CPU_FTR_DAWR1)) {
- host_dawr1 = mfspr(SPRN_DAWR1);
- host_dawrx1 = mfspr(SPRN_DAWRX1);
+
+ if (dawr_enabled()) {
+ host_dawr0 = mfspr(SPRN_DAWR0);
+ host_dawrx0 = mfspr(SPRN_DAWRX0);
+ if (cpu_has_feature(CPU_FTR_DAWR1)) {
+ host_dawr1 = mfspr(SPRN_DAWR1);
+ host_dawrx1 = mfspr(SPRN_DAWRX1);
+ }
}
local_paca->kvm_hstate.host_purr = mfspr(SPRN_PURR);
@@ -996,15 +999,18 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_HFSCR, host_hfscr);
if (vcpu->arch.ciabr != host_ciabr)
mtspr(SPRN_CIABR, host_ciabr);
- if (vcpu->arch.dawr0 != host_dawr0)
- mtspr(SPRN_DAWR0, host_dawr0);
- if (vcpu->arch.dawrx0 != host_dawrx0)
- mtspr(SPRN_DAWRX0, host_dawrx0);
- if (cpu_has_feature(CPU_FTR_DAWR1)) {
- if (vcpu->arch.dawr1 != host_dawr1)
- mtspr(SPRN_DAWR1, host_dawr1);
- if (vcpu->arch.dawrx1 != host_dawrx1)
- mtspr(SPRN_DAWRX1, host_dawrx1);
+
+ if (dawr_enabled()) {
+ if (vcpu->arch.dawr0 != host_dawr0)
+ mtspr(SPRN_DAWR0, host_dawr0);
+ if (vcpu->arch.dawrx0 != host_dawrx0)
+ mtspr(SPRN_DAWRX0, host_dawrx0);
+ if (cpu_has_feature(CPU_FTR_DAWR1)) {
+ if (vcpu->arch.dawr1 != host_dawr1)
+ mtspr(SPRN_DAWR1, host_dawr1);
+ if (vcpu->arch.dawrx1 != host_dawrx1)
+ mtspr(SPRN_DAWRX1, host_dawrx1);
+ }
}
if (vc->dpdes)
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 45/55] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (43 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 44/55] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 46/55] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Nicholas Piggin
` (9 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This also moves the PSSCR update in nested entry to avoid an SPR
scoreboard stall.
-45 cycles (6276) POWER9 virt-mode NULL hcall
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 7 +++++--
arch/powerpc/kvm/book3s_hv_p9_entry.c | 26 +++++++++++++++++++-------
2 files changed, 24 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index c0a04ce39e00..a37ab798eb7c 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3856,7 +3856,9 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
msr = mfmsr(); /* TM restore can update msr */
- mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
+ if (vcpu->arch.psscr != host_psscr)
+ mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
+
kvmhv_save_hv_regs(vcpu, &hvregs);
hvregs.lpcr = lpcr;
vcpu->arch.regs.msr = vcpu->arch.shregs.msr;
@@ -3897,7 +3899,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
vcpu->arch.shregs.dar = mfspr(SPRN_DAR);
vcpu->arch.shregs.dsisr = mfspr(SPRN_DSISR);
vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
- mtspr(SPRN_PSSCR_PR, host_psscr);
store_vcpu_state(vcpu);
@@ -3910,6 +3911,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
timer_rearm_host_dec(*tb);
restore_p9_host_os_sprs(vcpu, &host_os_sprs);
+ if (vcpu->arch.psscr != host_psscr)
+ mtspr(SPRN_PSSCR_PR, host_psscr);
return trap;
}
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 976687c3709a..52690af66ca9 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -639,6 +639,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
unsigned long host_dawr0;
unsigned long host_dawrx0;
unsigned long host_psscr;
+ unsigned long host_hpsscr;
unsigned long host_pidr;
unsigned long host_dawr1;
unsigned long host_dawrx1;
@@ -656,7 +657,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
host_hfscr = mfspr(SPRN_HFSCR);
host_ciabr = mfspr(SPRN_CIABR);
- host_psscr = mfspr(SPRN_PSSCR);
+ host_psscr = mfspr(SPRN_PSSCR_PR);
+ if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2))
+ host_hpsscr = mfspr(SPRN_PSSCR);
host_pidr = mfspr(SPRN_PID);
if (dawr_enabled()) {
@@ -740,8 +743,14 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
if (vcpu->arch.ciabr != host_ciabr)
mtspr(SPRN_CIABR, vcpu->arch.ciabr);
- mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC |
- (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+
+ if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2)) {
+ mtspr(SPRN_PSSCR, vcpu->arch.psscr | PSSCR_EC |
+ (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+ } else {
+ if (vcpu->arch.psscr != host_psscr)
+ mtspr(SPRN_PSSCR_PR, vcpu->arch.psscr);
+ }
mtspr(SPRN_HFSCR, vcpu->arch.hfscr);
@@ -947,7 +956,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.ic = mfspr(SPRN_IC);
vcpu->arch.pid = mfspr(SPRN_PID);
- vcpu->arch.psscr = mfspr(SPRN_PSSCR) & PSSCR_GUEST_VIS;
+ vcpu->arch.psscr = mfspr(SPRN_PSSCR_PR);
vcpu->arch.shregs.sprg0 = mfspr(SPRN_SPRG0);
vcpu->arch.shregs.sprg1 = mfspr(SPRN_SPRG1);
@@ -993,9 +1002,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
mtspr(SPRN_PURR, local_paca->kvm_hstate.host_purr);
mtspr(SPRN_SPURR, local_paca->kvm_hstate.host_spurr);
- /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
- mtspr(SPRN_PSSCR, host_psscr |
- (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+ if (cpu_has_feature(CPU_FTRS_POWER9_DD2_2)) {
+ /* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
+ mtspr(SPRN_PSSCR, host_hpsscr |
+ (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+ }
+
mtspr(SPRN_HFSCR, host_hfscr);
if (vcpu->arch.ciabr != host_ciabr)
mtspr(SPRN_CIABR, host_ciabr);
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 46/55] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (44 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 45/55] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 47/55] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Nicholas Piggin
` (8 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Use the existing TLB flushing logic to IPI the previous CPU and run the
necessary radix GTSE barriers there, before the guest vCPU is run on a
new physical CPU, to handle the case of an interrupted guest tlbie
sequence.
This results in more IPIs than the TLB flush logic requires, but it's
a significant win for common case scheduling when the vCPU remains on
the same physical CPU.
-522 cycles (5754) POWER9 virt-mode NULL hcall
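Reduced to its essentials, the change looks like the sketch below.
prepare_run_on() is an illustrative name; the IPI callback and
smp_call_function_single() usage mirror what the patch itself does:

  /* Runs on the vCPU's previous physical CPU via IPI. */
  static void do_migrate_away_barriers(void *arg)
  {
          /* a GTSE guest may have been interrupted mid tlbie sequence */
          asm volatile("eieio; tlbsync; ptesync" ::: "memory");
  }

  static void prepare_run_on(int prev_cpu, int new_cpu, void *vcpu)
  {
          /*
           * Pay the extra IPI only when the vCPU actually migrates; the
           * common same-CPU case no longer runs the barriers at all.
           */
          if (prev_cpu >= 0 && prev_cpu != new_cpu)
                  smp_call_function_single(prev_cpu,
                                           do_migrate_away_barriers,
                                           vcpu, 1);
  }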
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 31 +++++++++++++++++++++++----
arch/powerpc/kvm/book3s_hv_p9_entry.c | 9 --------
2 files changed, 27 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a37ab798eb7c..3e5c6b745394 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3005,6 +3005,25 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
smp_call_function_single(i, do_nothing, NULL, 1);
}
+static void do_migrate_away_vcpu(void *arg)
+{
+ struct kvm_vcpu *vcpu = arg;
+ struct kvm *kvm = vcpu->kvm;
+
+ /*
+ * If the guest has GTSE, it may execute tlbie, so do an eieio; tlbsync;
+ * ptesync sequence on the old CPU before migrating to a new one, in
+ * case we interrupted the guest between a tlbie ; eieio ;
+ * tlbsync; ptesync sequence.
+ *
+ * Otherwise, ptesync is sufficient.
+ */
+ if (kvm->arch.lpcr & LPCR_GTSE)
+ asm volatile("eieio; tlbsync; ptesync");
+ else
+ asm volatile("ptesync");
+}
+
static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
{
struct kvm_nested_guest *nested = vcpu->arch.nested;
@@ -3032,10 +3051,14 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
* so we use a single bit in .need_tlb_flush for all 4 threads.
*/
if (prev_cpu != pcpu) {
- if (prev_cpu >= 0 &&
- cpu_first_tlb_thread_sibling(prev_cpu) !=
- cpu_first_tlb_thread_sibling(pcpu))
- radix_flush_cpu(kvm, prev_cpu, vcpu);
+ if (prev_cpu >= 0) {
+ if (cpu_first_tlb_thread_sibling(prev_cpu) !=
+ cpu_first_tlb_thread_sibling(pcpu))
+ radix_flush_cpu(kvm, prev_cpu, vcpu);
+
+ smp_call_function_single(prev_cpu,
+ do_migrate_away_vcpu, vcpu, 1);
+ }
if (nested)
nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu;
else
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 52690af66ca9..1bb81be09d4f 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -1039,15 +1039,6 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
local_paca->kvm_hstate.in_guest = KVM_GUEST_MODE_NONE;
- if (kvm_is_radix(kvm)) {
- /*
- * Since this is radix, do a eieio; tlbsync; ptesync sequence
- * in case we interrupted the guest between a tlbie and a
- * ptesync.
- */
- asm volatile("eieio; tlbsync; ptesync");
- }
-
/*
* cp_abort is required if the processor supports local copy-paste
* to clear the copy buffer that was under control of the guest.
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 47/55] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (45 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 46/55] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Nicholas Piggin
` (7 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
mftb() is expensive and one can be avoided on nested guest dispatch.
If the time checking code distinguishes between the L0 timer and the
nested HV timer, then both can be tested in the same place with the
same mftb() value.
This also nicely illustrates the relationship between the L0 and nested
HV timers.
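Condensed, the entry-time check becomes the following kernel-style
sketch; check_deadlines() is a hypothetical wrapper around the logic the
patch adds to kvmhv_p9_guest_entry(), using kernel u64 types:

  /* One mftb() sample (tb) serves both deadline tests: the L0 host
   * timer and the nested guest's HDEC-derived time limit. */
  static int check_deadlines(u64 tb, u64 host_next_timer, u64 *time_limit)
  {
          if (host_next_timer < tb)
                  return BOOK3S_INTERRUPT_HV_DECREMENTER;
          if (host_next_timer < *time_limit)
                  *time_limit = host_next_timer;
          else if (tb >= *time_limit)     /* nested HV decrementer expired */
                  return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER;
          return 0;
  }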
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_asm.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++
arch/powerpc/kvm/book3s_hv_nested.c | 5 -----
3 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h
index fbbf3cec92e9..d68d71987d5c 100644
--- a/arch/powerpc/include/asm/kvm_asm.h
+++ b/arch/powerpc/include/asm/kvm_asm.h
@@ -79,6 +79,7 @@
#define BOOK3S_INTERRUPT_FP_UNAVAIL 0x800
#define BOOK3S_INTERRUPT_DECREMENTER 0x900
#define BOOK3S_INTERRUPT_HV_DECREMENTER 0x980
+#define BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER 0x1980
#define BOOK3S_INTERRUPT_DOORBELL 0xa00
#define BOOK3S_INTERRUPT_SYSCALL 0xc00
#define BOOK3S_INTERRUPT_TRACE 0xd00
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 3e5c6b745394..b95e0c5e5557 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1491,6 +1491,10 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
run->ready_for_interrupt_injection = 1;
switch (vcpu->arch.trap) {
/* We're good on these - the host merely wanted to get our attention */
+ case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER:
+ WARN_ON_ONCE(1); /* Should never happen */
+ vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER;
+ fallthrough;
case BOOK3S_INTERRUPT_HV_DECREMENTER:
vcpu->stat.dec_exits++;
r = RESUME_GUEST;
@@ -1821,6 +1825,12 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
vcpu->stat.ext_intr_exits++;
r = RESUME_GUEST;
break;
+ /* These need to go to the nested HV */
+ case BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER:
+ vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER;
+ vcpu->stat.dec_exits++;
+ r = RESUME_HOST;
+ break;
/* SR/HMI/PMI are HV interrupts that host has handled. Resume guest.*/
case BOOK3S_INTERRUPT_HMI:
case BOOK3S_INTERRUPT_PERFMON:
@@ -3955,6 +3965,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
return BOOK3S_INTERRUPT_HV_DECREMENTER;
if (next_timer < time_limit)
time_limit = next_timer;
+ else if (*tb >= time_limit) /* nested time limit */
+ return BOOK3S_INTERRUPT_NESTED_HV_DECREMENTER;
vcpu->arch.ceded = 0;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index fad7bc8736ea..322064564260 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -397,11 +397,6 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
vcpu->arch.ret = RESUME_GUEST;
vcpu->arch.trap = 0;
do {
- if (mftb() >= hdec_exp) {
- vcpu->arch.trap = BOOK3S_INTERRUPT_HV_DECREMENTER;
- r = RESUME_HOST;
- break;
- }
r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
} while (is_kvmppc_resume_guest(r));
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (46 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 47/55] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 49/55] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving Nicholas Piggin
` (6 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Rearrange the MSR saving on entry so it does not follow the mtmsrd to
disable interrupts, avoiding a possible RAW scoreboard stall.
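In outline, with msr sampled by mfmsr() well before this point so the
read does not immediately trail an SPR write, the disable path condenses
to the sketch below (the function name is illustrative; the real code
also records PACA_IRQ_HARD_DIS afterwards):

  static unsigned long hard_disable_and_set_facilities(unsigned long msr,
                                                       unsigned long needed)
  {
          msr &= ~MSR_EE;                  /* hard-disable interrupts */
          if ((msr & needed) != needed) {
                  msr |= needed;
                  __mtmsrd(msr, 0);        /* one combined MSR write */
          } else {
                  __hard_irq_disable();    /* EE-only disable is enough */
          }
          return msr;
  }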
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +
arch/powerpc/kvm/book3s_hv.c | 18 ++-----
arch/powerpc/kvm/book3s_hv_p9_entry.c | 66 +++++++++++++++---------
3 files changed, 47 insertions(+), 39 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 52e2b7a352c7..4b0753e03731 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -154,6 +154,8 @@ static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
return radix;
}
+unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr);
+
int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb);
#define KVM_DEFAULT_HPT_ORDER 24 /* 16MB HPT by default */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b95e0c5e5557..ee4e38cf5df4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3858,6 +3858,8 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
s64 dec;
int trap;
+ msr = mfmsr();
+
save_p9_host_os_sprs(&host_os_sprs);
/*
@@ -3868,24 +3870,10 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
*/
host_psscr = mfspr(SPRN_PSSCR_PR);
- hard_irq_disable();
+ kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
if (lazy_irq_pending())
return 0;
- /* MSR bits may have been cleared by context switch */
- msr = 0;
- if (IS_ENABLED(CONFIG_PPC_FPU))
- msr |= MSR_FP;
- if (cpu_has_feature(CPU_FTR_ALTIVEC))
- msr |= MSR_VEC;
- if (cpu_has_feature(CPU_FTR_VSX))
- msr |= MSR_VSX;
- if ((cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
- (vcpu->arch.hfscr & HFSCR_TM))
- msr |= MSR_TM;
- msr = msr_check_and_set(msr);
-
if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
msr = mfmsr(); /* TM restore can update msr */
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 1bb81be09d4f..1287dac918a0 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -622,6 +622,44 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
}
}
+unsigned long kvmppc_msr_hard_disable_set_facilities(struct kvm_vcpu *vcpu, unsigned long msr)
+{
+ unsigned long msr_needed = 0;
+
+ msr &= ~MSR_EE;
+
+ /* MSR bits may have been cleared by context switch so must recheck */
+ if (IS_ENABLED(CONFIG_PPC_FPU))
+ msr_needed |= MSR_FP;
+ if (cpu_has_feature(CPU_FTR_ALTIVEC))
+ msr_needed |= MSR_VEC;
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr_needed |= MSR_VSX;
+ if ((cpu_has_feature(CPU_FTR_TM) ||
+ cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
+ (vcpu->arch.hfscr & HFSCR_TM))
+ msr_needed |= MSR_TM;
+
+ /*
+ * This could be combined with MSR[RI] clearing, but that expands
+ * the unrecoverable window. It would be better to cover unrecoverable
+ * with KVM bad interrupt handling rather than use MSR[RI] at all.
+ *
+ * Much more difficult and less worthwhile to combine with IR/DR
+ * disable.
+ */
+ if ((msr & msr_needed) != msr_needed) {
+ msr |= msr_needed;
+ __mtmsrd(msr, 0);
+ } else {
+ __hard_irq_disable();
+ }
+ local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+
+ return msr;
+}
+EXPORT_SYMBOL_GPL(kvmppc_msr_hard_disable_set_facilities);
+
int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
{
struct p9_host_os_sprs host_os_sprs;
@@ -655,6 +693,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.ceded = 0;
+ /* Save MSR for restore, with EE clear. */
+ msr = mfmsr() & ~MSR_EE;
+
host_hfscr = mfspr(SPRN_HFSCR);
host_ciabr = mfspr(SPRN_CIABR);
host_psscr = mfspr(SPRN_PSSCR_PR);
@@ -676,35 +717,12 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
save_p9_host_os_sprs(&host_os_sprs);
- /*
- * This could be combined with MSR[RI] clearing, but that expands
- * the unrecoverable window. It would be better to cover unrecoverable
- * with KVM bad interrupt handling rather than use MSR[RI] at all.
- *
- * Much more difficult and less worthwhile to combine with IR/DR
- * disable.
- */
- hard_irq_disable();
+ msr = kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
if (lazy_irq_pending()) {
trap = 0;
goto out;
}
- /* MSR bits may have been cleared by context switch */
- msr = 0;
- if (IS_ENABLED(CONFIG_PPC_FPU))
- msr |= MSR_FP;
- if (cpu_has_feature(CPU_FTR_ALTIVEC))
- msr |= MSR_VEC;
- if (cpu_has_feature(CPU_FTR_VSX))
- msr |= MSR_VSX;
- if ((cpu_has_feature(CPU_FTR_TM) ||
- cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) &&
- (vcpu->arch.hfscr & HFSCR_TM))
- msr |= MSR_TM;
- msr = msr_check_and_set(msr);
- /* Save MSR for restore. This is after hard disable, so EE is clear. */
-
if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
msr = mfmsr(); /* MSR may have been updated */
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 49/55] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (47 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 50/55] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready Nicholas Piggin
` (5 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
slbmfee/slbmfev instructions are very expensive, more so than a regular
mfspr instruction, so minimising them significantly improves hash guest
exit performance. The slbmfev is only required if slbmfee found a valid
SLB entry.
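In sketch form, the exit-time SLB save loop becomes the following,
condensed from the patch; save_slb_entry() is a hypothetical helper
standing in for filling vcpu->arch.slb[]:

  for (i = 0; i < slb_nr; i++) {
          u64 slbee = mfslbe(i);          /* slbmfee: read unconditionally */

          if (slbee & SLB_ESID_V) {
                  /* slbmfev only pays off for valid entries */
                  u64 slbev = mfslbv(i);

                  save_slb_entry(nr++, slbee | i, slbev);
          }
  }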
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv_p9_entry.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 1287dac918a0..338873f90c72 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -477,10 +477,22 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator
#define accumulate_time(vcpu, next) do {} while (0)
#endif
-static inline void mfslb(unsigned int idx, u64 *slbee, u64 *slbev)
+static inline u64 mfslbv(unsigned int idx)
{
- asm volatile("slbmfev %0,%1" : "=r" (*slbev) : "r" (idx));
- asm volatile("slbmfee %0,%1" : "=r" (*slbee) : "r" (idx));
+ u64 slbev;
+
+ asm volatile("slbmfev %0,%1" : "=r" (slbev) : "r" (idx));
+
+ return slbev;
+}
+
+static inline u64 mfslbe(unsigned int idx)
+{
+ u64 slbee;
+
+ asm volatile("slbmfee %0,%1" : "=r" (slbee) : "r" (idx));
+
+ return slbee;
}
static inline void mtslb(u64 slbee, u64 slbev)
@@ -610,8 +622,10 @@ static void save_clear_guest_mmu(struct kvm *kvm, struct kvm_vcpu *vcpu)
*/
for (i = 0; i < vcpu->arch.slb_nr; i++) {
u64 slbee, slbev;
- mfslb(i, &slbee, &slbev);
+
+ slbee = mfslbe(i);
if (slbee & SLB_ESID_V) {
+ slbev = mfslbv(i);
vcpu->arch.slb[nr].orige = slbee | i;
vcpu->arch.slb[nr].origv = slbev;
nr++;
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 50/55] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (48 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 49/55] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 51/55] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit Nicholas Piggin
` (4 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
The MMU will almost always be ready.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index ee4e38cf5df4..2bd000e2c269 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4376,7 +4376,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
vc->runner = vcpu;
/* See if the MMU is ready to go */
- if (!kvm->arch.mmu_ready) {
+ if (unlikely(!kvm->arch.mmu_ready)) {
r = kvmhv_setup_mmu(vcpu);
if (r) {
run->exit_reason = KVM_EXIT_FAIL_ENTRY;
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 51/55] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (49 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 50/55] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 52/55] KVM: PPC: Book3S HV P9: Remove most of the vcore logic Nicholas Piggin
` (3 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
cpu_in_guest is set to determine if a CPU needs to be IPI'ed to exit
the guest and notice the need_tlb_flush bit.
This can be implemented as a global per-CPU pointer to the currently
running guest instead of per-guest cpumasks, saving 2 atomics per
entry/exit. P7/8 doesn't require cpu_in_guest, nor does a nested HV
(only the L0 does), so move it to the P9 HV path.
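The essence of the change, as a kernel-style sketch (running_guest and
the function names are illustrative; enter_guest() is hypothetical, the
per-CPU accessors and smp_call_function_single() are the kernel APIs the
patch uses):

  static DEFINE_PER_CPU(struct kvm *, running_guest);

  static int run_guest(struct kvm *kvm, struct kvm_vcpu *vcpu)
  {
          int trap;

          /* plain per-CPU stores replace two atomic cpumask ops */
          __this_cpu_write(running_guest, kvm);
          trap = enter_guest(vcpu);               /* hypothetical */
          __this_cpu_write(running_guest, NULL);

          return trap;
  }

  static void kick_cpu_if_running(struct kvm *kvm, int cpu)
  {
          /* caller has set need_tlb_flush and issued smp_mb() already */
          if (*per_cpu_ptr(&running_guest, cpu) == kvm)
                  smp_call_function_single(cpu, do_nothing, NULL, 1);
  }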
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 -
arch/powerpc/include/asm/kvm_host.h | 1 -
arch/powerpc/kvm/book3s_hv.c | 38 +++++++++++++-----------
3 files changed, 21 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 4b0753e03731..793aa2868c3f 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -44,7 +44,6 @@ struct kvm_nested_guest {
struct mutex tlb_lock; /* serialize page faults and tlbies */
struct kvm_nested_guest *next;
cpumask_t need_tlb_flush;
- cpumask_t cpu_in_guest;
short prev_cpu[NR_CPUS];
u8 radix; /* is this nested guest radix */
};
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 74ee3a5b110e..650e1c0d118c 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -288,7 +288,6 @@ struct kvm_arch {
u32 online_vcores;
atomic_t hpte_mod_interest;
cpumask_t need_tlb_flush;
- cpumask_t cpu_in_guest;
u8 radix;
u8 fwnmi_enabled;
u8 secure_guest;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2bd000e2c269..6f29fa7d77cc 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2989,30 +2989,33 @@ static void kvmppc_release_hwthread(int cpu)
tpaca->kvm_hstate.kvm_split_mode = NULL;
}
+static DEFINE_PER_CPU(struct kvm *, cpu_in_guest);
+
static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
{
struct kvm_nested_guest *nested = vcpu->arch.nested;
- cpumask_t *cpu_in_guest;
int i;
cpu = cpu_first_tlb_thread_sibling(cpu);
- if (nested) {
+ if (nested)
cpumask_set_cpu(cpu, &nested->need_tlb_flush);
- cpu_in_guest = &nested->cpu_in_guest;
- } else {
+ else
cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush);
- cpu_in_guest = &kvm->arch.cpu_in_guest;
- }
/*
- * Make sure setting of bit in need_tlb_flush precedes
- * testing of cpu_in_guest bits. The matching barrier on
- * the other side is the first smp_mb() in kvmppc_run_core().
+ * Make sure setting of bit in need_tlb_flush precedes testing of
+ * cpu_in_guest. The matching barrier on the other side is hwsync
+ * when switching to guest MMU mode, which happens between
+ * cpu_in_guest being set to the guest kvm, and need_tlb_flush bit
+ * being tested.
*/
smp_mb();
for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu);
- i += cpu_tlb_thread_sibling_step())
- if (cpumask_test_cpu(i, cpu_in_guest))
+ i += cpu_tlb_thread_sibling_step()) {
+ struct kvm *running = *per_cpu_ptr(&cpu_in_guest, i);
+
+ if (running == kvm)
smp_call_function_single(i, do_nothing, NULL, 1);
+ }
}
static void do_migrate_away_vcpu(void *arg)
@@ -3080,7 +3083,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc)
{
int cpu;
struct paca_struct *tpaca;
- struct kvm *kvm = vc->kvm;
cpu = vc->pcpu;
if (vcpu) {
@@ -3091,7 +3093,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc)
cpu += vcpu->arch.ptid;
vcpu->cpu = vc->pcpu;
vcpu->arch.thread_cpu = cpu;
- cpumask_set_cpu(cpu, &kvm->arch.cpu_in_guest);
}
tpaca = paca_ptrs[cpu];
tpaca->kvm_hstate.kvm_vcpu = vcpu;
@@ -3809,7 +3810,6 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
kvmppc_release_hwthread(pcpu + i);
if (sip && sip->napped[i])
kvmppc_ipi_thread(pcpu + i);
- cpumask_clear_cpu(pcpu + i, &vc->kvm->arch.cpu_in_guest);
}
spin_unlock(&vc->lock);
@@ -3977,8 +3977,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
}
} else {
+ struct kvm *kvm = vcpu->kvm;
+
kvmppc_xive_push_vcpu(vcpu);
+
+ __this_cpu_write(cpu_in_guest, kvm);
trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
+ __this_cpu_write(cpu_in_guest, NULL);
+
if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
!(vcpu->arch.shregs.msr & MSR_PR)) {
unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -4003,7 +4009,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
}
kvmppc_xive_pull_vcpu(vcpu);
- if (kvm_is_radix(vcpu->kvm))
+ if (kvm_is_radix(kvm))
vcpu->arch.slb_max = 0;
}
@@ -4468,8 +4474,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
powerpc_local_irq_pmu_restore(flags);
- cpumask_clear_cpu(pcpu, &kvm->arch.cpu_in_guest);
-
preempt_enable();
/*
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 52/55] KVM: PPC: Book3S HV P9: Remove most of the vcore logic
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (50 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 51/55] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 53/55] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry Nicholas Piggin
` (2 subsequent siblings)
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
The P9 path always uses one vcpu per vcore, so none of the vcore
locks, stolen time accounting, blocking logic, shared waitq, etc., is required.
Remove most of it.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 147 ++++++++++++++++++++---------------
1 file changed, 85 insertions(+), 62 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 6f29fa7d77cc..f83ae33e875c 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -281,6 +281,8 @@ static void kvmppc_core_start_stolen(struct kvmppc_vcore *vc, u64 tb)
{
unsigned long flags;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
spin_lock_irqsave(&vc->stoltb_lock, flags);
vc->preempt_tb = tb;
spin_unlock_irqrestore(&vc->stoltb_lock, flags);
@@ -290,6 +292,8 @@ static void kvmppc_core_end_stolen(struct kvmppc_vcore *vc, u64 tb)
{
unsigned long flags;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
spin_lock_irqsave(&vc->stoltb_lock, flags);
if (vc->preempt_tb != TB_NIL) {
vc->stolen_tb += tb - vc->preempt_tb;
@@ -302,7 +306,12 @@ static void kvmppc_core_vcpu_load_hv(struct kvm_vcpu *vcpu, int cpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
unsigned long flags;
- u64 now = mftb();
+ u64 now;
+
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return;
+
+ now = mftb();
/*
* We can test vc->runner without taking the vcore lock,
@@ -326,7 +335,12 @@ static void kvmppc_core_vcpu_put_hv(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
unsigned long flags;
- u64 now = mftb();
+ u64 now;
+
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return;
+
+ now = mftb();
if (vc->runner == vcpu && vc->vcore_state >= VCORE_SLEEPING)
kvmppc_core_start_stolen(vc, now);
@@ -678,6 +692,8 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now)
u64 p;
unsigned long flags;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
spin_lock_irqsave(&vc->stoltb_lock, flags);
p = vc->stolen_tb;
if (vc->vcore_state != VCORE_INACTIVE &&
@@ -700,13 +716,19 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
dt = vcpu->arch.dtl_ptr;
vpa = vcpu->arch.vpa.pinned_addr;
now = tb;
- core_stolen = vcore_stolen_time(vc, now);
- stolen = core_stolen - vcpu->arch.stolen_logged;
- vcpu->arch.stolen_logged = core_stolen;
- spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
- stolen += vcpu->arch.busy_stolen;
- vcpu->arch.busy_stolen = 0;
- spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
+
+ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ stolen = 0;
+ } else {
+ core_stolen = vcore_stolen_time(vc, now);
+ stolen = core_stolen - vcpu->arch.stolen_logged;
+ vcpu->arch.stolen_logged = core_stolen;
+ spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
+ stolen += vcpu->arch.busy_stolen;
+ vcpu->arch.busy_stolen = 0;
+ spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
+ }
+
if (!dt || !vpa)
return;
memset(dt, 0, sizeof(struct dtl_entry));
@@ -903,13 +925,14 @@ static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target)
* mode handler is not called but no other threads are in the
* source vcore.
*/
-
- spin_lock(&vcore->lock);
- if (target->arch.state == KVMPPC_VCPU_RUNNABLE &&
- vcore->vcore_state != VCORE_INACTIVE &&
- vcore->runner)
- target = vcore->runner;
- spin_unlock(&vcore->lock);
+ if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
+ spin_lock(&vcore->lock);
+ if (target->arch.state == KVMPPC_VCPU_RUNNABLE &&
+ vcore->vcore_state != VCORE_INACTIVE &&
+ vcore->runner)
+ target = vcore->runner;
+ spin_unlock(&vcore->lock);
+ }
return kvm_vcpu_yield_to(target);
}
@@ -3105,13 +3128,6 @@ static void kvmppc_start_thread(struct kvm_vcpu *vcpu, struct kvmppc_vcore *vc)
kvmppc_ipi_thread(cpu);
}
-/* Old path does this in asm */
-static void kvmppc_stop_thread(struct kvm_vcpu *vcpu)
-{
- vcpu->cpu = -1;
- vcpu->arch.thread_cpu = -1;
-}
-
static void kvmppc_wait_for_nap(int n_threads)
{
int cpu = smp_processor_id();
@@ -3200,6 +3216,8 @@ static void kvmppc_vcore_preempt(struct kvmppc_vcore *vc)
{
struct preempted_vcore_list *lp = this_cpu_ptr(&preempted_vcores);
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
vc->vcore_state = VCORE_PREEMPT;
vc->pcpu = smp_processor_id();
if (vc->num_threads < threads_per_vcore(vc->kvm)) {
@@ -3216,6 +3234,8 @@ static void kvmppc_vcore_end_preempt(struct kvmppc_vcore *vc)
{
struct preempted_vcore_list *lp;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
kvmppc_core_end_stolen(vc, mftb());
if (!list_empty(&vc->preempt_list)) {
lp = &per_cpu(preempted_vcores, vc->pcpu);
@@ -3944,7 +3964,6 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
unsigned long lpcr, u64 *tb)
{
- struct kvmppc_vcore *vc = vcpu->arch.vcore;
u64 next_timer;
int trap;
@@ -3960,9 +3979,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
kvmppc_subcore_enter_guest();
- vc->entry_exit_map = 1;
- vc->in_guest = 1;
-
vcpu_vpa_increment_dispatch(vcpu);
if (kvmhv_on_pseries()) {
@@ -4015,9 +4031,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
- vc->entry_exit_map = 0x101;
- vc->in_guest = 0;
-
kvmppc_subcore_exit_guest();
return trap;
@@ -4083,6 +4096,13 @@ static bool kvmppc_vcpu_woken(struct kvm_vcpu *vcpu)
return false;
}
+static bool kvmppc_vcpu_check_block(struct kvm_vcpu *vcpu)
+{
+ if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu))
+ return true;
+ return false;
+}
+
/*
* Check to see if any of the runnable vcpus on the vcore have pending
* exceptions or are no longer ceded
@@ -4093,7 +4113,7 @@ static int kvmppc_vcore_check_block(struct kvmppc_vcore *vc)
int i;
for_each_runnable_thread(i, vcpu, vc) {
- if (!vcpu->arch.ceded || kvmppc_vcpu_woken(vcpu))
+ if (kvmppc_vcpu_check_block(vcpu))
return 1;
}
@@ -4110,6 +4130,8 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
int do_sleep = 1;
u64 block_ns;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
/* Poll for pending exceptions and ceded state */
cur = start_poll = ktime_get();
if (vc->halt_poll_ns) {
@@ -4375,11 +4397,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.ceded = 0;
vcpu->arch.run_task = current;
vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
- vcpu->arch.busy_preempt = TB_NIL;
vcpu->arch.last_inst = KVM_INST_FETCH_FAILED;
- vc->runnable_threads[0] = vcpu;
- vc->n_runnable = 1;
- vc->runner = vcpu;
/* See if the MMU is ready to go */
if (unlikely(!kvm->arch.mmu_ready)) {
@@ -4397,11 +4415,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
kvmppc_update_vpas(vcpu);
- init_vcore_to_run(vc);
-
preempt_disable();
pcpu = smp_processor_id();
- vc->pcpu = pcpu;
if (kvm_is_radix(kvm))
kvmppc_prepare_radix_vcpu(vcpu, pcpu);
@@ -4430,21 +4445,23 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
goto out;
}
- tb = mftb();
+ if (vcpu->arch.timer_running) {
+ hrtimer_try_to_cancel(&vcpu->arch.dec_timer);
+ vcpu->arch.timer_running = 0;
+ }
- vcpu->arch.stolen_logged = vcore_stolen_time(vc, tb);
- vc->preempt_tb = TB_NIL;
+ tb = mftb();
- kvmppc_clear_host_core(pcpu);
+ vcpu->cpu = pcpu;
+ vcpu->arch.thread_cpu = pcpu;
+ local_paca->kvm_hstate.kvm_vcpu = vcpu;
+ local_paca->kvm_hstate.ptid = 0;
+ local_paca->kvm_hstate.fake_suspend = 0;
- local_paca->kvm_hstate.napping = 0;
- local_paca->kvm_hstate.kvm_split_mode = NULL;
- kvmppc_start_thread(vcpu, vc);
+ vc->pcpu = pcpu; // for kvmppc_create_dtl_entry
kvmppc_create_dtl_entry(vcpu, vc, tb);
- trace_kvm_guest_enter(vcpu);
- vc->vcore_state = VCORE_RUNNING;
- trace_kvmppc_run_core(vc, 0);
+ trace_kvm_guest_enter(vcpu);
guest_enter_irqoff();
@@ -4466,11 +4483,10 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
set_irq_happened(trap);
- kvmppc_set_host_core(pcpu);
-
guest_exit_irqoff();
- kvmppc_stop_thread(vcpu);
+ vcpu->cpu = -1;
+ vcpu->arch.thread_cpu = -1;
powerpc_local_irq_pmu_restore(flags);
@@ -4497,28 +4513,31 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
}
vcpu->arch.ret = r;
- if (is_kvmppc_resume_guest(r) && vcpu->arch.ceded &&
- !kvmppc_vcpu_woken(vcpu)) {
+ if (is_kvmppc_resume_guest(r) && !kvmppc_vcpu_check_block(vcpu)) {
kvmppc_set_timer(vcpu);
- while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) {
+
+ prepare_to_rcuwait(&vcpu->wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
if (signal_pending(current)) {
vcpu->stat.signal_exits++;
run->exit_reason = KVM_EXIT_INTR;
vcpu->arch.ret = -EINTR;
break;
}
- spin_lock(&vc->lock);
- kvmppc_vcore_blocked(vc);
- spin_unlock(&vc->lock);
+
+ if (kvmppc_vcpu_check_block(vcpu))
+ break;
+
+ trace_kvmppc_vcore_blocked(vc, 0);
+ schedule();
+ trace_kvmppc_vcore_blocked(vc, 1);
}
+ finish_rcuwait(&vcpu->wait);
}
vcpu->arch.ceded = 0;
- vc->vcore_state = VCORE_INACTIVE;
- trace_kvmppc_run_core(vc, 1);
-
done:
- kvmppc_remove_runnable(vc, vcpu, tb);
trace_kvmppc_run_vcpu_exit(vcpu);
return vcpu->arch.ret;
@@ -4602,7 +4621,8 @@ static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
kvmppc_save_current_sprs();
- vcpu->arch.waitp = &vcpu->arch.vcore->wait;
+ if (!cpu_has_feature(CPU_FTR_ARCH_300))
+ vcpu->arch.waitp = &vcpu->arch.vcore->wait;
vcpu->arch.pgdir = kvm->mm->pgd;
vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST;
@@ -5064,6 +5084,9 @@ void kvmppc_alloc_host_rm_ops(void)
int cpu, core;
int size;
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return;
+
/* Not the first time here ? */
if (kvmppc_host_rm_ops_hv != NULL)
return;
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 53/55] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (51 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 52/55] KVM: PPC: Book3S HV P9: Remove most of the vcore logic Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 54/55] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 55/55] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Nicholas Piggin
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
This goes further towards removing vcores from the P9 path. Also avoid
the memset in favour of explicitly initialising all fields.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 61 +++++++++++++++++++++---------------
1 file changed, 35 insertions(+), 26 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index f83ae33e875c..f233ff1c18e1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -703,41 +703,30 @@ static u64 vcore_stolen_time(struct kvmppc_vcore *vc, u64 now)
return p;
}
-static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
- struct kvmppc_vcore *vc, u64 tb)
+static void __kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
+ unsigned int pcpu, u64 now,
+ unsigned long stolen)
{
struct dtl_entry *dt;
struct lppaca *vpa;
- unsigned long stolen;
- unsigned long core_stolen;
- u64 now;
- unsigned long flags;
dt = vcpu->arch.dtl_ptr;
vpa = vcpu->arch.vpa.pinned_addr;
- now = tb;
-
- if (cpu_has_feature(CPU_FTR_ARCH_300)) {
- stolen = 0;
- } else {
- core_stolen = vcore_stolen_time(vc, now);
- stolen = core_stolen - vcpu->arch.stolen_logged;
- vcpu->arch.stolen_logged = core_stolen;
- spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
- stolen += vcpu->arch.busy_stolen;
- vcpu->arch.busy_stolen = 0;
- spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
- }
if (!dt || !vpa)
return;
- memset(dt, 0, sizeof(struct dtl_entry));
+
dt->dispatch_reason = 7;
- dt->processor_id = cpu_to_be16(vc->pcpu + vcpu->arch.ptid);
- dt->timebase = cpu_to_be64(now + vc->tb_offset);
+ dt->preempt_reason = 0;
+ dt->processor_id = cpu_to_be16(pcpu + vcpu->arch.ptid);
dt->enqueue_to_dispatch_time = cpu_to_be32(stolen);
+ dt->ready_to_enqueue_time = 0;
+ dt->waiting_to_ready_time = 0;
+ dt->timebase = cpu_to_be64(now);
+ dt->fault_addr = 0;
dt->srr0 = cpu_to_be64(kvmppc_get_pc(vcpu));
dt->srr1 = cpu_to_be64(vcpu->arch.shregs.msr);
+
++dt;
if (dt == vcpu->arch.dtl.pinned_end)
dt = vcpu->arch.dtl.pinned_addr;
@@ -748,6 +737,27 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
vcpu->arch.dtl.dirty = true;
}
+static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
+ struct kvmppc_vcore *vc)
+{
+ unsigned long stolen;
+ unsigned long core_stolen;
+ u64 now;
+ unsigned long flags;
+
+ now = mftb();
+
+ core_stolen = vcore_stolen_time(vc, now);
+ stolen = core_stolen - vcpu->arch.stolen_logged;
+ vcpu->arch.stolen_logged = core_stolen;
+ spin_lock_irqsave(&vcpu->arch.tbacct_lock, flags);
+ stolen += vcpu->arch.busy_stolen;
+ vcpu->arch.busy_stolen = 0;
+ spin_unlock_irqrestore(&vcpu->arch.tbacct_lock, flags);
+
+ __kvmppc_create_dtl_entry(vcpu, vc->pcpu, now + vc->tb_offset, stolen);
+}
+
/* See if there is a doorbell interrupt pending for a vcpu */
static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu)
{
@@ -3730,7 +3740,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
pvc->pcpu = pcpu + thr;
for_each_runnable_thread(i, vcpu, pvc) {
kvmppc_start_thread(vcpu, pvc);
- kvmppc_create_dtl_entry(vcpu, pvc, mftb());
+ kvmppc_create_dtl_entry(vcpu, pvc);
trace_kvm_guest_enter(vcpu);
if (!vcpu->arch.ptid)
thr0_done = true;
@@ -4281,7 +4291,7 @@ static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
if ((vc->vcore_state == VCORE_PIGGYBACK ||
vc->vcore_state == VCORE_RUNNING) &&
!VCORE_IS_EXITING(vc)) {
- kvmppc_create_dtl_entry(vcpu, vc, mftb());
+ kvmppc_create_dtl_entry(vcpu, vc);
kvmppc_start_thread(vcpu, vc);
trace_kvm_guest_enter(vcpu);
} else if (vc->vcore_state == VCORE_SLEEPING) {
@@ -4458,8 +4468,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
local_paca->kvm_hstate.ptid = 0;
local_paca->kvm_hstate.fake_suspend = 0;
- vc->pcpu = pcpu; // for kvmppc_create_dtl_entry
- kvmppc_create_dtl_entry(vcpu, vc, tb);
+ __kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0);
trace_kvm_guest_enter(vcpu);
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 54/55] KVM: PPC: Book3S HV P9: Stop using vc->dpdes
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (52 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 53/55] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 55/55] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Nicholas Piggin
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
The P9 path uses vc->dpdes only for msgsndp / SMT emulation. Keeping it
in use there adds an ordering requirement between vcpu->doorbell_request
and vc->dpdes for no real benefit. Use vcpu->doorbell_request directly
instead.
XXX: verify msgsndp / DPDES emulation works properly.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 18 ++++++++++--------
arch/powerpc/kvm/book3s_hv_builtin.c | 2 ++
arch/powerpc/kvm/book3s_hv_p9_entry.c | 14 ++++++++++----
3 files changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index f233ff1c18e1..b727b2cfad98 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -766,6 +766,8 @@ static bool kvmppc_doorbell_pending(struct kvm_vcpu *vcpu)
if (vcpu->arch.doorbell_request)
return true;
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return false;
/*
* Ensure that the read of vcore->dpdes comes after the read
* of vcpu->doorbell_request. This barrier matches the
@@ -2166,8 +2168,10 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
* either vcore->dpdes or doorbell_request.
* On POWER8, doorbell_request is 0.
*/
- *val = get_reg_val(id, vcpu->arch.vcore->dpdes |
- vcpu->arch.doorbell_request);
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ *val = get_reg_val(id, vcpu->arch.doorbell_request);
+ else
+ *val = get_reg_val(id, vcpu->arch.vcore->dpdes);
break;
case KVM_REG_PPC_VTB:
*val = get_reg_val(id, vcpu->arch.vcore->vtb);
@@ -2404,7 +2408,10 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
vcpu->arch.pspb = set_reg_val(id, *val);
break;
case KVM_REG_PPC_DPDES:
- vcpu->arch.vcore->dpdes = set_reg_val(id, *val);
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ vcpu->arch.doorbell_request = set_reg_val(id, *val) & 1;
+ else
+ vcpu->arch.vcore->dpdes = set_reg_val(id, *val);
break;
case KVM_REG_PPC_VTB:
vcpu->arch.vcore->vtb = set_reg_val(id, *val);
@@ -4440,11 +4447,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
if (!nested) {
kvmppc_core_prepare_to_enter(vcpu);
- if (vcpu->arch.doorbell_request) {
- vc->dpdes = 1;
- smp_wmb();
- vcpu->arch.doorbell_request = 0;
- }
if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
&vcpu->arch.pending_exceptions))
lpcr |= LPCR_MER;
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index a10bf93054ca..3ed90149ed2e 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -660,6 +660,8 @@ void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu)
int ext;
unsigned long lpcr;
+ WARN_ON_ONCE(cpu_has_feature(CPU_FTR_ARCH_300));
+
/* Insert EXTERNAL bit into LPCR at the MER bit position */
ext = (vcpu->arch.pending_exceptions >> BOOK3S_IRQPRIO_EXTERNAL) & 1;
lpcr = mfspr(SPRN_LPCR);
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 338873f90c72..032ca6dfd83c 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -695,6 +695,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
unsigned long host_pidr;
unsigned long host_dawr1;
unsigned long host_dawrx1;
+ unsigned long dpdes;
hdec = time_limit - *tb;
if (hdec < 0)
@@ -757,8 +758,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
if (vc->pcr)
mtspr(SPRN_PCR, vc->pcr | PCR_MASK);
- if (vc->dpdes)
- mtspr(SPRN_DPDES, vc->dpdes);
+ if (vcpu->arch.doorbell_request) {
+ vcpu->arch.doorbell_request = 0;
+ mtspr(SPRN_DPDES, 1);
+ }
if (dawr_enabled()) {
if (vcpu->arch.dawr0 != host_dawr0)
@@ -995,7 +998,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
- vc->dpdes = mfspr(SPRN_DPDES);
+ dpdes = mfspr(SPRN_DPDES);
+ if (dpdes)
+ vcpu->arch.doorbell_request = 1;
+
vc->vtb = mfspr(SPRN_VTB);
dec = mfspr(SPRN_DEC);
@@ -1057,7 +1063,7 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
}
}
- if (vc->dpdes)
+ if (dpdes)
mtspr(SPRN_DPDES, 0);
if (vc->pcr)
mtspr(SPRN_PCR, PCR_MASK);
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH v1 55/55] KVM: PPC: Book3S HV P9: Remove subcore HMI handling
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
` (53 preceding siblings ...)
2021-07-26 3:50 ` [PATCH v1 54/55] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Nicholas Piggin
@ 2021-07-26 3:50 ` Nicholas Piggin
54 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-07-26 3:50 UTC (permalink / raw)
To: kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
On POWER9 and newer, rather than the complex HMI synchronisation and
subcore state, have each thread un-apply the guest TB offset before
calling into the early HMI handler.
This allows the subcore state to be avoided, including the subcore
guest enter / exit calls, which contain an expensive divide that shows
up slightly in profiles.
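For reference, the un-apply/re-apply below leans on mtspr(SPRN_TBU40)
writing only the upper 40 bits of the timebase. A minimal sketch of the
idiom used twice in the patch (the helper name is illustrative only,
not proposed code):

static inline void write_tb_upper40(u64 new_tb)
{
	mtspr(SPRN_TBU40, new_tb);
	/*
	 * The low 24 bits of the TB keep running across the mtspr. If
	 * they wrapped past zero since new_tb was sampled, the TB is
	 * now behind by one low-part period; bump the upper bits by
	 * 1 << 24 and write again.
	 */
	if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
		new_tb += 0x1000000;
		mtspr(SPRN_TBU40, new_tb);
	}
}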
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kvm/book3s_hv.c | 12 +++++-----
arch/powerpc/kvm/book3s_hv_hmi.c | 7 +++++-
arch/powerpc/kvm/book3s_hv_p9_entry.c | 32 ++++++++++++++++++++++++++-
arch/powerpc/kvm/book3s_hv_ras.c | 4 ++++
4 files changed, 46 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b727b2cfad98..3f62ada1a669 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3994,8 +3994,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu->arch.ceded = 0;
- kvmppc_subcore_enter_guest();
-
vcpu_vpa_increment_dispatch(vcpu);
if (kvmhv_on_pseries()) {
@@ -4048,8 +4046,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
vcpu_vpa_increment_dispatch(vcpu);
- kvmppc_subcore_exit_guest();
-
return trap;
}
@@ -6031,9 +6027,11 @@ static int kvmppc_book3s_init_hv(void)
if (r)
return r;
- r = kvm_init_subcore_bitmap();
- if (r)
- return r;
+ if (!cpu_has_feature(CPU_FTR_ARCH_300)) {
+ r = kvm_init_subcore_bitmap();
+ if (r)
+ return r;
+ }
/*
* We need a way of accessing the XICS interrupt controller,
diff --git a/arch/powerpc/kvm/book3s_hv_hmi.c b/arch/powerpc/kvm/book3s_hv_hmi.c
index 9af660476314..1ec50c69678b 100644
--- a/arch/powerpc/kvm/book3s_hv_hmi.c
+++ b/arch/powerpc/kvm/book3s_hv_hmi.c
@@ -20,10 +20,15 @@ void wait_for_subcore_guest_exit(void)
/*
* NULL bitmap pointer indicates that KVM module hasn't
- * been loaded yet and hence no guests are running.
+ * been loaded yet and hence no guests are running, or running
+ * on POWER9 or newer CPU.
+ *
* If no KVM is in use, no need to co-ordinate among threads
* as all of them will always be in host and no one is going
* to modify TB other than the opal hmi handler.
+ *
+ * POWER9 and newer don't need this synchronisation.
+ *
* Hence, just return from here.
*/
if (!local_paca->sibling_subcore_state)
diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
index 032ca6dfd83c..d23e1ef2e3a7 100644
--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
+++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
@@ -3,6 +3,7 @@
#include <linux/kvm_host.h>
#include <asm/asm-prototypes.h>
#include <asm/dbell.h>
+#include <asm/interrupt.h>
#include <asm/kvm_ppc.h>
#include <asm/pmc.h>
#include <asm/ppc-opcode.h>
@@ -927,7 +928,36 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
kvmppc_realmode_machine_check(vcpu);
} else if (unlikely(trap == BOOK3S_INTERRUPT_HMI)) {
- kvmppc_realmode_hmi_handler();
+ /*
+ * Unapply and clear the offset first. That way, if the TB
+ * was fine then no harm done, if it is corrupted then the
+ * HMI resync will bring it back to host mode. This way, we
+ * don't need to actually know whether or not OPAL resynced the
+ * timebase. Although it would be cleaner if we could rely
+ * on that, early POWER9 OPAL did not support the
+ * OPAL_HANDLE_HMI2 call.
+ */
+ if (vc->tb_offset_applied) {
+ u64 new_tb = mftb() - vc->tb_offset_applied;
+ mtspr(SPRN_TBU40, new_tb);
+ if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
+ new_tb += 0x1000000;
+ mtspr(SPRN_TBU40, new_tb);
+ }
+ vc->tb_offset_applied = 0;
+ }
+
+ hmi_exception_realmode(NULL);
+
+ if (vc->tb_offset) {
+ u64 new_tb = mftb() + vc->tb_offset;
+ mtspr(SPRN_TBU40, new_tb);
+ if ((mftb() & 0xffffff) < (new_tb & 0xffffff)) {
+ new_tb += 0x1000000;
+ mtspr(SPRN_TBU40, new_tb);
+ }
+ vc->tb_offset_applied = vc->tb_offset;
+ }
} else if (trap == BOOK3S_INTERRUPT_H_EMUL_ASSIST) {
vcpu->arch.emul_inst = mfspr(SPRN_HEIR);
diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c
index d4bca93b79f6..a49ee9bdab67 100644
--- a/arch/powerpc/kvm/book3s_hv_ras.c
+++ b/arch/powerpc/kvm/book3s_hv_ras.c
@@ -136,6 +136,10 @@ void kvmppc_realmode_machine_check(struct kvm_vcpu *vcpu)
vcpu->arch.mce_evt = mce_evt;
}
+/*
+ * This subcore HMI handling is all only for pre-POWER9 CPUs.
+ */
+
/* Check if dynamic split is in force and return subcore size accordingly. */
static inline int kvmppc_cur_subcore_size(void)
{
--
2.23.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs
2021-07-26 3:50 ` [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Nicholas Piggin
@ 2021-07-26 6:57 ` kernel test robot
2021-07-26 7:01 ` kernel test robot
1 sibling, 0 replies; 79+ messages in thread
From: kernel test robot @ 2021-07-26 6:57 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc
Cc: clang-built-linux, kbuild-all, linuxppc-dev, Nicholas Piggin
[-- Attachment #1: Type: text/plain, Size: 4721 bytes --]
Hi Nicholas,
I love your patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on v5.14-rc3 next-20210723]
[cannot apply to powerpc/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/KVM-PPC-Book3S-HV-P9-entry-exit-optimisations/20210726-115329
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git ff1176468d368232b684f75e82563369208bc371
config: powerpc-randconfig-r022-20210726 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project c63dbd850182797bc4b76124d08e1c320ab2365d)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc cross compiling tool for clang build
# apt-get install binutils-powerpc-linux-gnu
# https://github.com/0day-ci/linux/commit/d173e4690cf13578686dbbce48e1f81e925b96af
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Nicholas-Piggin/KVM-PPC-Book3S-HV-P9-entry-exit-optimisations/20210726-115329
git checkout d173e4690cf13578686dbbce48e1f81e925b96af
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
arch/powerpc/kernel/process.c:612:33: error: no member named 'tm_tfhar' in 'struct thread_struct'
current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
~~~~~~~~~~~~~~~ ^
arch/powerpc/kernel/process.c:613:33: error: no member named 'tm_tfiar' in 'struct thread_struct'
current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
~~~~~~~~~~~~~~~ ^
arch/powerpc/kernel/process.c:614:33: error: no member named 'tm_texasr' in 'struct thread_struct'
current->thread.tm_texasr = mfspr(SPRN_TEXASR);
~~~~~~~~~~~~~~~ ^
>> arch/powerpc/kernel/process.c:596:6: warning: no previous prototype for function 'save_user_regs_kvm' [-Wmissing-prototypes]
void save_user_regs_kvm(void)
^
arch/powerpc/kernel/process.c:596:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void save_user_regs_kvm(void)
^
static
>> arch/powerpc/kernel/process.c:611:16: warning: shift count >= width of type [-Wshift-count-overflow]
if (usermsr & MSR_TM) {
^~~~~~
arch/powerpc/include/asm/reg.h:115:17: note: expanded from macro 'MSR_TM'
#define MSR_TM __MASK(MSR_TM_LG) /* Transactional Mem Available */
^~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/reg.h:66:23: note: expanded from macro '__MASK'
#define __MASK(X) (1UL<<(X))
^ ~~~
arch/powerpc/kernel/process.c:615:47: warning: shift count >= width of type [-Wshift-count-overflow]
current->thread.regs->msr &= ~MSR_TM;
^~~~~~
arch/powerpc/include/asm/reg.h:115:17: note: expanded from macro 'MSR_TM'
#define MSR_TM __MASK(MSR_TM_LG) /* Transactional Mem Available */
^~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/reg.h:66:23: note: expanded from macro '__MASK'
#define __MASK(X) (1UL<<(X))
^ ~~~
3 warnings and 3 errors generated.
vim +/save_user_regs_kvm +596 arch/powerpc/kernel/process.c
595
> 596 void save_user_regs_kvm(void)
597 {
598 unsigned long usermsr;
599
600 if (!current->thread.regs)
601 return;
602
603 usermsr = current->thread.regs->msr;
604
605 if (usermsr & MSR_FP)
606 save_fpu(current);
607
608 if (usermsr & MSR_VEC)
609 save_altivec(current);
610
> 611 if (usermsr & MSR_TM) {
612 current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
613 current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
614 current->thread.tm_texasr = mfspr(SPRN_TEXASR);
615 current->thread.regs->msr &= ~MSR_TM;
616 }
617 }
618 EXPORT_SYMBOL_GPL(save_user_regs_kvm);
619
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 34510 bytes --]
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs
2021-07-26 3:50 ` [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Nicholas Piggin
@ 2021-07-26 7:01 ` kernel test robot
2021-07-26 7:01 ` kernel test robot
1 sibling, 0 replies; 79+ messages in thread
From: kernel test robot @ 2021-07-26 7:01 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc
Cc: clang-built-linux, kbuild-all, linuxppc-dev, Nicholas Piggin
[-- Attachment #1: Type: text/plain, Size: 9816 bytes --]
Hi Nicholas,
I love your patch! Yet something to improve:
[auto build test ERROR on linus/master]
[also build test ERROR on v5.14-rc3 next-20210723]
[cannot apply to powerpc/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/KVM-PPC-Book3S-HV-P9-entry-exit-optimisations/20210726-115329
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git ff1176468d368232b684f75e82563369208bc371
config: powerpc64-randconfig-r024-20210726 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project c63dbd850182797bc4b76124d08e1c320ab2365d)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc64 cross compiling tool for clang build
# apt-get install binutils-powerpc64-linux-gnu
# https://github.com/0day-ci/linux/commit/d173e4690cf13578686dbbce48e1f81e925b96af
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Nicholas-Piggin/KVM-PPC-Book3S-HV-P9-entry-exit-optimisations/20210726-115329
git checkout d173e4690cf13578686dbbce48e1f81e925b96af
# save the attached .config to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross O=build_dir ARCH=powerpc SHELL=/bin/bash arch/powerpc/kernel/
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
arch/powerpc/include/asm/io-defs.h:45:1: error: performing pointer arithmetic on a null pointer has undefined behavior [-Werror,-Wnull-pointer-arithmetic]
DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
__do_##name al; \
^~~~~~~~~~~~~~
<scratch space>:121:1: note: expanded from here
__do_insw
^
arch/powerpc/include/asm/io.h:557:56: note: expanded from macro '__do_insw'
#define __do_insw(p, b, n) readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
~~~~~~~~~~~~~~~~~~~~~^
In file included from arch/powerpc/kernel/process.c:28:
In file included from include/linux/init_task.h:9:
In file included from include/linux/ftrace.h:10:
In file included from include/linux/trace_recursion.h:5:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/powerpc/include/asm/hardirq.h:6:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/powerpc/include/asm/io.h:619:
arch/powerpc/include/asm/io-defs.h:47:1: error: performing pointer arithmetic on a null pointer has undefined behavior [-Werror,-Wnull-pointer-arithmetic]
DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
__do_##name al; \
^~~~~~~~~~~~~~
<scratch space>:123:1: note: expanded from here
__do_insl
^
arch/powerpc/include/asm/io.h:558:56: note: expanded from macro '__do_insl'
#define __do_insl(p, b, n) readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n))
~~~~~~~~~~~~~~~~~~~~~^
In file included from arch/powerpc/kernel/process.c:28:
In file included from include/linux/init_task.h:9:
In file included from include/linux/ftrace.h:10:
In file included from include/linux/trace_recursion.h:5:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/powerpc/include/asm/hardirq.h:6:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/powerpc/include/asm/io.h:619:
arch/powerpc/include/asm/io-defs.h:49:1: error: performing pointer arithmetic on a null pointer has undefined behavior [-Werror,-Wnull-pointer-arithmetic]
DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
__do_##name al; \
^~~~~~~~~~~~~~
<scratch space>:125:1: note: expanded from here
__do_outsb
^
arch/powerpc/include/asm/io.h:559:58: note: expanded from macro '__do_outsb'
#define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
~~~~~~~~~~~~~~~~~~~~~^
In file included from arch/powerpc/kernel/process.c:28:
In file included from include/linux/init_task.h:9:
In file included from include/linux/ftrace.h:10:
In file included from include/linux/trace_recursion.h:5:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/powerpc/include/asm/hardirq.h:6:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/powerpc/include/asm/io.h:619:
arch/powerpc/include/asm/io-defs.h:51:1: error: performing pointer arithmetic on a null pointer has undefined behavior [-Werror,-Wnull-pointer-arithmetic]
DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
__do_##name al; \
^~~~~~~~~~~~~~
<scratch space>:127:1: note: expanded from here
__do_outsw
^
arch/powerpc/include/asm/io.h:560:58: note: expanded from macro '__do_outsw'
#define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
~~~~~~~~~~~~~~~~~~~~~^
In file included from arch/powerpc/kernel/process.c:28:
In file included from include/linux/init_task.h:9:
In file included from include/linux/ftrace.h:10:
In file included from include/linux/trace_recursion.h:5:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/powerpc/include/asm/hardirq.h:6:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/powerpc/include/asm/io.h:619:
arch/powerpc/include/asm/io-defs.h:53:1: error: performing pointer arithmetic on a null pointer has undefined behavior [-Werror,-Wnull-pointer-arithmetic]
DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c),
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/include/asm/io.h:616:3: note: expanded from macro 'DEF_PCI_AC_NORET'
__do_##name al; \
^~~~~~~~~~~~~~
<scratch space>:129:1: note: expanded from here
__do_outsl
^
arch/powerpc/include/asm/io.h:561:58: note: expanded from macro '__do_outsl'
#define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n))
~~~~~~~~~~~~~~~~~~~~~^
>> arch/powerpc/kernel/process.c:612:33: error: no member named 'tm_tfhar' in 'struct thread_struct'
current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
~~~~~~~~~~~~~~~ ^
>> arch/powerpc/kernel/process.c:613:33: error: no member named 'tm_tfiar' in 'struct thread_struct'
current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
~~~~~~~~~~~~~~~ ^
>> arch/powerpc/kernel/process.c:614:33: error: no member named 'tm_texasr' in 'struct thread_struct'
current->thread.tm_texasr = mfspr(SPRN_TEXASR);
~~~~~~~~~~~~~~~ ^
>> arch/powerpc/kernel/process.c:596:6: error: no previous prototype for function 'save_user_regs_kvm' [-Werror,-Wmissing-prototypes]
void save_user_regs_kvm(void)
^
arch/powerpc/kernel/process.c:596:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void save_user_regs_kvm(void)
^
static
16 errors generated.
vim +612 arch/powerpc/kernel/process.c
595
> 596 void save_user_regs_kvm(void)
597 {
598 unsigned long usermsr;
599
600 if (!current->thread.regs)
601 return;
602
603 usermsr = current->thread.regs->msr;
604
605 if (usermsr & MSR_FP)
606 save_fpu(current);
607
608 if (usermsr & MSR_VEC)
609 save_altivec(current);
610
611 if (usermsr & MSR_TM) {
> 612 current->thread.tm_tfhar = mfspr(SPRN_TFHAR);
> 613 current->thread.tm_tfiar = mfspr(SPRN_TFIAR);
> 614 current->thread.tm_texasr = mfspr(SPRN_TEXASR);
615 current->thread.regs->msr &= ~MSR_TM;
616 }
617 }
618 EXPORT_SYMBOL_GPL(save_user_regs_kvm);
619
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 31561 bytes --]
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer
2021-07-26 3:49 ` [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer Nicholas Piggin
@ 2021-08-05 7:22 ` Christophe Leroy
2021-08-06 10:30 ` Nicholas Piggin
0 siblings, 1 reply; 79+ messages in thread
From: Christophe Leroy @ 2021-08-05 7:22 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev
On 26/07/2021 at 05:49, Nicholas Piggin wrote:
> Rather than have KVM look up the host timer and fiddle with the
> irq-work internal details, have the powerpc/time.c code provide a
> function for KVM to re-arm the Linux timer code when exiting a
> guest.
>
> This implementation has an improvement over the existing code of
> marking a decrementer interrupt as soft-pending if a timer has
> expired, rather than setting DEC to a -ve value, which tended to
> cause host timers to take two interrupts (first hdec to exit the
> guest, then the immediate dec).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/time.h | 16 +++-------
> arch/powerpc/kernel/time.c | 52 +++++++++++++++++++++++++++------
> arch/powerpc/kvm/book3s_hv.c | 7 ++---
> 3 files changed, 49 insertions(+), 26 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
> index 69b6be617772..924b2157882f 100644
> --- a/arch/powerpc/include/asm/time.h
> +++ b/arch/powerpc/include/asm/time.h
> @@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low,
> extern void secondary_cpu_time_init(void);
> extern void __init time_init(void);
>
> -#ifdef CONFIG_PPC64
> -static inline unsigned long test_irq_work_pending(void)
> -{
> - unsigned long x;
> -
> - asm volatile("lbz %0,%1(13)"
> - : "=r" (x)
> - : "i" (offsetof(struct paca_struct, irq_work_pending)));
> - return x;
> -}
> -#endif
> -
> DECLARE_PER_CPU(u64, decrementers_next_tb);
>
> static inline u64 timer_get_next_tb(void)
> @@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void)
> return __this_cpu_read(decrementers_next_tb);
> }
>
> +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> +void timer_rearm_host_dec(u64 now);
> +#endif
> +
> /* Convert timebase ticks to nanoseconds */
> unsigned long long tb_to_ns(unsigned long long tb_ticks);
>
> diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
> index 72d872b49167..016828b7401b 100644
> --- a/arch/powerpc/kernel/time.c
> +++ b/arch/powerpc/kernel/time.c
> @@ -499,6 +499,16 @@ EXPORT_SYMBOL(profile_pc);
> * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
> */
> #ifdef CONFIG_PPC64
> +static inline unsigned long test_irq_work_pending(void)
> +{
> + unsigned long x;
> +
> + asm volatile("lbz %0,%1(13)"
> + : "=r" (x)
> + : "i" (offsetof(struct paca_struct, irq_work_pending)));
Can we just use READ_ONCE() instead of hard coding the read ?
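Something like this, perhaps (sketch only, assuming the paca field can
be read directly):

	return READ_ONCE(local_paca->irq_work_pending);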
> + return x;
> +}
> +
> static inline void set_irq_work_pending_flag(void)
> {
> asm volatile("stb %0,%1(13)" : :
> @@ -542,13 +552,44 @@ void arch_irq_work_raise(void)
> preempt_enable();
> }
>
> +static void set_dec_or_work(u64 val)
> +{
> + set_dec(val);
> + /* We may have raced with new irq work */
> + if (unlikely(test_irq_work_pending()))
> + set_dec(1);
> +}
> +
> #else /* CONFIG_IRQ_WORK */
>
> #define test_irq_work_pending() 0
> #define clear_irq_work_pending()
>
> +static void set_dec_or_work(u64 val)
> +{
> + set_dec(val);
> +}
> #endif /* CONFIG_IRQ_WORK */
>
> +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> +void timer_rearm_host_dec(u64 now)
> +{
> + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
> +
> + WARN_ON_ONCE(!arch_irqs_disabled());
> + WARN_ON_ONCE(mfmsr() & MSR_EE);
> +
> + if (now >= *next_tb) {
> + local_paca->irq_happened |= PACA_IRQ_DEC;
> + } else {
> + now = *next_tb - now;
> + if (now <= decrementer_max)
> + set_dec_or_work(now);
> + }
> +}
> +EXPORT_SYMBOL_GPL(timer_rearm_host_dec);
> +#endif
> +
> /*
> * timer_interrupt - gets called when the decrementer overflows,
> * with interrupts disabled.
> @@ -609,10 +650,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
> } else {
> now = *next_tb - now;
> if (now <= decrementer_max)
> - set_dec(now);
> - /* We may have raced with new irq work */
> - if (test_irq_work_pending())
> - set_dec(1);
> + set_dec_or_work(now);
> __this_cpu_inc(irq_stat.timer_irqs_others);
> }
>
> @@ -854,11 +892,7 @@ static int decrementer_set_next_event(unsigned long evt,
> struct clock_event_device *dev)
> {
> __this_cpu_write(decrementers_next_tb, get_tb() + evt);
> - set_dec(evt);
> -
> - /* We may have raced with new irq work */
> - if (test_irq_work_pending())
> - set_dec(1);
> + set_dec_or_work(evt);
>
> return 0;
> }
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 6e6cfb10e9bb..0cef578930f9 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4018,11 +4018,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> vc->entry_exit_map = 0x101;
> vc->in_guest = 0;
>
> - next_timer = timer_get_next_tb();
> - set_dec(next_timer - tb);
> - /* We may have raced with new irq work */
> - if (test_irq_work_pending())
> - set_dec(1);
> + timer_rearm_host_dec(tb);
> +
> mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
>
> kvmhv_load_host_pmu();
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt
2021-07-26 3:49 ` [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt Nicholas Piggin
@ 2021-08-06 1:16 ` Michael Ellerman
2021-08-06 10:25 ` Nicholas Piggin
0 siblings, 1 reply; 79+ messages in thread
From: Michael Ellerman @ 2021-08-06 1:16 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> The softpatch interrupt sets HSRR0 to the faulting instruction +4, so
> it should subtract 4 for the faulting instruction address. Also have it
> emulate and deliver HFAC interrupts correctly, which is important for
> nested HV and facility demand-faulting in future.
The nip being off by 4 sounds bad. But I guess it's not that big a deal
because it's only used for reporting the instruction address?
Would also be good to have some more explanation of why it's OK to
change from illegal to HFAC, which is a guest visible change.
> diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
> index cc90b8b82329..e4fd4a9dee08 100644
> --- a/arch/powerpc/kvm/book3s_hv_tm.c
> +++ b/arch/powerpc/kvm/book3s_hv_tm.c
> @@ -74,19 +74,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
> case PPC_INST_RFEBB:
> if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) {
> /* generate an illegal instruction interrupt */
> + vcpu->arch.regs.nip -= 4;
> kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
> return RESUME_GUEST;
> }
> /* check EBB facility is available */
> if (!(vcpu->arch.hfscr & HFSCR_EBB)) {
> - /* generate an illegal instruction interrupt */
> - kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
> - return RESUME_GUEST;
> + vcpu->arch.regs.nip -= 4;
> + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
> + vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56;
> + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
> + return -1; /* rerun host interrupt handler */
This is EBB not TM. Probably OK to leave it in this patch as long as
it's mentioned in the change log?
> }
> if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) {
> /* generate a facility unavailable interrupt */
> - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
> - ((u64)FSCR_EBB_LG << 56);
> + vcpu->arch.regs.nip -= 4;
> + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
> + vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56;
Same.
cheers
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-07-26 3:49 ` [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Nicholas Piggin
@ 2021-08-06 7:33 ` Madhavan Srinivasan
2021-08-06 10:38 ` Nicholas Piggin
2021-08-06 9:28 ` Athira Rajeev
1 sibling, 1 reply; 79+ messages in thread
From: Madhavan Srinivasan @ 2021-08-06 7:33 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev
On 7/26/21 9:19 AM, Nicholas Piggin wrote:
> It can be useful in simulators (with very constrained environments)
> to allow some PMCs to run from boot so they can be sampled directly
> by a test harness, rather than having to run perf.
>
> A previous change freezes counters at boot by default, so provide
> a boot time option to un-freeze (plus a bit more flexibility).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> .../admin-guide/kernel-parameters.txt | 7 ++++
> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
> 2 files changed, 42 insertions(+)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index bdb22006f713..96b7d0ebaa40 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4089,6 +4089,13 @@
> Override pmtimer IOPort with a hex value.
> e.g. pmtmr=0x508
>
> + pmu= [PPC] Manually enable the PMU.
This is a bit confusing. IIUC, we are manually disabling perf
registration with this option, not the PMU. If this option is used, we
will unfreeze MMCR0_FC (only in HV mode) and not register the perf
subsystem. Since this option is valid only in HV mode, can we call it
kvm_disable_perf or kvm_dis_perf?
> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
> + This option is implemented for Book3S processors.
> + If a number is given, then MMCR1 is set to that number,
> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
> + subsystem is disabled if this option is used.
> +
> pm_debug_messages [SUSPEND,KNL]
> Enable suspend/resume debug messages during boot up.
>
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index 65795cadb475..e7cef4fe17d7 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
> }
>
> #ifdef CONFIG_PPC64
> +static bool pmu_override = false;
> +static unsigned long pmu_override_val;
> +static void do_pmu_override(void *data)
> +{
> + ppc_set_pmu_inuse(1);
> + if (pmu_override_val)
> + mtspr(SPRN_MMCR1, pmu_override_val);
> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
> +}
> +
> static int __init init_ppc64_pmu(void)
> {
> + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) {
> + printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n");
> + on_each_cpu(do_pmu_override, NULL, 1);
> + return 0;
> + }
> +
> /* run through all the pmu drivers one at a time */
> if (!init_power5_pmu())
> return 0;
> @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void)
> return init_generic_compat_pmu();
> }
> early_initcall(init_ppc64_pmu);
> +
> +static int __init pmu_setup(char *str)
> +{
> + unsigned long val;
> +
> + if (!early_cpu_has_feature(CPU_FTR_HVMODE))
> + return 0;
> +
> + pmu_override = true;
> +
> + if (kstrtoul(str, 0, &val))
> + val = 0;
> +
> + pmu_override_val = val;
> +
> + return 1;
> +}
> +__setup("pmu=", pmu_setup);
> +
> #endif
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting
2021-07-26 3:49 ` [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Nicholas Piggin
@ 2021-08-06 7:34 ` Michael Ellerman
2021-08-06 10:32 ` Nicholas Piggin
0 siblings, 1 reply; 79+ messages in thread
From: Michael Ellerman @ 2021-08-06 7:34 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> Revert the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S
> HV: Always save guest pmu for guest capable of nesting").
>
> Nested capable guests running with the earlier commit ("KVM: PPC: Book3S
> HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU
> in-use status of their guests, which means the parent does not need to
> unconditionally save the PMU for nested capable guests.
>
> This will cause the PMU to break for nested guests when running older
> nested hypervisor guests under a kernel with this change. It's unclear
> there's an easy way to avoid that, so this could wait for a release or
> so for the fix to filter into stable kernels.
I'm not sure PMU inside nested guests is getting much use, but I don't
think we can break this quite so casually :)
Especially as the failure mode will be PMU counts that don't match
reality, which is hard to diagnose. It took nearly a year for us to find
the original bug.
I think we need to hold this back for a while.
We could put it under a CONFIG option, and then flip that option to off
at some point in the future.
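Something like this, for example (sketch only; the config symbol is
hypothetical):

	save_pmu = lp->pmcregs_in_use;
	/* Compat: old nested HVs don't report PMU in-use status */
	if (IS_ENABLED(CONFIG_KVM_BOOK3S_HV_NESTED_PMU_WORKAROUND))
		save_pmu |= nesting_enabled(vcpu->kvm);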
cheers
> index e7f8cc04944b..ab89db561c85 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4003,8 +4003,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> vcpu->arch.vpa.dirty = 1;
> save_pmu = lp->pmcregs_in_use;
> }
> - /* Must save pmu if this guest is capable of running nested guests */
> - save_pmu |= nesting_enabled(vcpu->kvm);
>
> kvmhv_save_guest_pmu(vcpu, save_pmu);
> #ifdef CONFIG_PPC_PSERIES
> --
> 2.23.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-07-26 3:49 ` [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Nicholas Piggin
2021-08-06 7:33 ` Madhavan Srinivasan
@ 2021-08-06 9:28 ` Athira Rajeev
2021-08-06 10:42 ` Nicholas Piggin
1 sibling, 1 reply; 79+ messages in thread
From: Athira Rajeev @ 2021-08-06 9:28 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev, kvm-ppc
> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> It can be useful in simulators (with very constrained environments)
> to allow some PMCs to run from boot so they can be sampled directly
> by a test harness, rather than having to run perf.
>
> A previous change freezes counters at boot by default, so provide
> a boot time option to un-freeze (plus a bit more flexibility).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> .../admin-guide/kernel-parameters.txt | 7 ++++
> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
> 2 files changed, 42 insertions(+)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index bdb22006f713..96b7d0ebaa40 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4089,6 +4089,13 @@
> Override pmtimer IOPort with a hex value.
> e.g. pmtmr=0x508
>
> + pmu= [PPC] Manually enable the PMU.
> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
> + This option is implemented for Book3S processors.
> + If a number is given, then MMCR1 is set to that number,
> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
> + subsystem is disabled if this option is used.
> +
> pm_debug_messages [SUSPEND,KNL]
> Enable suspend/resume debug messages during boot up.
>
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index 65795cadb475..e7cef4fe17d7 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
> }
>
> #ifdef CONFIG_PPC64
> +static bool pmu_override = false;
> +static unsigned long pmu_override_val;
> +static void do_pmu_override(void *data)
> +{
> + ppc_set_pmu_inuse(1);
> + if (pmu_override_val)
> + mtspr(SPRN_MMCR1, pmu_override_val);
> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
Hi Nick
Here, we are not doing any validity check for the value used to set MMCR1.
For advanced users, the option to pass a value for MMCR1 is fine. But in other cases, it could result in
an invalid event being used. Do we need to restrict this boot-time option to only PMC5/6?
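Something like the below is what I was thinking of. Only a sketch, and
the PMC5/6-only policy is hypothetical:

	static int __init pmu_setup(char *str)
	{
		unsigned long val;

		if (!early_cpu_has_feature(CPU_FTR_HVMODE))
			return 0;

		pmu_override = true;

		if (kstrtoul(str, 0, &val))
			val = 0;

		/* Hypothetical restriction: keep MMCR1 at 0 so that only the
		 * always-counting PMC5 (instructions) and PMC6 (cycles) run. */
		if (val) {
			pr_warn("pmu=: MMCR1 values not accepted, using 0 (PMC5/6 only)\n");
			val = 0;
		}

		pmu_override_val = val;

		return 1;
	}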
Thanks
Athira
> +}
> +
> static int __init init_ppc64_pmu(void)
> {
> + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) {
> + printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n");
> + on_each_cpu(do_pmu_override, NULL, 1);
> + return 0;
> + }
> +
> /* run through all the pmu drivers one at a time */
> if (!init_power5_pmu())
> return 0;
> @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void)
> return init_generic_compat_pmu();
> }
> early_initcall(init_ppc64_pmu);
> +
> +static int __init pmu_setup(char *str)
> +{
> + unsigned long val;
> +
> + if (!early_cpu_has_feature(CPU_FTR_HVMODE))
> + return 0;
> +
> + pmu_override = true;
> +
> + if (kstrtoul(str, 0, &val))
> + val = 0;
> +
> + pmu_override_val = val;
> +
> + return 1;
> +}
> +__setup("pmu=", pmu_setup);
> +
> #endif
> --
> 2.23.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt
2021-08-06 1:16 ` Michael Ellerman
@ 2021-08-06 10:25 ` Nicholas Piggin
0 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:25 UTC (permalink / raw)
To: kvm-ppc, Michael Ellerman; +Cc: linuxppc-dev
Excerpts from Michael Ellerman's message of August 6, 2021 11:16 am:
> Nicholas Piggin <npiggin@gmail.com> writes:
>> The softpatch interrupt sets HSRR0 to the faulting instruction +4, so
>> it should subtract 4 for the faulting instruction address. Also have it
>> emulate and deliver HFAC interrupts correctly, which is important for
>> nested HV and facility demand-faulting in future.
>
> The nip being off by 4 sounds bad. But I guess it's not that big a deal
> because it's only used for reporting the instruction address?
Yeah currently I think so. It's not that bad of a bug.
>
> Would also be good to have some more explanation of why it's OK to
> change from illegal to HFAC, which is a guest visible change.
Good point. Again, for now it doesn't really matter because the HFAC
handler turns everything (except msgsndp) into a sigill anyway, so it
only becomes important when we start using HFACs. Put that way, I'll
probably split it out.
>
>> diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
>> index cc90b8b82329..e4fd4a9dee08 100644
>> --- a/arch/powerpc/kvm/book3s_hv_tm.c
>> +++ b/arch/powerpc/kvm/book3s_hv_tm.c
>> @@ -74,19 +74,23 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
>> case PPC_INST_RFEBB:
>> if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) {
>> /* generate an illegal instruction interrupt */
>> + vcpu->arch.regs.nip -= 4;
>> kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
>> return RESUME_GUEST;
>> }
>> /* check EBB facility is available */
>> if (!(vcpu->arch.hfscr & HFSCR_EBB)) {
>> - /* generate an illegal instruction interrupt */
>> - kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
>> - return RESUME_GUEST;
>> + vcpu->arch.regs.nip -= 4;
>> + vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
>> + vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56;
>> + vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
>> + return -1; /* rerun host interrupt handler */
>
> This is EBB not TM. Probably OK to leave it in this patch as long as
> it's mentioned in the change log?
It is, but you can get a softpatch interrupt on rfebb changing TM state.
Although I haven't actually tested whether you get a softpatch when
HFSCR disables EBB, or whether the hardware just gives you the HFAC. For
that matter, the same goes for all the other facility tests.
Thanks,
Nick
>
>> }
>> if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) {
>> /* generate a facility unavailable interrupt */
>> - vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
>> - ((u64)FSCR_EBB_LG << 56);
>> + vcpu->arch.regs.nip -= 4;
>> + vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
>> + vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56;
>
> Same.
>
>
> cheers
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer
2021-08-05 7:22 ` Christophe Leroy
@ 2021-08-06 10:30 ` Nicholas Piggin
0 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:30 UTC (permalink / raw)
To: Christophe Leroy, kvm-ppc; +Cc: linuxppc-dev
Excerpts from Christophe Leroy's message of August 5, 2021 5:22 pm:
>
>
> On 26/07/2021 05:49, Nicholas Piggin wrote:
>> Rather than have KVM look up the host timer and fiddle with the
>> irq-work internal details, have the powerpc/time.c code provide a
>> function for KVM to re-arm the Linux timer code when exiting a
>> guest.
>>
>> This implementation has an improvement over the existing code of
>> marking a decrementer interrupt as soft-pending if a timer has
>> expired, rather than setting DEC to a -ve value, which tended to
>> cause host timers to take two interrupts (first hdec to exit the
>> guest, then the immediate dec).
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/powerpc/include/asm/time.h | 16 +++-------
>> arch/powerpc/kernel/time.c | 52 +++++++++++++++++++++++++++------
>> arch/powerpc/kvm/book3s_hv.c | 7 ++---
>> 3 files changed, 49 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
>> index 69b6be617772..924b2157882f 100644
>> --- a/arch/powerpc/include/asm/time.h
>> +++ b/arch/powerpc/include/asm/time.h
>> @@ -99,18 +99,6 @@ extern void div128_by_32(u64 dividend_high, u64 dividend_low,
>> extern void secondary_cpu_time_init(void);
>> extern void __init time_init(void);
>>
>> -#ifdef CONFIG_PPC64
>> -static inline unsigned long test_irq_work_pending(void)
>> -{
>> - unsigned long x;
>> -
>> - asm volatile("lbz %0,%1(13)"
>> - : "=r" (x)
>> - : "i" (offsetof(struct paca_struct, irq_work_pending)));
>> - return x;
>> -}
>> -#endif
>> -
>> DECLARE_PER_CPU(u64, decrementers_next_tb);
>>
>> static inline u64 timer_get_next_tb(void)
>> @@ -118,6 +106,10 @@ static inline u64 timer_get_next_tb(void)
>> return __this_cpu_read(decrementers_next_tb);
>> }
>>
>> +#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>> +void timer_rearm_host_dec(u64 now);
>> +#endif
>> +
>> /* Convert timebase ticks to nanoseconds */
>> unsigned long long tb_to_ns(unsigned long long tb_ticks);
>>
>> diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
>> index 72d872b49167..016828b7401b 100644
>> --- a/arch/powerpc/kernel/time.c
>> +++ b/arch/powerpc/kernel/time.c
>> @@ -499,6 +499,16 @@ EXPORT_SYMBOL(profile_pc);
>> * 64-bit uses a byte in the PACA, 32-bit uses a per-cpu variable...
>> */
>> #ifdef CONFIG_PPC64
>> +static inline unsigned long test_irq_work_pending(void)
>> +{
>> + unsigned long x;
>> +
>> + asm volatile("lbz %0,%1(13)"
>> + : "=r" (x)
>> + : "i" (offsetof(struct paca_struct, irq_work_pending)));
>
> Can we just use READ_ONCE() instead of hard-coding the read?
Good question, probably yes. It probably calls for its own patch series;
e.g., hw_irq.h is full of similar paca accesses.
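i.e., something like this (untested sketch):

	static inline unsigned long test_irq_work_pending(void)
	{
		return READ_ONCE(local_paca->irq_work_pending);
	}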
Thanks,
Nick
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting
2021-08-06 7:34 ` Michael Ellerman
@ 2021-08-06 10:32 ` Nicholas Piggin
0 siblings, 0 replies; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:32 UTC (permalink / raw)
To: kvm-ppc, Michael Ellerman; +Cc: linuxppc-dev
Excerpts from Michael Ellerman's message of August 6, 2021 5:34 pm:
> Nicholas Piggin <npiggin@gmail.com> writes:
>> Revert the workaround added by commit 63279eeb7f93a ("KVM: PPC: Book3S
>> HV: Always save guest pmu for guest capable of nesting").
>>
>> Nested capable guests running with the earlier commit ("KVM: PPC: Book3S
>> HV Nested: Indicate guest PMU in-use in VPA") will now indicate the PMU
>> in-use status of their guests, which means the parent does not need to
>> unconditionally save the PMU for nested capable guests.
>>
>> This will cause the PMU to break for nested guests when running older
>> nested hypervisor guests under a kernel with this change. It's unclear
>> there's an easy way to avoid that, so this could wait for a release or
>> so for the fix to filter into stable kernels.
>
> I'm not sure PMU inside nested guests is getting much use, but I don't
> think we can break this quite so casually :)
>
> Especially as the failure mode will be PMU counts that don't match
> reality, which is hard to diagnose. It took nearly a year for us to find
> the original bug.
>
> I think we need to hold this back for a while.
>
> We could put it under a CONFIG option, and then flip that option to off
> at some point in the future.
Okay that might be a good compromise, I'll do that.
Thanks,
Nick
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-08-06 7:33 ` Madhavan Srinivasan
@ 2021-08-06 10:38 ` Nicholas Piggin
2021-08-11 12:46 ` Madhavan Srinivasan
0 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:38 UTC (permalink / raw)
To: kvm-ppc, Madhavan Srinivasan; +Cc: linuxppc-dev
Excerpts from Madhavan Srinivasan's message of August 6, 2021 5:33 pm:
>
> On 7/26/21 9:19 AM, Nicholas Piggin wrote:
>> It can be useful in simulators (with very constrained environments)
>> to allow some PMCs to run from boot so they can be sampled directly
>> by a test harness, rather than having to run perf.
>>
>> A previous change freezes counters at boot by default, so provide
>> a boot time option to un-freeze (plus a bit more flexibility).
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> .../admin-guide/kernel-parameters.txt | 7 ++++
>> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
>> 2 files changed, 42 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index bdb22006f713..96b7d0ebaa40 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -4089,6 +4089,13 @@
>> Override pmtimer IOPort with a hex value.
>> e.g. pmtmr=0x508
>>
>> + pmu= [PPC] Manually enable the PMU.
>
>
> This is a bit confusing. IIUC, we are manually disabling perf
> registration with this option, not the PMU.
> If this option is used, we will clear MMCR0_FC to unfreeze the
> counters (only in HV mode) and not register the perf subsystem.
With the previous patch, this option un-freezes the PMU
(and disables perf).
> Since this option is valid only in HV mode, can we call it
> kvm_disable_perf or kvm_dis_perf?
It's only disabled for guests because it would require a bit
of logic to set pmcregs_in_use when we register our lppaca. We could
add that if needed, but the intention is for use on BML; it's not
exactly KVM specific.
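The missing bit would be roughly the below, run after the guest
registers its VPA (sketch only, the exact placement is hypothetical):

	/* Tell the hypervisor our PMU registers are live, so it
	 * context-switches them for us (LPAR guest case only). */
	if (pmu_override && firmware_has_feature(FW_FEATURE_LPAR))
		get_lppaca()->pmcregs_in_use = 1;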
I can add the HV restriction to the help text. And we could rename the
option: free_run_pmu= or something?
Thanks,
Nick
>
>
>> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
>> + This option is implemented for Book3S processors.
>> + If a number is given, then MMCR1 is set to that number,
>> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
>> + subsystem is disabled if this option is used.
>> +
>> pm_debug_messages [SUSPEND,KNL]
>> Enable suspend/resume debug messages during boot up.
>>
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index 65795cadb475..e7cef4fe17d7 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
>> }
>>
>> #ifdef CONFIG_PPC64
>> +static bool pmu_override = false;
>> +static unsigned long pmu_override_val;
>> +static void do_pmu_override(void *data)
>> +{
>> + ppc_set_pmu_inuse(1);
>> + if (pmu_override_val)
>> + mtspr(SPRN_MMCR1, pmu_override_val);
>> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
>> +}
>> +
>> static int __init init_ppc64_pmu(void)
>> {
>> + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) {
>> + printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n");
>> + on_each_cpu(do_pmu_override, NULL, 1);
>> + return 0;
>> + }
>> +
>> /* run through all the pmu drivers one at a time */
>> if (!init_power5_pmu())
>> return 0;
>> @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void)
>> return init_generic_compat_pmu();
>> }
>> early_initcall(init_ppc64_pmu);
>> +
>> +static int __init pmu_setup(char *str)
>> +{
>> + unsigned long val;
>> +
>> + if (!early_cpu_has_feature(CPU_FTR_HVMODE))
>> + return 0;
>> +
>> + pmu_override = true;
>> +
>> + if (kstrtoul(str, 0, &val))
>> + val = 0;
>> +
>> + pmu_override_val = val;
>> +
>> + return 1;
>> +}
>> +__setup("pmu=", pmu_setup);
>> +
>> #endif
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-08-06 9:28 ` Athira Rajeev
@ 2021-08-06 10:42 ` Nicholas Piggin
2021-08-11 10:54 ` Athira Rajeev
0 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-06 10:42 UTC (permalink / raw)
To: Athira Rajeev; +Cc: linuxppc-dev, kvm-ppc
Excerpts from Athira Rajeev's message of August 6, 2021 7:28 pm:
>
>
>> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>>
>> It can be useful in simulators (with very constrained environments)
>> to allow some PMCs to run from boot so they can be sampled directly
>> by a test harness, rather than having to run perf.
>>
>> A previous change freezes counters at boot by default, so provide
>> a boot time option to un-freeze (plus a bit more flexibility).
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> .../admin-guide/kernel-parameters.txt | 7 ++++
>> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
>> 2 files changed, 42 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index bdb22006f713..96b7d0ebaa40 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -4089,6 +4089,13 @@
>> Override pmtimer IOPort with a hex value.
>> e.g. pmtmr=0x508
>>
>> + pmu= [PPC] Manually enable the PMU.
>> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
>> + This option is implemented for Book3S processors.
>> + If a number is given, then MMCR1 is set to that number,
>> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
>> + subsystem is disabled if this option is used.
>> +
>> pm_debug_messages [SUSPEND,KNL]
>> Enable suspend/resume debug messages during boot up.
>>
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index 65795cadb475..e7cef4fe17d7 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
>> }
>>
>> #ifdef CONFIG_PPC64
>> +static bool pmu_override = false;
>> +static unsigned long pmu_override_val;
>> +static void do_pmu_override(void *data)
>> +{
>> + ppc_set_pmu_inuse(1);
>> + if (pmu_override_val)
>> + mtspr(SPRN_MMCR1, pmu_override_val);
>> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
>
> Hi Nick
>
> Here, we are not doing any validity check for the value used to set MMCR1.
> For advanced users, the option to pass a value for MMCR1 is fine. But in other cases, it could result in
> an invalid event being used. Do we need to restrict this boot-time option to only PMC5/6?
Depends what would be useful. We don't have to prevent the admin shooting
themselves in the foot with options like this, but if we can make it
safer without making it less useful then that's always a good option.
Thanks,
Nick
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed
2021-07-26 3:50 ` [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Nicholas Piggin
@ 2021-08-06 20:45 ` Fabiano Rosas
0 siblings, 0 replies; 79+ messages in thread
From: Fabiano Rosas @ 2021-08-06 20:45 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> Keep better track of the current SPR value in places where
> they are to be loaded with a new context, to reduce expensive
> mtSPR operations.
>
> -73 cycles (7354) POWER9 virt-mode NULL hcall
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
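For other readers, the pattern applied throughout is just this
(distilled from the diff below):

	/* mtspr is expensive, the compare is not: only write the SPR
	 * when the new context's value differs from what is loaded. */
	if (host_os_sprs->dscr != vcpu->arch.dscr)
		mtspr(SPRN_DSCR, vcpu->arch.dscr);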
> ---
> arch/powerpc/kvm/book3s_hv.c | 64 ++++++++++++++++++++++--------------
> 1 file changed, 39 insertions(+), 25 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0d97138e6fa4..56429b53f4dc 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4009,19 +4009,28 @@ static void switch_pmu_to_host(struct kvm_vcpu *vcpu,
> }
> }
>
> -static void load_spr_state(struct kvm_vcpu *vcpu)
> +static void load_spr_state(struct kvm_vcpu *vcpu,
> + struct p9_host_os_sprs *host_os_sprs)
> {
> - mtspr(SPRN_DSCR, vcpu->arch.dscr);
> - mtspr(SPRN_IAMR, vcpu->arch.iamr);
> - mtspr(SPRN_PSPB, vcpu->arch.pspb);
> - mtspr(SPRN_FSCR, vcpu->arch.fscr);
> mtspr(SPRN_TAR, vcpu->arch.tar);
> mtspr(SPRN_EBBHR, vcpu->arch.ebbhr);
> mtspr(SPRN_EBBRR, vcpu->arch.ebbrr);
> mtspr(SPRN_BESCR, vcpu->arch.bescr);
> - mtspr(SPRN_TIDR, vcpu->arch.tid);
> - mtspr(SPRN_AMR, vcpu->arch.amr);
> - mtspr(SPRN_UAMOR, vcpu->arch.uamor);
> +
> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
> + mtspr(SPRN_TIDR, vcpu->arch.tid);
> + if (host_os_sprs->iamr != vcpu->arch.iamr)
> + mtspr(SPRN_IAMR, vcpu->arch.iamr);
> + if (host_os_sprs->amr != vcpu->arch.amr)
> + mtspr(SPRN_AMR, vcpu->arch.amr);
> + if (vcpu->arch.uamor != 0)
> + mtspr(SPRN_UAMOR, vcpu->arch.uamor);
> + if (host_os_sprs->fscr != vcpu->arch.fscr)
> + mtspr(SPRN_FSCR, vcpu->arch.fscr);
> + if (host_os_sprs->dscr != vcpu->arch.dscr)
> + mtspr(SPRN_DSCR, vcpu->arch.dscr);
> + if (vcpu->arch.pspb != 0)
> + mtspr(SPRN_PSPB, vcpu->arch.pspb);
>
> /*
> * DAR, DSISR, and for nested HV, SPRGs must be set with MSR[RI]
> @@ -4036,28 +4045,31 @@ static void load_spr_state(struct kvm_vcpu *vcpu)
>
> static void store_spr_state(struct kvm_vcpu *vcpu)
> {
> - vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
> -
> - vcpu->arch.iamr = mfspr(SPRN_IAMR);
> - vcpu->arch.pspb = mfspr(SPRN_PSPB);
> - vcpu->arch.fscr = mfspr(SPRN_FSCR);
> vcpu->arch.tar = mfspr(SPRN_TAR);
> vcpu->arch.ebbhr = mfspr(SPRN_EBBHR);
> vcpu->arch.ebbrr = mfspr(SPRN_EBBRR);
> vcpu->arch.bescr = mfspr(SPRN_BESCR);
> - vcpu->arch.tid = mfspr(SPRN_TIDR);
> +
> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
> + vcpu->arch.tid = mfspr(SPRN_TIDR);
> + vcpu->arch.iamr = mfspr(SPRN_IAMR);
> vcpu->arch.amr = mfspr(SPRN_AMR);
> vcpu->arch.uamor = mfspr(SPRN_UAMOR);
> + vcpu->arch.fscr = mfspr(SPRN_FSCR);
> vcpu->arch.dscr = mfspr(SPRN_DSCR);
> + vcpu->arch.pspb = mfspr(SPRN_PSPB);
> +
> + vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
> }
>
> static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
> {
> - host_os_sprs->dscr = mfspr(SPRN_DSCR);
> - host_os_sprs->tidr = mfspr(SPRN_TIDR);
> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
> + host_os_sprs->tidr = mfspr(SPRN_TIDR);
> host_os_sprs->iamr = mfspr(SPRN_IAMR);
> host_os_sprs->amr = mfspr(SPRN_AMR);
> host_os_sprs->fscr = mfspr(SPRN_FSCR);
> + host_os_sprs->dscr = mfspr(SPRN_DSCR);
> }
>
> /* vcpu guest regs must already be saved */
> @@ -4066,18 +4078,20 @@ static void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
> {
> mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
>
> - mtspr(SPRN_PSPB, 0);
> - mtspr(SPRN_UAMOR, 0);
> -
> - mtspr(SPRN_DSCR, host_os_sprs->dscr);
> - mtspr(SPRN_TIDR, host_os_sprs->tidr);
> - mtspr(SPRN_IAMR, host_os_sprs->iamr);
> -
> + if (!cpu_has_feature(CPU_FTR_ARCH_31))
> + mtspr(SPRN_TIDR, host_os_sprs->tidr);
> + if (host_os_sprs->iamr != vcpu->arch.iamr)
> + mtspr(SPRN_IAMR, host_os_sprs->iamr);
> + if (vcpu->arch.uamor != 0)
> + mtspr(SPRN_UAMOR, 0);
> if (host_os_sprs->amr != vcpu->arch.amr)
> mtspr(SPRN_AMR, host_os_sprs->amr);
> -
> if (host_os_sprs->fscr != vcpu->arch.fscr)
> mtspr(SPRN_FSCR, host_os_sprs->fscr);
> + if (host_os_sprs->dscr != vcpu->arch.dscr)
> + mtspr(SPRN_DSCR, host_os_sprs->dscr);
> + if (vcpu->arch.pspb != 0)
> + mtspr(SPRN_PSPB, 0);
>
> /* Save guest CTRL register, set runlatch to 1 */
> if (!(vcpu->arch.ctrl & 1))
> @@ -4169,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> #endif
> mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
>
> - load_spr_state(vcpu);
> + load_spr_state(vcpu, &host_os_sprs);
>
> if (kvmhv_on_pseries()) {
> /*
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around
2021-07-26 3:50 ` [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around Nicholas Piggin
@ 2021-08-06 20:46 ` Fabiano Rosas
0 siblings, 0 replies; 79+ messages in thread
From: Fabiano Rosas @ 2021-08-06 20:46 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> This juggles SPR switching on the entry and exit sides to be more
> symmetric, which makes the next refactoring patch possible with no
> functional change.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
> ---
> arch/powerpc/kvm/book3s_hv.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 56429b53f4dc..c2c72875fca9 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4175,7 +4175,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> msr = mfmsr(); /* TM restore can update msr */
> }
>
> - switch_pmu_to_guest(vcpu, &host_os_sprs);
> + load_spr_state(vcpu, &host_os_sprs);
>
> load_fp_state(&vcpu->arch.fp);
> #ifdef CONFIG_ALTIVEC
> @@ -4183,7 +4183,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> #endif
> mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
>
> - load_spr_state(vcpu, &host_os_sprs);
> + switch_pmu_to_guest(vcpu, &host_os_sprs);
>
> if (kvmhv_on_pseries()) {
> /*
> @@ -4283,6 +4283,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> vcpu->arch.slb_max = 0;
> }
>
> + switch_pmu_to_host(vcpu, &host_os_sprs);
> +
> store_spr_state(vcpu);
>
> store_fp_state(&vcpu->arch.fp);
> @@ -4297,8 +4299,6 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>
> vcpu_vpa_increment_dispatch(vcpu);
>
> - switch_pmu_to_host(vcpu, &host_os_sprs);
> -
> timer_rearm_host_dec(*tb);
>
> restore_p9_host_os_sprs(vcpu, &host_os_sprs);
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions
2021-07-26 3:50 ` [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Nicholas Piggin
@ 2021-08-06 20:49 ` Fabiano Rosas
0 siblings, 0 replies; 79+ messages in thread
From: Fabiano Rosas @ 2021-08-06 20:49 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> This should be no functional difference but makes the caller easier
> to read.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
> ---
> arch/powerpc/kvm/book3s_hv.c | 65 +++++++++++++++++++++++-------------
> 1 file changed, 41 insertions(+), 24 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index c2c72875fca9..45211458ac05 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4062,6 +4062,44 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
> vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
> }
>
> +/* Returns true if current MSR and/or guest MSR may have changed */
> +static bool load_vcpu_state(struct kvm_vcpu *vcpu,
> + struct p9_host_os_sprs *host_os_sprs)
> +{
> + bool ret = false;
> +
> + if (cpu_has_feature(CPU_FTR_TM) ||
> + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
> + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
> + ret = true;
> + }
> +
> + load_spr_state(vcpu, host_os_sprs);
> +
> + load_fp_state(&vcpu->arch.fp);
> +#ifdef CONFIG_ALTIVEC
> + load_vr_state(&vcpu->arch.vr);
> +#endif
> + mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
> +
> + return ret;
> +}
> +
> +static void store_vcpu_state(struct kvm_vcpu *vcpu)
> +{
> + store_spr_state(vcpu);
> +
> + store_fp_state(&vcpu->arch.fp);
> +#ifdef CONFIG_ALTIVEC
> + store_vr_state(&vcpu->arch.vr);
> +#endif
> + vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
> +
> + if (cpu_has_feature(CPU_FTR_TM) ||
> + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
> + kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
> +}
> +
> static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
> {
> if (!cpu_has_feature(CPU_FTR_ARCH_31))
> @@ -4169,19 +4207,8 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>
> vcpu_vpa_increment_dispatch(vcpu);
>
> - if (cpu_has_feature(CPU_FTR_TM) ||
> - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) {
> - kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
> - msr = mfmsr(); /* TM restore can update msr */
> - }
> -
> - load_spr_state(vcpu, &host_os_sprs);
> -
> - load_fp_state(&vcpu->arch.fp);
> -#ifdef CONFIG_ALTIVEC
> - load_vr_state(&vcpu->arch.vr);
> -#endif
> - mtspr(SPRN_VRSAVE, vcpu->arch.vrsave);
> + if (unlikely(load_vcpu_state(vcpu, &host_os_sprs)))
> + msr = mfmsr(); /* MSR may have been updated */
>
> switch_pmu_to_guest(vcpu, &host_os_sprs);
>
> @@ -4285,17 +4312,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>
> switch_pmu_to_host(vcpu, &host_os_sprs);
>
> - store_spr_state(vcpu);
> -
> - store_fp_state(&vcpu->arch.fp);
> -#ifdef CONFIG_ALTIVEC
> - store_vr_state(&vcpu->arch.vr);
> -#endif
> - vcpu->arch.vrsave = mfspr(SPRN_VRSAVE);
> -
> - if (cpu_has_feature(CPU_FTR_TM) ||
> - cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
> - kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
> + store_vcpu_state(vcpu);
>
> vcpu_vpa_increment_dispatch(vcpu);
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase
2021-07-26 3:50 ` [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Nicholas Piggin
@ 2021-08-07 23:17 ` Michael Ellerman
0 siblings, 0 replies; 79+ messages in thread
From: Michael Ellerman @ 2021-08-07 23:17 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev, Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> Change dec_expires to be relative to the guest timebase, and allow
> it to be moved into low level P9 guest entry functions, to improve
> SPR access scheduling.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/kvm_book3s.h | 6 +++
> arch/powerpc/include/asm/kvm_host.h | 2 +-
> arch/powerpc/kvm/book3s_hv.c | 58 +++++++++++++------------
> arch/powerpc/kvm/book3s_hv_nested.c | 3 ++
> arch/powerpc/kvm/book3s_hv_p9_entry.c | 10 ++++-
> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 ------
> 6 files changed, 49 insertions(+), 44 deletions(-)
My p8 is hitting an oops running guests, and bisect points to this. Not
obvious how the change relates to the oops, but maybe you can see it.
cheers
[ 716.042962][T16989] Kernel attempted to read user page (0) - exploit attempt? (uid: 0)
[ 716.043020][T16989] BUG: Kernel NULL pointer dereference on read at 0x00000000
[ 716.043028][T16989] Faulting instruction address: 0xc00000000001e1a8
[ 716.043037][T16989] Oops: Kernel access of bad area, sig: 11 [#1]
[ 716.043043][T16989] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
[ 716.043052][T16989] Modules linked in: xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfnetlink ip6table_filter ip6_tables iptable_filter tun bridge stp llc fuse kvm_hv kvm binfmt_misc squashfs mlx4_ib ib_uverbs dm_multipath scsi_dh_rdac ib_core scsi_dh_alua mlx4_en sr_mod cdrom lpfc sg mlx4_core bnx2x crc_t10dif crct10dif_generic scsi_transport_fc mdio vmx_crypto gf128mul crct10dif_vpmsum crct10dif_common leds_powernv powernv_rng led_class crc32c_vpmsum rng_core powernv_op_panel sunrpc ip_tables x_tables autofs4
[ 716.043128][T16989] CPU: 56 PID: 16989 Comm: qemu-system-ppc Not tainted 5.14.0-rc4-02329-g9bdd37071243 #1
[ 716.043137][T16989] NIP: c00000000001e1a8 LR: c00000000001e154 CTR: c00000000016ebb0
[ 716.043144][T16989] REGS: c0000009f1a93480 TRAP: 0300 Not tainted (5.14.0-rc4-02329-g9bdd37071243)
[ 716.043150][T16989] MSR: 9000000002803033 <SF,HV,VEC,VSX,FP,ME,IR,DR,RI,LE> CR: 42442444 XER: 20000000
[ 716.043167][T16989] CFAR: c00000000000cd0c DAR: 0000000000000000 DSISR: 40000000 IRQMASK: 3
[ 716.043167][T16989] GPR00: c00000000001eab8 c0000009f1a93720 c000000002459f00 c0000009c0730270
[ 716.043167][T16989] GPR04: 00000000000001f0 0000000000000000 0000000022442448 c0000009c072ec80
[ 716.043167][T16989] GPR08: 00000000000000c2 0000000044000000 9000000002803033 0000000000000001
[ 716.043167][T16989] GPR12: 0000000000002200 c000000ffffec600 00007fff955f4410 0000000000000000
[ 716.043167][T16989] GPR16: 00007fff96280000 00007fff955f0320 00007fff8ee8ebe0 00007fff8e660028
[ 716.043167][T16989] GPR20: c000000803807400 c000000858b243bc 000000000000000a c000000002496eb8
[ 716.043167][T16989] GPR24: c000000801123650 c0000009c0730250 c0000009c072ec80 0000000002802000
[ 716.043167][T16989] GPR28: 0000000000800000 0000000002802000 0000000000800000 c0000009f1a93e80
[ 716.043236][T16989] NIP [c00000000001e1a8] restore_math+0x208/0x310
[ 716.043247][T16989] LR [c00000000001e154] restore_math+0x1b4/0x310
[ 716.043254][T16989] Call Trace:
[ 716.043257][T16989] [c0000009f1a93720] [0000000022442448] 0x22442448 (unreliable)
[ 716.043267][T16989] [c0000009f1a93780] [c00000000001eab8] __switch_to+0x228/0x2f0
[ 716.043274][T16989] [c0000009f1a937e0] [c000000000f7949c] __schedule+0x40c/0xf10
[ 716.043283][T16989] [c0000009f1a938b0] [c000000000f7a034] schedule+0x94/0x170
[ 716.043291][T16989] [c0000009f1a938e0] [c00800000b8e4474] kvmppc_wait_for_exec+0xdc/0xf8 [kvm_hv]
[ 716.043307][T16989] [c0000009f1a93960] [c00800000b8eeb18] kvmppc_vcpu_run_hv+0x900/0x10f0 [kvm_hv]
[ 716.043319][T16989] [c0000009f1a93a10] [c00800000b76355c] kvmppc_vcpu_run+0x34/0x48 [kvm]
[ 716.043340][T16989] [c0000009f1a93a30] [c00800000b75f188] kvm_arch_vcpu_ioctl_run+0x340/0x450 [kvm]
[ 716.043359][T16989] [c0000009f1a93ac0] [c00800000b74d470] kvm_vcpu_ioctl+0x328/0x8f8 [kvm]
[ 716.043378][T16989] [c0000009f1a93ca0] [c0000000004fe9d4] sys_ioctl+0x6b4/0x13b0
[ 716.043386][T16989] [c0000009f1a93db0] [c00000000002f918] system_call_exception+0x168/0x290
[ 716.043394][T16989] [c0000009f1a93e10] [c00000000000c864] system_call_common+0xf4/0x258
[ 716.043402][T16989] --- interrupt: c00 at 0x7fff954af010
[ 716.043407][T16989] NIP: 00007fff954af010 LR: 0000000116243430 CTR: 0000000000000000
[ 716.043413][T16989] REGS: c0000009f1a93e80 TRAP: 0c00 Not tainted (5.14.0-rc4-02329-g9bdd37071243)
[ 716.043419][T16989] MSR: 900000000000d033 <SF,HV,EE,PR,ME,IR,DR,RI,LE> CR: 22444442 XER: 00000000
[ 716.043434][T16989] IRQMASK: 0
[ 716.043434][T16989] GPR00: 0000000000000036 00007fff8ee8dc30 00007fff955a7100 000000000000000f
[ 716.043434][T16989] GPR04: 000000002000ae80 0000000000000000 00000000000004fb 0000000000000000
[ 716.043434][T16989] GPR08: 000000000000000f 0000000000000000 0000000000000000 0000000000000000
[ 716.043434][T16989] GPR12: 0000000000000000 00007fff8ee96290 00007fff955f4410 0000000000000000
[ 716.043434][T16989] GPR16: 00007fff96280000 00007fff955f0320 00007fff8ee8ebe0 00007fff8e660028
[ 716.043434][T16989] GPR20: 0000000000000000 0000000000000000 000000011689b0d0 000000002000ae80
[ 716.043434][T16989] GPR24: 00007fff8ffa00ae 0000000000000000 00007fff8ee8f290 00007fff8ffb0010
[ 716.043434][T16989] GPR28: 0000000116e010e0 00007fff8ffb0010 0000000000000000 000000002000ae80
[ 716.043498][T16989] NIP [00007fff954af010] 0x7fff954af010
[ 716.043503][T16989] LR [0000000116243430] 0x116243430
[ 716.043507][T16989] --- interrupt: c00
[ 716.043511][T16989] Instruction dump:
[ 716.043517][T16989] fb610038 67db0200 9907185a 4182005c 7c0802a6 7f63db78 f8010070 4bffeeed
[ 716.043529][T16989] 2c3e0000 408200d4 547ddb78 0082812b <eb000000> 387a1860 7fdcf378 7f7edb78
[ 716.043543][T16989] ---[ end trace b02ece1d913ff866 ]---
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C
2021-07-26 3:49 ` [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Nicholas Piggin
@ 2021-08-09 3:03 ` Athira Rajeev
2021-08-13 4:24 ` Nicholas Piggin
0 siblings, 1 reply; 79+ messages in thread
From: Athira Rajeev @ 2021-08-09 3:03 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev, kvm-ppc
> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> Implement the P9 path PMU save/restore code in C, and remove the
> POWER9/10 code from the P7/8 path assembly.
>
> -449 cycles (8533) POWER9 virt-mode NULL hcall
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/asm-prototypes.h | 5 -
> arch/powerpc/kvm/book3s_hv.c | 205 ++++++++++++++++++++--
> arch/powerpc/kvm/book3s_hv_interrupts.S | 13 +-
> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 43 +----
> 4 files changed, 200 insertions(+), 66 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
> index 222823861a67..41b8a1e1144a 100644
> --- a/arch/powerpc/include/asm/asm-prototypes.h
> +++ b/arch/powerpc/include/asm/asm-prototypes.h
> @@ -141,11 +141,6 @@ static inline void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 msr,
> bool preserve_nv) { }
> #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
>
> -void kvmhv_save_host_pmu(void);
> -void kvmhv_load_host_pmu(void);
> -void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use);
> -void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu);
> -
> void kvmppc_p9_enter_guest(struct kvm_vcpu *vcpu);
>
> long kvmppc_h_set_dabr(struct kvm_vcpu *vcpu, unsigned long dabr);
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 2eef708c4354..d20b579ddcdf 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -3735,6 +3735,188 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
> trace_kvmppc_run_core(vc, 1);
> }
>
> +/*
> + * Privileged (non-hypervisor) host registers to save.
> + */
> +struct p9_host_os_sprs {
> + unsigned long dscr;
> + unsigned long tidr;
> + unsigned long iamr;
> + unsigned long amr;
> + unsigned long fscr;
> +
> + unsigned int pmc1;
> + unsigned int pmc2;
> + unsigned int pmc3;
> + unsigned int pmc4;
> + unsigned int pmc5;
> + unsigned int pmc6;
> + unsigned long mmcr0;
> + unsigned long mmcr1;
> + unsigned long mmcr2;
> + unsigned long mmcr3;
> + unsigned long mmcra;
> + unsigned long siar;
> + unsigned long sier1;
> + unsigned long sier2;
> + unsigned long sier3;
> + unsigned long sdar;
> +};
> +
> +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
> +{
> + if (!(mmcr0 & MMCR0_FC))
> + goto do_freeze;
> + if (mmcra & MMCRA_SAMPLE_ENABLE)
> + goto do_freeze;
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + if (!(mmcr0 & MMCR0_PMCCEXT))
> + goto do_freeze;
> + if (!(mmcra & MMCRA_BHRB_DISABLE))
> + goto do_freeze;
> + }
> + return;
> +
> +do_freeze:
> + mmcr0 = MMCR0_FC;
> + mmcra = 0;
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + mmcr0 |= MMCR0_PMCCEXT;
> + mmcra = MMCRA_BHRB_DISABLE;
> + }
> +
> + mtspr(SPRN_MMCR0, mmcr0);
> + mtspr(SPRN_MMCRA, mmcra);
> + isync();
> +}
> +
Hi Nick,
After freezing the PMU, do we need to clear “pmcregs_in_use” as well?
Also, can’t we unconditionally do the MMCR0/MMCRA freeze settings in here? Do we need the if conditions for FC/PMCCEXT/BHRB?
Thanks
Athira
> +static void save_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
> +{
> + if (ppc_get_pmu_inuse()) {
> + /*
> + * It might be better to put PMU handling (at least for the
> + * host) in the perf subsystem because it knows more about what
> + * is being used.
> + */
> +
> + /* POWER9, POWER10 do not implement HPMC or SPMC */
> +
> + host_os_sprs->mmcr0 = mfspr(SPRN_MMCR0);
> + host_os_sprs->mmcra = mfspr(SPRN_MMCRA);
> +
> + freeze_pmu(host_os_sprs->mmcr0, host_os_sprs->mmcra);
> +
> + host_os_sprs->pmc1 = mfspr(SPRN_PMC1);
> + host_os_sprs->pmc2 = mfspr(SPRN_PMC2);
> + host_os_sprs->pmc3 = mfspr(SPRN_PMC3);
> + host_os_sprs->pmc4 = mfspr(SPRN_PMC4);
> + host_os_sprs->pmc5 = mfspr(SPRN_PMC5);
> + host_os_sprs->pmc6 = mfspr(SPRN_PMC6);
> + host_os_sprs->mmcr1 = mfspr(SPRN_MMCR1);
> + host_os_sprs->mmcr2 = mfspr(SPRN_MMCR2);
> + host_os_sprs->sdar = mfspr(SPRN_SDAR);
> + host_os_sprs->siar = mfspr(SPRN_SIAR);
> + host_os_sprs->sier1 = mfspr(SPRN_SIER);
> +
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + host_os_sprs->mmcr3 = mfspr(SPRN_MMCR3);
> + host_os_sprs->sier2 = mfspr(SPRN_SIER2);
> + host_os_sprs->sier3 = mfspr(SPRN_SIER3);
> + }
> + }
> +}
> +
> +static void load_p9_guest_pmu(struct kvm_vcpu *vcpu)
> +{
> + mtspr(SPRN_PMC1, vcpu->arch.pmc[0]);
> + mtspr(SPRN_PMC2, vcpu->arch.pmc[1]);
> + mtspr(SPRN_PMC3, vcpu->arch.pmc[2]);
> + mtspr(SPRN_PMC4, vcpu->arch.pmc[3]);
> + mtspr(SPRN_PMC5, vcpu->arch.pmc[4]);
> + mtspr(SPRN_PMC6, vcpu->arch.pmc[5]);
> + mtspr(SPRN_MMCR1, vcpu->arch.mmcr[1]);
> + mtspr(SPRN_MMCR2, vcpu->arch.mmcr[2]);
> + mtspr(SPRN_SDAR, vcpu->arch.sdar);
> + mtspr(SPRN_SIAR, vcpu->arch.siar);
> + mtspr(SPRN_SIER, vcpu->arch.sier[0]);
> +
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + mtspr(SPRN_MMCR3, vcpu->arch.mmcr[3]);
> + mtspr(SPRN_SIER2, vcpu->arch.sier[1]);
> + mtspr(SPRN_SIER3, vcpu->arch.sier[2]);
> + }
> +
> + /* Set MMCRA then MMCR0 last */
> + mtspr(SPRN_MMCRA, vcpu->arch.mmcra);
> + mtspr(SPRN_MMCR0, vcpu->arch.mmcr[0]);
> + /* No isync necessary because we're starting counters */
> +}
> +
> +static void save_p9_guest_pmu(struct kvm_vcpu *vcpu)
> +{
> + struct lppaca *lp;
> + int save_pmu = 1;
> +
> + lp = vcpu->arch.vpa.pinned_addr;
> + if (lp)
> + save_pmu = lp->pmcregs_in_use;
> +
> + if (save_pmu) {
> + vcpu->arch.mmcr[0] = mfspr(SPRN_MMCR0);
> + vcpu->arch.mmcra = mfspr(SPRN_MMCRA);
> +
> + freeze_pmu(vcpu->arch.mmcr[0], vcpu->arch.mmcra);
> +
> + vcpu->arch.pmc[0] = mfspr(SPRN_PMC1);
> + vcpu->arch.pmc[1] = mfspr(SPRN_PMC2);
> + vcpu->arch.pmc[2] = mfspr(SPRN_PMC3);
> + vcpu->arch.pmc[3] = mfspr(SPRN_PMC4);
> + vcpu->arch.pmc[4] = mfspr(SPRN_PMC5);
> + vcpu->arch.pmc[5] = mfspr(SPRN_PMC6);
> + vcpu->arch.mmcr[1] = mfspr(SPRN_MMCR1);
> + vcpu->arch.mmcr[2] = mfspr(SPRN_MMCR2);
> + vcpu->arch.sdar = mfspr(SPRN_SDAR);
> + vcpu->arch.siar = mfspr(SPRN_SIAR);
> + vcpu->arch.sier[0] = mfspr(SPRN_SIER);
> +
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + vcpu->arch.mmcr[3] = mfspr(SPRN_MMCR3);
> + vcpu->arch.sier[1] = mfspr(SPRN_SIER2);
> + vcpu->arch.sier[2] = mfspr(SPRN_SIER3);
> + }
> + } else {
> + freeze_pmu(mfspr(SPRN_MMCR0), mfspr(SPRN_MMCRA));
> + }
> +}
> +
> +static void load_p9_host_pmu(struct p9_host_os_sprs *host_os_sprs)
> +{
> + if (ppc_get_pmu_inuse()) {
> + mtspr(SPRN_PMC1, host_os_sprs->pmc1);
> + mtspr(SPRN_PMC2, host_os_sprs->pmc2);
> + mtspr(SPRN_PMC3, host_os_sprs->pmc3);
> + mtspr(SPRN_PMC4, host_os_sprs->pmc4);
> + mtspr(SPRN_PMC5, host_os_sprs->pmc5);
> + mtspr(SPRN_PMC6, host_os_sprs->pmc6);
> + mtspr(SPRN_MMCR1, host_os_sprs->mmcr1);
> + mtspr(SPRN_MMCR2, host_os_sprs->mmcr2);
> + mtspr(SPRN_SDAR, host_os_sprs->sdar);
> + mtspr(SPRN_SIAR, host_os_sprs->siar);
> + mtspr(SPRN_SIER, host_os_sprs->sier1);
> +
> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
> + mtspr(SPRN_MMCR3, host_os_sprs->mmcr3);
> + mtspr(SPRN_SIER2, host_os_sprs->sier2);
> + mtspr(SPRN_SIER3, host_os_sprs->sier3);
> + }
> +
> + /* Set MMCRA then MMCR0 last */
> + mtspr(SPRN_MMCRA, host_os_sprs->mmcra);
> + mtspr(SPRN_MMCR0, host_os_sprs->mmcr0);
> + isync();
> + }
> +}
> +
> static void load_spr_state(struct kvm_vcpu *vcpu)
> {
> mtspr(SPRN_DSCR, vcpu->arch.dscr);
> @@ -3777,17 +3959,6 @@ static void store_spr_state(struct kvm_vcpu *vcpu)
> vcpu->arch.dscr = mfspr(SPRN_DSCR);
> }
>
> -/*
> - * Privileged (non-hypervisor) host registers to save.
> - */
> -struct p9_host_os_sprs {
> - unsigned long dscr;
> - unsigned long tidr;
> - unsigned long iamr;
> - unsigned long amr;
> - unsigned long fscr;
> -};
> -
> static void save_p9_host_os_sprs(struct p9_host_os_sprs *host_os_sprs)
> {
> host_os_sprs->dscr = mfspr(SPRN_DSCR);
> @@ -3835,7 +4006,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> struct p9_host_os_sprs host_os_sprs;
> s64 dec;
> u64 tb, next_timer;
> - int trap, save_pmu;
> + int trap;
>
> WARN_ON_ONCE(vcpu->arch.ceded);
>
> @@ -3848,7 +4019,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>
> save_p9_host_os_sprs(&host_os_sprs);
>
> - kvmhv_save_host_pmu(); /* saves it to PACA kvm_hstate */
> + save_p9_host_pmu(&host_os_sprs);
>
> kvmppc_subcore_enter_guest();
>
> @@ -3878,7 +4049,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> barrier();
> }
> #endif
> - kvmhv_load_guest_pmu(vcpu);
> + load_p9_guest_pmu(vcpu);
>
> msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
> load_fp_state(&vcpu->arch.fp);
> @@ -4000,16 +4171,14 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
> cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
> kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
>
> - save_pmu = 1;
> if (vcpu->arch.vpa.pinned_addr) {
> struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
> u32 yield_count = be32_to_cpu(lp->yield_count) + 1;
> lp->yield_count = cpu_to_be32(yield_count);
> vcpu->arch.vpa.dirty = 1;
> - save_pmu = lp->pmcregs_in_use;
> }
>
> - kvmhv_save_guest_pmu(vcpu, save_pmu);
> + save_p9_guest_pmu(vcpu);
> #ifdef CONFIG_PPC_PSERIES
> if (kvmhv_on_pseries()) {
> barrier();
> @@ -4025,7 +4194,7 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
>
> mtspr(SPRN_SPRG_VDSO_WRITE, local_paca->sprg_vdso);
>
> - kvmhv_load_host_pmu();
> + load_p9_host_pmu(&host_os_sprs);
>
> kvmppc_subcore_exit_guest();
>
> diff --git a/arch/powerpc/kvm/book3s_hv_interrupts.S b/arch/powerpc/kvm/book3s_hv_interrupts.S
> index 4444f83cb133..59d89e4b154a 100644
> --- a/arch/powerpc/kvm/book3s_hv_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_hv_interrupts.S
> @@ -104,7 +104,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
> mtlr r0
> blr
>
> -_GLOBAL(kvmhv_save_host_pmu)
> +/*
> + * void kvmhv_save_host_pmu(void)
> + */
> +kvmhv_save_host_pmu:
> BEGIN_FTR_SECTION
> /* Work around P8 PMAE bug */
> li r3, -1
> @@ -138,14 +141,6 @@ BEGIN_FTR_SECTION
> std r8, HSTATE_MMCR2(r13)
> std r9, HSTATE_SIER(r13)
> END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> -BEGIN_FTR_SECTION
> - mfspr r5, SPRN_MMCR3
> - mfspr r6, SPRN_SIER2
> - mfspr r7, SPRN_SIER3
> - std r5, HSTATE_MMCR3(r13)
> - std r6, HSTATE_SIER2(r13)
> - std r7, HSTATE_SIER3(r13)
> -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
> mfspr r3, SPRN_PMC1
> mfspr r5, SPRN_PMC2
> mfspr r6, SPRN_PMC3
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 9021052f1579..551ce223b40c 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -2738,10 +2738,11 @@ kvmppc_msr_interrupt:
> blr
>
> /*
> + * void kvmhv_load_guest_pmu(struct kvm_vcpu *vcpu)
> + *
> * Load up guest PMU state. R3 points to the vcpu struct.
> */
> -_GLOBAL(kvmhv_load_guest_pmu)
> -EXPORT_SYMBOL_GPL(kvmhv_load_guest_pmu)
> +kvmhv_load_guest_pmu:
> mr r4, r3
> mflr r0
> li r3, 1
> @@ -2775,27 +2776,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_PMAO_BUG)
> mtspr SPRN_MMCRA, r6
> mtspr SPRN_SIAR, r7
> mtspr SPRN_SDAR, r8
> -BEGIN_FTR_SECTION
> - ld r5, VCPU_MMCR + 24(r4)
> - ld r6, VCPU_SIER + 8(r4)
> - ld r7, VCPU_SIER + 16(r4)
> - mtspr SPRN_MMCR3, r5
> - mtspr SPRN_SIER2, r6
> - mtspr SPRN_SIER3, r7
> -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
> BEGIN_FTR_SECTION
> ld r5, VCPU_MMCR + 16(r4)
> ld r6, VCPU_SIER(r4)
> mtspr SPRN_MMCR2, r5
> mtspr SPRN_SIER, r6
> -BEGIN_FTR_SECTION_NESTED(96)
> lwz r7, VCPU_PMC + 24(r4)
> lwz r8, VCPU_PMC + 28(r4)
> ld r9, VCPU_MMCRS(r4)
> mtspr SPRN_SPMC1, r7
> mtspr SPRN_SPMC2, r8
> mtspr SPRN_MMCRS, r9
> -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96)
> END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> mtspr SPRN_MMCR0, r3
> isync
> @@ -2803,10 +2794,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> blr
>
> /*
> + * void kvmhv_load_host_pmu(void)
> + *
> * Reload host PMU state saved in the PACA by kvmhv_save_host_pmu.
> */
> -_GLOBAL(kvmhv_load_host_pmu)
> -EXPORT_SYMBOL_GPL(kvmhv_load_host_pmu)
> +kvmhv_load_host_pmu:
> mflr r0
> lbz r4, PACA_PMCINUSE(r13) /* is the host using the PMU? */
> cmpwi r4, 0
> @@ -2844,25 +2836,18 @@ BEGIN_FTR_SECTION
> mtspr SPRN_MMCR2, r8
> mtspr SPRN_SIER, r9
> END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> -BEGIN_FTR_SECTION
> - ld r5, HSTATE_MMCR3(r13)
> - ld r6, HSTATE_SIER2(r13)
> - ld r7, HSTATE_SIER3(r13)
> - mtspr SPRN_MMCR3, r5
> - mtspr SPRN_SIER2, r6
> - mtspr SPRN_SIER3, r7
> -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
> mtspr SPRN_MMCR0, r3
> isync
> mtlr r0
> 23: blr
>
> /*
> + * void kvmhv_save_guest_pmu(struct kvm_vcpu *vcpu, bool pmu_in_use)
> + *
> * Save guest PMU state into the vcpu struct.
> * r3 = vcpu, r4 = full save flag (PMU in use flag set in VPA)
> */
> -_GLOBAL(kvmhv_save_guest_pmu)
> -EXPORT_SYMBOL_GPL(kvmhv_save_guest_pmu)
> +kvmhv_save_guest_pmu:
> mr r9, r3
> mr r8, r4
> BEGIN_FTR_SECTION
> @@ -2911,14 +2896,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> BEGIN_FTR_SECTION
> std r10, VCPU_MMCR + 16(r9)
> END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> -BEGIN_FTR_SECTION
> - mfspr r5, SPRN_MMCR3
> - mfspr r6, SPRN_SIER2
> - mfspr r7, SPRN_SIER3
> - std r5, VCPU_MMCR + 24(r9)
> - std r6, VCPU_SIER + 8(r9)
> - std r7, VCPU_SIER + 16(r9)
> -END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
> std r7, VCPU_SIAR(r9)
> std r8, VCPU_SDAR(r9)
> mfspr r3, SPRN_PMC1
> @@ -2936,7 +2913,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
> BEGIN_FTR_SECTION
> mfspr r5, SPRN_SIER
> std r5, VCPU_SIER(r9)
> -BEGIN_FTR_SECTION_NESTED(96)
> mfspr r6, SPRN_SPMC1
> mfspr r7, SPRN_SPMC2
> mfspr r8, SPRN_MMCRS
> @@ -2945,7 +2921,6 @@ BEGIN_FTR_SECTION_NESTED(96)
> std r8, VCPU_MMCRS(r9)
> lis r4, 0x8000
> mtspr SPRN_MMCRS, r4
> -END_FTR_SECTION_NESTED(CPU_FTR_ARCH_300, 0, 96)
> END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> 22: blr
>
> --
> 2.23.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-08-06 10:42 ` Nicholas Piggin
@ 2021-08-11 10:54 ` Athira Rajeev
0 siblings, 0 replies; 79+ messages in thread
From: Athira Rajeev @ 2021-08-11 10:54 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev, kvm-ppc
> On 06-Aug-2021, at 4:12 PM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> Excerpts from Athira Rajeev's message of August 6, 2021 7:28 pm:
>>
>>
>>> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>>>
>>> It can be useful in simulators (with very constrained environments)
>>> to allow some PMCs to run from boot so they can be sampled directly
>>> by a test harness, rather than having to run perf.
>>>
>>> A previous change freezes counters at boot by default, so provide
>>> a boot time option to un-freeze (plus a bit more flexibility).
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>> .../admin-guide/kernel-parameters.txt | 7 ++++
>>> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
>>> 2 files changed, 42 insertions(+)
>>>
>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>>> index bdb22006f713..96b7d0ebaa40 100644
>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>> @@ -4089,6 +4089,13 @@
>>> Override pmtimer IOPort with a hex value.
>>> e.g. pmtmr=0x508
>>>
>>> + pmu= [PPC] Manually enable the PMU.
>>> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
>>> + This option is implemented for Book3S processors.
>>> + If a number is given, then MMCR1 is set to that number,
>>> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
>>> + subsystem is disabled if this option is used.
>>> +
>>> pm_debug_messages [SUSPEND,KNL]
>>> Enable suspend/resume debug messages during boot up.
>>>
>>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>>> index 65795cadb475..e7cef4fe17d7 100644
>>> --- a/arch/powerpc/perf/core-book3s.c
>>> +++ b/arch/powerpc/perf/core-book3s.c
>>> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
>>> }
>>>
>>> #ifdef CONFIG_PPC64
>>> +static bool pmu_override = false;
>>> +static unsigned long pmu_override_val;
>>> +static void do_pmu_override(void *data)
>>> +{
>>> + ppc_set_pmu_inuse(1);
>>> + if (pmu_override_val)
>>> + mtspr(SPRN_MMCR1, pmu_override_val);
>>> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
>>
>> Hi Nick
>>
>> Here, we are not doing any validity check for the value used to set MMCR1.
>> For advanced users, the option to pass a value for MMCR1 is fine. But in other cases, it could result in
>> an invalid event being used. Do we need to restrict this boot-time option to only PMC5/6?
>
> Depends what would be useful. We don't have to prevent the admin shooting
> themselves in the foot with options like this, but if we can make it
> safer without making it less useful then that's always a good option.
Hi Nick
I checked back on my comment, and it will be difficult to add/maintain a validity check for MMCR1 considering the different platforms that we have.
We can go ahead with the present approach you have in this patch. The changes look good to me.
Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
>
> Thanks,
> Nick
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option
2021-08-06 10:38 ` Nicholas Piggin
@ 2021-08-11 12:46 ` Madhavan Srinivasan
0 siblings, 0 replies; 79+ messages in thread
From: Madhavan Srinivasan @ 2021-08-11 12:46 UTC (permalink / raw)
To: Nicholas Piggin, kvm-ppc; +Cc: linuxppc-dev
On 8/6/21 4:08 PM, Nicholas Piggin wrote:
> Excerpts from Madhavan Srinivasan's message of August 6, 2021 5:33 pm:
>> On 7/26/21 9:19 AM, Nicholas Piggin wrote:
>>> It can be useful in simulators (with very constrained environments)
>>> to allow some PMCs to run from boot so they can be sampled directly
>>> by a test harness, rather than having to run perf.
>>>
>>> A previous change freezes counters at boot by default, so provide
>>> a boot time option to un-freeze (plus a bit more flexibility).
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>> .../admin-guide/kernel-parameters.txt | 7 ++++
>>> arch/powerpc/perf/core-book3s.c | 35 +++++++++++++++++++
>>> 2 files changed, 42 insertions(+)
>>>
>>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>>> index bdb22006f713..96b7d0ebaa40 100644
>>> --- a/Documentation/admin-guide/kernel-parameters.txt
>>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>>> @@ -4089,6 +4089,13 @@
>>> Override pmtimer IOPort with a hex value.
>>> e.g. pmtmr=0x508
>>>
>>> + pmu= [PPC] Manually enable the PMU.
>>
>> This is a bit confusing. IIUC, we are manually disabling perf
>> registration with this option, not the PMU.
>> If this option is used, we will clear MMCR0_FC to unfreeze the
>> counters (only in HV mode) and not register the perf subsystem.
> With the previous patch, this option un-freezes the PMU
> (and disables perf).
>
>> Since this option is valid only in HV mode, can we call it
>> kvm_disable_perf or kvm_dis_perf?
> It's only disabled for guests because it would require a bit
> of logic to set pmcregs_in_use when we register our lppaca. We could
> add that if needed, but the intention is for use on BML; it's not
> exactly KVM specific.
>
> I can add the HV restriction to the help text. And we could rename the
> option: free_run_pmu= or something?
Yeah, having a different name will be better. I am not sure
whether we should say “[PPC] Manually enable the PMU”,
because IIUC, if we don't provide this option, the PMU and perf are
enabled anyway. But the rest looks good to me.
Maddy
>
> Thanks,
> Nick
>
>>
>>> + Enable the PMU by setting MMCR0 to 0 (clear FC bit).
>>> + This option is implemented for Book3S processors.
>>> + If a number is given, then MMCR1 is set to that number,
>>> + otherwise (e.g., 'pmu=on'), it is left 0. The perf
>>> + subsystem is disabled if this option is used.
>>> +
>>> pm_debug_messages [SUSPEND,KNL]
>>> Enable suspend/resume debug messages during boot up.
>>>
>>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>>> index 65795cadb475..e7cef4fe17d7 100644
>>> --- a/arch/powerpc/perf/core-book3s.c
>>> +++ b/arch/powerpc/perf/core-book3s.c
>>> @@ -2428,8 +2428,24 @@ int register_power_pmu(struct power_pmu *pmu)
>>> }
>>>
>>> #ifdef CONFIG_PPC64
>>> +static bool pmu_override = false;
>>> +static unsigned long pmu_override_val;
>>> +static void do_pmu_override(void *data)
>>> +{
>>> + ppc_set_pmu_inuse(1);
>>> + if (pmu_override_val)
>>> + mtspr(SPRN_MMCR1, pmu_override_val);
>>> + mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC);
>>> +}
>>> +
>>> static int __init init_ppc64_pmu(void)
>>> {
>>> + if (cpu_has_feature(CPU_FTR_HVMODE) && pmu_override) {
>>> + printk(KERN_WARNING "perf: disabling perf due to pmu= command line option.\n");
>>> + on_each_cpu(do_pmu_override, NULL, 1);
>>> + return 0;
>>> + }
>>> +
>>> /* run through all the pmu drivers one at a time */
>>> if (!init_power5_pmu())
>>> return 0;
>>> @@ -2451,4 +2467,23 @@ static int __init init_ppc64_pmu(void)
>>> return init_generic_compat_pmu();
>>> }
>>> early_initcall(init_ppc64_pmu);
>>> +
>>> +static int __init pmu_setup(char *str)
>>> +{
>>> + unsigned long val;
>>> +
>>> + if (!early_cpu_has_feature(CPU_FTR_HVMODE))
>>> + return 0;
>>> +
>>> + pmu_override = true;
>>> +
>>> + if (kstrtoul(str, 0, &val))
>>> + val = 0;
>>> +
>>> + pmu_override_val = val;
>>> +
>>> + return 1;
>>> +}
>>> +__setup("pmu=", pmu_setup);
>>> +
>>> #endif
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C
2021-08-09 3:03 ` Athira Rajeev
@ 2021-08-13 4:24 ` Nicholas Piggin
2021-08-14 7:12 ` Athira Rajeev
0 siblings, 1 reply; 79+ messages in thread
From: Nicholas Piggin @ 2021-08-13 4:24 UTC (permalink / raw)
To: Athira Rajeev; +Cc: linuxppc-dev, kvm-ppc
Excerpts from Athira Rajeev's message of August 9, 2021 1:03 pm:
>
>
>> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>> +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
>> +{
>> + if (!(mmcr0 & MMCR0_FC))
>> + goto do_freeze;
>> + if (mmcra & MMCRA_SAMPLE_ENABLE)
>> + goto do_freeze;
>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> + if (!(mmcr0 & MMCR0_PMCCEXT))
>> + goto do_freeze;
>> + if (!(mmcra & MMCRA_BHRB_DISABLE))
>> + goto do_freeze;
>> + }
>> + return;
>> +
>> +do_freeze:
>> + mmcr0 = MMCR0_FC;
>> + mmcra = 0;
>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> + mmcr0 |= MMCR0_PMCCEXT;
>> + mmcra = MMCRA_BHRB_DISABLE;
>> + }
>> +
>> + mtspr(SPRN_MMCR0, mmcr0);
>> + mtspr(SPRN_MMCRA, mmcra);
>> + isync();
>> +}
>> +
> Hi Nick,
>
> After freezing the PMU, do we need to clear “pmcregs_in_use” as well?
Not until we save the values out of the registers. pmcregs_in_use = 0
means our hypervisor is free to clear our PMU register contents.
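i.e., the ordering on the save path is roughly (a sketch; the
pseries-only #ifdef from the patch is elided):

	save_p9_guest_pmu(vcpu);	/* freeze, then read the registers out */
	if (kvmhv_on_pseries()) {
		barrier();
		/* only now may our hypervisor clobber the PMU registers */
		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
		barrier();
	}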
> Also can’t we unconditionally do the MMCR0/MMCRA/ freeze settings in here ? do we need the if conditions for FC/PMCCEXT/BHRB ?
I think it's possible, but the advantage is pretty minimal. I would prefer to
set them the way perf does for now. If we can move this code into perf/
it should become easier for you to tweak things.
Thanks,
Nick
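To make the ordering concrete, here is a minimal sketch of a PMU
switch-out path consistent with the above; save_guest_pmu() is an
illustrative placeholder for the register save step, not a function
from this series:

static void guest_pmu_switch_out(struct kvm_vcpu *vcpu)
{
	unsigned long mmcr0 = mfspr(SPRN_MMCR0);
	unsigned long mmcra = mfspr(SPRN_MMCRA);

	/* Stop the counters before their values are read out. */
	freeze_pmu(mmcr0, mmcra);

	/* Illustrative: copy the MMCRx/PMCx SPRs into vcpu state. */
	save_guest_pmu(vcpu);

	/*
	 * Only after the save may pmcregs_in_use be cleared; once it is
	 * 0, the hypervisor is free to clobber the PMU registers.
	 */
	ppc_set_pmu_inuse(0);
}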
* Re: [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C
2021-08-13 4:24 ` Nicholas Piggin
@ 2021-08-14 7:12 ` Athira Rajeev
0 siblings, 0 replies; 79+ messages in thread
From: Athira Rajeev @ 2021-08-14 7:12 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev, kvm-ppc
> On 13-Aug-2021, at 9:54 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> Excerpts from Athira Rajeev's message of August 9, 2021 1:03 pm:
>>
>>
>>> On 26-Jul-2021, at 9:19 AM, Nicholas Piggin <npiggin@gmail.com> wrote:
>
>
>>> +static void freeze_pmu(unsigned long mmcr0, unsigned long mmcra)
>>> +{
>>> + if (!(mmcr0 & MMCR0_FC))
>>> + goto do_freeze;
>>> + if (mmcra & MMCRA_SAMPLE_ENABLE)
>>> + goto do_freeze;
>>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>>> + if (!(mmcr0 & MMCR0_PMCCEXT))
>>> + goto do_freeze;
>>> + if (!(mmcra & MMCRA_BHRB_DISABLE))
>>> + goto do_freeze;
>>> + }
>>> + return;
>>> +
>>> +do_freeze:
>>> + mmcr0 = MMCR0_FC;
>>> + mmcra = 0;
>>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>>> + mmcr0 |= MMCR0_PMCCEXT;
>>> + mmcra = MMCRA_BHRB_DISABLE;
>>> + }
>>> +
>>> + mtspr(SPRN_MMCR0, mmcr0);
>>> + mtspr(SPRN_MMCRA, mmcra);
>>> + isync();
>>> +}
>>> +
>> Hi Nick,
>>
>> After freezing the PMU, do we need to clear “pmcregs_in_use” as well?
>
> Not until we save the values out of the registers. pmcregs_in_use = 0
> means our hypervisor is free to clear our PMU register contents.
>
>> Also can’t we unconditionally do the MMCR0/MMCRA freeze settings in here? Do we need the if conditions for FC/PMCCEXT/BHRB?
>
> I think it's possible, but the advantage is pretty minimal. I would prefer to
> set them the way perf does for now.
Sure, Nick.
The other changes look good to me.
Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Thanks
Athira
> If we can move this code into perf/
> it should become easier for you to tweak things.
>
> Thanks,
> Nick
Thread overview: 79+ messages
2021-07-26 3:49 [PATCH v1 00/55] KVM: PPC: Book3S HV P9: entry/exit optimisations Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 01/55] KVM: PPC: Book3S HV: Remove TM emulation from POWER7/8 path Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 02/55] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt Nicholas Piggin
2021-08-06 1:16 ` Michael Ellerman
2021-08-06 10:25 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 03/55] KVM: PPC: Book3S HV: Sanitise vcpu registers in nested path Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 04/55] KVM: PPC: Book3S HV: Stop forwarding all HFUs to L1 Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 05/55] KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest SPRs are live Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 06/55] powerpc/64s: Remove WORT SPR from POWER9/10 Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 07/55] KMV: PPC: Book3S HV P9: Use set_dec to set decrementer to host Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 08/55] KVM: PPC: Book3S HV P9: Use host timer accounting to avoid decrementer read Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 09/55] KVM: PPC: Book3S HV P9: Use large decrementer for HDEC Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 10/55] KVM: PPC: Book3S HV P9: Reduce mftb per guest entry/exit Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 11/55] powerpc/time: add API for KVM to re-arm the host timer/decrementer Nicholas Piggin
2021-08-05 7:22 ` Christophe Leroy
2021-08-06 10:30 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 12/55] KVM: PPC: Book3S HV: POWER10 enable HAIL when running radix guests Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 13/55] powerpc/64s: Keep AMOR SPR a constant ~0 at runtime Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 14/55] KVM: PPC: Book3S HV: Don't always save PMU for guest capable of nesting Nicholas Piggin
2021-08-06 7:34 ` Michael Ellerman
2021-08-06 10:32 ` Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 15/55] powerpc/64s: Always set PMU control registers to frozen/disabled when not in use Nicholas Piggin
2021-07-26 3:49 ` [PATCH v1 16/55] powerpc/64s: Implement PMU override command line option Nicholas Piggin
2021-08-06 7:33 ` Madhavan Srinivasan
2021-08-06 10:38 ` Nicholas Piggin
2021-08-11 12:46 ` Madhavan Srinivasan
2021-08-06 9:28 ` Athira Rajeev
2021-08-06 10:42 ` Nicholas Piggin
2021-08-11 10:54 ` Athira Rajeev
2021-07-26 3:49 ` [PATCH v1 17/55] KVM: PPC: Book3S HV P9: Implement PMU save/restore in C Nicholas Piggin
2021-08-09 3:03 ` Athira Rajeev
2021-08-13 4:24 ` Nicholas Piggin
2021-08-14 7:12 ` Athira Rajeev
2021-07-26 3:49 ` [PATCH v1 18/55] KVM: PPC: Book3S HV P9: Factor PMU save/load into context switch functions Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 19/55] KVM: PPC: Book3S HV P9: Demand fault PMU SPRs when marked not inuse Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 20/55] KVM: PPC: Book3S HV P9: Factor out yield_count increment Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 21/55] KVM: PPC: Book3S HV: CTRL SPR does not require read-modify-write Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 22/55] KVM: PPC: Book3S HV P9: Move SPRG restore to restore_p9_host_os_sprs Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 23/55] KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs Nicholas Piggin
2021-07-26 6:57 ` kernel test robot
2021-07-26 6:57 ` kernel test robot
2021-07-26 7:01 ` kernel test robot
2021-07-26 7:01 ` kernel test robot
2021-07-26 3:50 ` [PATCH v1 24/55] KVM: PPC: Book3S HV P9: Improve mtmsrd scheduling by delaying MSR[EE] disable Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 25/55] KVM: PPC: Book3S HV P9: Add kvmppc_stop_thread to match kvmppc_start_thread Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 26/55] KVM: PPC: Book3S HV: Change dec_expires to be relative to guest timebase Nicholas Piggin
2021-08-07 23:17 ` Michael Ellerman
2021-07-26 3:50 ` [PATCH v1 27/55] KVM: PPC: Book3S HV P9: Move TB updates Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 28/55] KVM: PPC: Book3S HV P9: Optimise timebase reads Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 29/55] KVM: PPC: Book3S HV P9: Avoid SPR scoreboard stalls Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 30/55] KVM: PPC: Book3S HV P9: Only execute mtSPR if the value changed Nicholas Piggin
2021-08-06 20:45 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 31/55] KVM: PPC: Book3S HV P9: Juggle SPR switching around Nicholas Piggin
2021-08-06 20:46 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 32/55] KVM: PPC: Book3S HV P9: Move vcpu register save/restore into functions Nicholas Piggin
2021-08-06 20:49 ` Fabiano Rosas
2021-07-26 3:50 ` [PATCH v1 33/55] KVM: PPC: Book3S HV P9: Move host OS save/restore functions to built-in Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 34/55] KVM: PPC: Book3S HV P9: Move nested guest entry into its own function Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 35/55] KVM: PPC: Book3S HV P9: Move remaining SPR and MSR access into low level entry Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 36/55] KVM: PPC: Book3S HV P9: Implement TM fastpath for guest entry/exit Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 37/55] KVM: PPC: Book3S HV P9: Switch PMU to guest as late as possible Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 38/55] KVM: PPC: Book3S HV P9: Restrict DSISR canary workaround to processors that require it Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 39/55] KVM: PPC: Book3S HV P9: More SPR speed improvements Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 40/55] KVM: PPC: Book3S HV P9: Demand fault EBB facility registers Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 41/55] KVM: PPC: Book3S HV P9: Demand fault TM " Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 42/55] KVM: PPC: Book3S HV P9: Use Linux SPR save/restore to manage some host SPRs Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 43/55] KVM: PPC: Book3S HV P9: Comment and fix MMU context switching code Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 44/55] KVM: PPC: Book3S HV P9: Test dawr_enabled() before saving host DAWR SPRs Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 45/55] KVM: PPC: Book3S HV P9: Don't restore PSSCR if not needed Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 46/55] KVM: PPC: Book3S HV P9: Avoid tlbsync sequence on radix guest exit Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 47/55] KVM: PPC: Book3S HV Nested: Avoid extra mftb() in nested entry Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 48/55] KVM: PPC: Book3S HV P9: Improve mfmsr performance on entry Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 49/55] KVM: PPC: Book3S HV P9: Optimise hash guest SLB saving Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 50/55] KVM: PPC: Book3S HV P9: Add unlikely annotation for !mmu_ready Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 51/55] KVM: PPC: Book3S HV P9: Avoid cpu_in_guest atomics on entry and exit Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 52/55] KVM: PPC: Book3S HV P9: Remove most of the vcore logic Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 53/55] KVM: PPC: Book3S HV P9: Tidy kvmppc_create_dtl_entry Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 54/55] KVM: PPC: Book3S HV P9: Stop using vc->dpdes Nicholas Piggin
2021-07-26 3:50 ` [PATCH v1 55/55] KVM: PPC: Book3S HV P9: Remove subcore HMI handling Nicholas Piggin