* [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
@ 2018-01-11 10:11 ` wei.guo.simon
  0 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

These days, many OS distributions make use of transactional
memory functionality. On PowerPC, HV KVM supports TM, but PR KVM
does not.

The driver for transactional memory support in PR KVM is the
OpenStack Continuous Integration testing: it runs HV (hypervisor)
KVM as level 1 and then runs PR KVM as level 2 on top of that.

This patch set adds transactional memory support to PR KVM.

Test cases performed:
linux/tools/testing/selftests/powerpc/tm/tm-syscall
linux/tools/testing/selftests/powerpc/tm/tm-fork
linux/tools/testing/selftests/powerpc/tm/tm-vmx-unavail
linux/tools/testing/selftests/powerpc/tm/tm-tmspr
linux/tools/testing/selftests/powerpc/tm/tm-signal-msr-resv
linux/tools/testing/selftests/powerpc/math/vsx_preempt
linux/tools/testing/selftests/powerpc/math/fpu_signal
linux/tools/testing/selftests/powerpc/math/vmx_preempt
linux/tools/testing/selftests/powerpc/math/fpu_syscall
linux/tools/testing/selftests/powerpc/math/vmx_syscall
linux/tools/testing/selftests/powerpc/math/fpu_preempt
linux/tools/testing/selftests/powerpc/math/vmx_signal
linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-gpr
linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr
linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-vsx
linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spr
linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-vsx
https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c

Simon Guo (26):
  KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate
    file
  KVM: PPC: Book3S PR: add new parameter (guest MSR) for
    kvmppc_save_tm()/kvmppc_restore_tm()
  KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm()
  KVM: PPC: Book3S PR: add C function wrapper for
    _kvmppc_save/restore_tm()
  KVM: PPC: Book3S PR: In PR KVM suspends Transactional state when
    inject an interrupt.
  KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr.
  KVM: PPC: Book3S PR: add TEXASR related macros
  KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state
    guest
  KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change
    from S0 to N0
  KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
  KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr()
  powerpc: export symbol msr_check_and_set().
  KVM: PPC: Book3S PR: adds new
    kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
  KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs
  KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs
  KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for
    PR KVM
  KVM: PPC: Book3S PR: add math support for PR KVM HTM
  KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on
    active TM SPRs
  KVM: PPC: Book3S PR: always fail transaction in guest privilege state
  KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest
    privilege state
  KVM: PPC: Book3S PR: adds emulation for treclaim.
  KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
  KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
  KVM: PPC: Book3S PR: add guard code to prevent returning to guest with
    PR=0 and Transactional state
  KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.
  KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION
    ioctl

 arch/powerpc/include/asm/asm-prototypes.h   |  10 +
 arch/powerpc/include/asm/kvm_book3s.h       |   9 +
 arch/powerpc/include/asm/kvm_host.h         |   3 +
 arch/powerpc/include/asm/reg.h              |  25 +-
 arch/powerpc/include/asm/tm.h               |   2 -
 arch/powerpc/include/uapi/asm/tm.h          |   2 +-
 arch/powerpc/kernel/process.c               |   1 +
 arch/powerpc/kernel/tm.S                    |  12 +
 arch/powerpc/kvm/Makefile                   |   3 +
 arch/powerpc/kvm/book3s.h                   |   1 +
 arch/powerpc/kvm/book3s_64_mmu.c            |  11 +-
 arch/powerpc/kvm/book3s_emulate.c           | 283 +++++++++++++++++++-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S     | 259 ++----------------
 arch/powerpc/kvm/book3s_pr.c                | 287 ++++++++++++++++++--
 arch/powerpc/kvm/book3s_segment.S           |  13 +
 arch/powerpc/kvm/powerpc.c                  |   3 +-
 arch/powerpc/kvm/tm.S                       | 391 ++++++++++++++++++++++++++++
 arch/powerpc/mm/hash_utils_64.c             |   1 +
 arch/powerpc/platforms/powernv/copy-paste.h |   3 +-
 19 files changed, 1028 insertions(+), 291 deletions(-)
 create mode 100644 arch/powerpc/kvm/tm.S

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 116+ messages in thread

* [PATCH 01/26] KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate file
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This is a simple patch that just moves the kvmppc_save_tm()/
kvmppc_restore_tm() functionality to tm.S. There is no logic change.
Those APIs will be restructured in later patches to improve
readability.

This prepares for reusing those APIs in both HV and PR PPC KVM.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/Makefile               |   3 +
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 239 ----------------------------
 arch/powerpc/kvm/tm.S                   | 267 ++++++++++++++++++++++++++++++++
 3 files changed, 270 insertions(+), 239 deletions(-)
 create mode 100644 arch/powerpc/kvm/tm.S

diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 85ba80d..3886f1b 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -63,6 +63,9 @@ kvm-pr-y := \
 	book3s_64_mmu.o \
 	book3s_32_mmu.o
 
+kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \
+	tm.o
+
 ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) += \
 	book3s_rmhandlers.o
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 2659844..a5c8ecd 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -39,8 +39,6 @@ BEGIN_FTR_SECTION;				\
 	extsw	reg, reg;			\
 END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 
-#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM)
-
 /* Values in HSTATE_NAPPING(r13) */
 #define NAPPING_CEDE	1
 #define NAPPING_NOVCPU	2
@@ -2951,243 +2949,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	mr	r4,r31
 	blr
 
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-/*
- * Save transactional state and TM-related registers.
- * Called with r9 pointing to the vcpu struct.
- * This can modify all checkpointed registers, but
- * restores r1, r2 and r9 (vcpu pointer) before exit.
- */
-kvmppc_save_tm:
-	mflr	r0
-	std	r0, PPC_LR_STKOFF(r1)
-
-	/* Turn on TM. */
-	mfmsr	r8
-	li	r0, 1
-	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
-	mtmsrd	r8
-
-	ld	r5, VCPU_MSR(r9)
-	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
-	beq	1f	/* TM not active in guest. */
-
-	std	r1, HSTATE_HOST_R1(r13)
-	li	r3, TM_CAUSE_KVM_RESCHED
-
-	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
-	li	r5, 0
-	mtmsrd	r5, 1
-
-	/* All GPRs are volatile at this point. */
-	TRECLAIM(R3)
-
-	/* Temporarily store r13 and r9 so we have some regs to play with */
-	SET_SCRATCH0(r13)
-	GET_PACA(r13)
-	std	r9, PACATMSCRATCH(r13)
-	ld	r9, HSTATE_KVM_VCPU(r13)
-
-	/* Get a few more GPRs free. */
-	std	r29, VCPU_GPRS_TM(29)(r9)
-	std	r30, VCPU_GPRS_TM(30)(r9)
-	std	r31, VCPU_GPRS_TM(31)(r9)
-
-	/* Save away PPR and DSCR soon so don't run with user values. */
-	mfspr	r31, SPRN_PPR
-	HMT_MEDIUM
-	mfspr	r30, SPRN_DSCR
-	ld	r29, HSTATE_DSCR(r13)
-	mtspr	SPRN_DSCR, r29
-
-	/* Save all but r9, r13 & r29-r31 */
-	reg = 0
-	.rept	29
-	.if (reg != 9) && (reg != 13)
-	std	reg, VCPU_GPRS_TM(reg)(r9)
-	.endif
-	reg = reg + 1
-	.endr
-	/* ... now save r13 */
-	GET_SCRATCH0(r4)
-	std	r4, VCPU_GPRS_TM(13)(r9)
-	/* ... and save r9 */
-	ld	r4, PACATMSCRATCH(r13)
-	std	r4, VCPU_GPRS_TM(9)(r9)
-
-	/* Reload stack pointer and TOC. */
-	ld	r1, HSTATE_HOST_R1(r13)
-	ld	r2, PACATOC(r13)
-
-	/* Set MSR RI now we have r1 and r13 back. */
-	li	r5, MSR_RI
-	mtmsrd	r5, 1
-
-	/* Save away checkpinted SPRs. */
-	std	r31, VCPU_PPR_TM(r9)
-	std	r30, VCPU_DSCR_TM(r9)
-	mflr	r5
-	mfcr	r6
-	mfctr	r7
-	mfspr	r8, SPRN_AMR
-	mfspr	r10, SPRN_TAR
-	mfxer	r11
-	std	r5, VCPU_LR_TM(r9)
-	stw	r6, VCPU_CR_TM(r9)
-	std	r7, VCPU_CTR_TM(r9)
-	std	r8, VCPU_AMR_TM(r9)
-	std	r10, VCPU_TAR_TM(r9)
-	std	r11, VCPU_XER_TM(r9)
-
-	/* Restore r12 as trap number. */
-	lwz	r12, VCPU_TRAP(r9)
-
-	/* Save FP/VSX. */
-	addi	r3, r9, VCPU_FPRS_TM
-	bl	store_fp_state
-	addi	r3, r9, VCPU_VRS_TM
-	bl	store_vr_state
-	mfspr	r6, SPRN_VRSAVE
-	stw	r6, VCPU_VRSAVE_TM(r9)
-1:
-	/*
-	 * We need to save these SPRs after the treclaim so that the software
-	 * error code is recorded correctly in the TEXASR.  Also the user may
-	 * change these outside of a transaction, so they must always be
-	 * context switched.
-	 */
-	mfspr	r5, SPRN_TFHAR
-	mfspr	r6, SPRN_TFIAR
-	mfspr	r7, SPRN_TEXASR
-	std	r5, VCPU_TFHAR(r9)
-	std	r6, VCPU_TFIAR(r9)
-	std	r7, VCPU_TEXASR(r9)
-
-	ld	r0, PPC_LR_STKOFF(r1)
-	mtlr	r0
-	blr
-
-/*
- * Restore transactional state and TM-related registers.
- * Called with r4 pointing to the vcpu struct.
- * This potentially modifies all checkpointed registers.
- * It restores r1, r2, r4 from the PACA.
- */
-kvmppc_restore_tm:
-	mflr	r0
-	std	r0, PPC_LR_STKOFF(r1)
-
-	/* Turn on TM/FP/VSX/VMX so we can restore them. */
-	mfmsr	r5
-	li	r6, MSR_TM >> 32
-	sldi	r6, r6, 32
-	or	r5, r5, r6
-	ori	r5, r5, MSR_FP
-	oris	r5, r5, (MSR_VEC | MSR_VSX)@h
-	mtmsrd	r5
-
-	/*
-	 * The user may change these outside of a transaction, so they must
-	 * always be context switched.
-	 */
-	ld	r5, VCPU_TFHAR(r4)
-	ld	r6, VCPU_TFIAR(r4)
-	ld	r7, VCPU_TEXASR(r4)
-	mtspr	SPRN_TFHAR, r5
-	mtspr	SPRN_TFIAR, r6
-	mtspr	SPRN_TEXASR, r7
-
-	ld	r5, VCPU_MSR(r4)
-	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
-	beqlr		/* TM not active in guest */
-	std	r1, HSTATE_HOST_R1(r13)
-
-	/* Make sure the failure summary is set, otherwise we'll program check
-	 * when we trechkpt.  It's possible that this might have been not set
-	 * on a kvmppc_set_one_reg() call but we shouldn't let this crash the
-	 * host.
-	 */
-	oris	r7, r7, (TEXASR_FS)@h
-	mtspr	SPRN_TEXASR, r7
-
-	/*
-	 * We need to load up the checkpointed state for the guest.
-	 * We need to do this early as it will blow away any GPRs, VSRs and
-	 * some SPRs.
-	 */
-
-	mr	r31, r4
-	addi	r3, r31, VCPU_FPRS_TM
-	bl	load_fp_state
-	addi	r3, r31, VCPU_VRS_TM
-	bl	load_vr_state
-	mr	r4, r31
-	lwz	r7, VCPU_VRSAVE_TM(r4)
-	mtspr	SPRN_VRSAVE, r7
-
-	ld	r5, VCPU_LR_TM(r4)
-	lwz	r6, VCPU_CR_TM(r4)
-	ld	r7, VCPU_CTR_TM(r4)
-	ld	r8, VCPU_AMR_TM(r4)
-	ld	r9, VCPU_TAR_TM(r4)
-	ld	r10, VCPU_XER_TM(r4)
-	mtlr	r5
-	mtcr	r6
-	mtctr	r7
-	mtspr	SPRN_AMR, r8
-	mtspr	SPRN_TAR, r9
-	mtxer	r10
-
-	/*
-	 * Load up PPR and DSCR values but don't put them in the actual SPRs
-	 * till the last moment to avoid running with userspace PPR and DSCR for
-	 * too long.
-	 */
-	ld	r29, VCPU_DSCR_TM(r4)
-	ld	r30, VCPU_PPR_TM(r4)
-
-	std	r2, PACATMSCRATCH(r13) /* Save TOC */
-
-	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
-	li	r5, 0
-	mtmsrd	r5, 1
-
-	/* Load GPRs r0-r28 */
-	reg = 0
-	.rept	29
-	ld	reg, VCPU_GPRS_TM(reg)(r31)
-	reg = reg + 1
-	.endr
-
-	mtspr	SPRN_DSCR, r29
-	mtspr	SPRN_PPR, r30
-
-	/* Load final GPRs */
-	ld	29, VCPU_GPRS_TM(29)(r31)
-	ld	30, VCPU_GPRS_TM(30)(r31)
-	ld	31, VCPU_GPRS_TM(31)(r31)
-
-	/* TM checkpointed state is now setup.  All GPRs are now volatile. */
-	TRECHKPT
-
-	/* Now let's get back the state we need. */
-	HMT_MEDIUM
-	GET_PACA(r13)
-	ld	r29, HSTATE_DSCR(r13)
-	mtspr	SPRN_DSCR, r29
-	ld	r4, HSTATE_KVM_VCPU(r13)
-	ld	r1, HSTATE_HOST_R1(r13)
-	ld	r2, PACATMSCRATCH(r13)
-
-	/* Set the MSR RI since we have our registers back. */
-	li	r5, MSR_RI
-	mtmsrd	r5, 1
-
-	ld	r0, PPC_LR_STKOFF(r1)
-	mtlr	r0
-	blr
-#endif
-
 /*
  * We come here if we get any exception or interrupt while we are
  * executing host real mode code while in guest MMU context.
diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
new file mode 100644
index 0000000..072d35e
--- /dev/null
+++ b/arch/powerpc/kvm/tm.S
@@ -0,0 +1,267 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Derived from book3s_hv_rmhandlers.S,  which are:
+ *
+ * Copyright 2011 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+ *
+ */
+
+#include <asm/reg.h>
+#include <asm/ppc_asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/export.h>
+#include <asm/tm.h>
+#include <asm/cputable.h>
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM)
+#endif
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+/*
+ * Save transactional state and TM-related registers.
+ * Called with r9 pointing to the vcpu struct.
+ * This can modify all checkpointed registers, but
+ * restores r1, r2 and r9 (vcpu pointer) before exit.
+ */
+_GLOBAL(kvmppc_save_tm)
+	mflr	r0
+	std	r0, PPC_LR_STKOFF(r1)
+
+	/* Turn on TM. */
+	mfmsr	r8
+	li	r0, 1
+	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
+	mtmsrd	r8
+
+	ld	r5, VCPU_MSR(r9)
+	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
+	beq	1f	/* TM not active in guest. */
+
+	std	r1, HSTATE_HOST_R1(r13)
+	li	r3, TM_CAUSE_KVM_RESCHED
+
+	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
+	li	r5, 0
+	mtmsrd	r5, 1
+
+	/* All GPRs are volatile at this point. */
+	TRECLAIM(R3)
+
+	/* Temporarily store r13 and r9 so we have some regs to play with */
+	SET_SCRATCH0(r13)
+	GET_PACA(r13)
+	std	r9, PACATMSCRATCH(r13)
+	ld	r9, HSTATE_KVM_VCPU(r13)
+
+	/* Get a few more GPRs free. */
+	std	r29, VCPU_GPRS_TM(29)(r9)
+	std	r30, VCPU_GPRS_TM(30)(r9)
+	std	r31, VCPU_GPRS_TM(31)(r9)
+
+	/* Save away PPR and DSCR soon so don't run with user values. */
+	mfspr	r31, SPRN_PPR
+	HMT_MEDIUM
+	mfspr	r30, SPRN_DSCR
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ld	r29, HSTATE_DSCR(r13)
+	mtspr	SPRN_DSCR, r29
+#endif
+
+	/* Save all but r9, r13 & r29-r31 */
+	reg = 0
+	.rept	29
+	.if (reg != 9) && (reg != 13)
+	std	reg, VCPU_GPRS_TM(reg)(r9)
+	.endif
+	reg = reg + 1
+	.endr
+	/* ... now save r13 */
+	GET_SCRATCH0(r4)
+	std	r4, VCPU_GPRS_TM(13)(r9)
+	/* ... and save r9 */
+	ld	r4, PACATMSCRATCH(r13)
+	std	r4, VCPU_GPRS_TM(9)(r9)
+
+	/* Reload stack pointer and TOC. */
+	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r2, PACATOC(r13)
+
+	/* Set MSR RI now we have r1 and r13 back. */
+	li	r5, MSR_RI
+	mtmsrd	r5, 1
+
+	/* Save away checkpointed SPRs. */
+	std	r31, VCPU_PPR_TM(r9)
+	std	r30, VCPU_DSCR_TM(r9)
+	mflr	r5
+	mfcr	r6
+	mfctr	r7
+	mfspr	r8, SPRN_AMR
+	mfspr	r10, SPRN_TAR
+	mfxer	r11
+	std	r5, VCPU_LR_TM(r9)
+	stw	r6, VCPU_CR_TM(r9)
+	std	r7, VCPU_CTR_TM(r9)
+	std	r8, VCPU_AMR_TM(r9)
+	std	r10, VCPU_TAR_TM(r9)
+	std	r11, VCPU_XER_TM(r9)
+
+	/* Restore r12 as trap number. */
+	lwz	r12, VCPU_TRAP(r9)
+
+	/* Save FP/VSX. */
+	addi	r3, r9, VCPU_FPRS_TM
+	bl	store_fp_state
+	addi	r3, r9, VCPU_VRS_TM
+	bl	store_vr_state
+	mfspr	r6, SPRN_VRSAVE
+	stw	r6, VCPU_VRSAVE_TM(r9)
+1:
+	/*
+	 * We need to save these SPRs after the treclaim so that the software
+	 * error code is recorded correctly in the TEXASR.  Also the user may
+	 * change these outside of a transaction, so they must always be
+	 * context switched.
+	 */
+	mfspr	r5, SPRN_TFHAR
+	mfspr	r6, SPRN_TFIAR
+	mfspr	r7, SPRN_TEXASR
+	std	r5, VCPU_TFHAR(r9)
+	std	r6, VCPU_TFIAR(r9)
+	std	r7, VCPU_TEXASR(r9)
+
+	ld	r0, PPC_LR_STKOFF(r1)
+	mtlr	r0
+	blr
+
+/*
+ * Restore transactional state and TM-related registers.
+ * Called with r4 pointing to the vcpu struct.
+ * This potentially modifies all checkpointed registers.
+ * It restores r1, r2, r4 from the PACA.
+ */
+_GLOBAL(kvmppc_restore_tm)
+	mflr	r0
+	std	r0, PPC_LR_STKOFF(r1)
+
+	/* Turn on TM/FP/VSX/VMX so we can restore them. */
+	mfmsr	r5
+	li	r6, MSR_TM >> 32
+	sldi	r6, r6, 32
+	or	r5, r5, r6
+	ori	r5, r5, MSR_FP
+	oris	r5, r5, (MSR_VEC | MSR_VSX)@h
+	mtmsrd	r5
+
+	/*
+	 * The user may change these outside of a transaction, so they must
+	 * always be context switched.
+	 */
+	ld	r5, VCPU_TFHAR(r4)
+	ld	r6, VCPU_TFIAR(r4)
+	ld	r7, VCPU_TEXASR(r4)
+	mtspr	SPRN_TFHAR, r5
+	mtspr	SPRN_TFIAR, r6
+	mtspr	SPRN_TEXASR, r7
+
+	ld	r5, VCPU_MSR(r4)
+	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
+	beqlr		/* TM not active in guest */
+	std	r1, HSTATE_HOST_R1(r13)
+
+	/* Make sure the failure summary is set, otherwise we'll program check
+	 * when we trechkpt.  It's possible that this might have been not set
+	 * on a kvmppc_set_one_reg() call but we shouldn't let this crash the
+	 * host.
+	 */
+	oris	r7, r7, (TEXASR_FS)@h
+	mtspr	SPRN_TEXASR, r7
+
+	/*
+	 * We need to load up the checkpointed state for the guest.
+	 * We need to do this early as it will blow away any GPRs, VSRs and
+	 * some SPRs.
+	 */
+
+	mr	r31, r4
+	addi	r3, r31, VCPU_FPRS_TM
+	bl	load_fp_state
+	addi	r3, r31, VCPU_VRS_TM
+	bl	load_vr_state
+	mr	r4, r31
+	lwz	r7, VCPU_VRSAVE_TM(r4)
+	mtspr	SPRN_VRSAVE, r7
+
+	ld	r5, VCPU_LR_TM(r4)
+	lwz	r6, VCPU_CR_TM(r4)
+	ld	r7, VCPU_CTR_TM(r4)
+	ld	r8, VCPU_AMR_TM(r4)
+	ld	r9, VCPU_TAR_TM(r4)
+	ld	r10, VCPU_XER_TM(r4)
+	mtlr	r5
+	mtcr	r6
+	mtctr	r7
+	mtspr	SPRN_AMR, r8
+	mtspr	SPRN_TAR, r9
+	mtxer	r10
+
+	/*
+	 * Load up PPR and DSCR values but don't put them in the actual SPRs
+	 * till the last moment to avoid running with userspace PPR and DSCR for
+	 * too long.
+	 */
+	ld	r29, VCPU_DSCR_TM(r4)
+	ld	r30, VCPU_PPR_TM(r4)
+
+	std	r2, PACATMSCRATCH(r13) /* Save TOC */
+
+	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
+	li	r5, 0
+	mtmsrd	r5, 1
+
+	/* Load GPRs r0-r28 */
+	reg = 0
+	.rept	29
+	ld	reg, VCPU_GPRS_TM(reg)(r31)
+	reg = reg + 1
+	.endr
+
+	mtspr	SPRN_DSCR, r29
+	mtspr	SPRN_PPR, r30
+
+	/* Load final GPRs */
+	ld	29, VCPU_GPRS_TM(29)(r31)
+	ld	30, VCPU_GPRS_TM(30)(r31)
+	ld	31, VCPU_GPRS_TM(31)(r31)
+
+	/* TM checkpointed state is now setup.  All GPRs are now volatile. */
+	TRECHKPT
+
+	/* Now let's get back the state we need. */
+	HMT_MEDIUM
+	GET_PACA(r13)
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ld	r29, HSTATE_DSCR(r13)
+	mtspr	SPRN_DSCR, r29
+#endif
+	ld	r4, HSTATE_KVM_VCPU(r13)
+	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r2, PACATMSCRATCH(r13)
+
+	/* Set the MSR RI since we have our registers back. */
+	li	r5, MSR_RI
+	mtmsrd	r5, 1
+
+	ld	r0, PPC_LR_STKOFF(r1)
+	mtlr	r0
+	blr
+#endif
-- 
1.8.3.1

-
-	/* Save away PPR and DSCR soon so don't run with user values. */
-	mfspr	r31, SPRN_PPR
-	HMT_MEDIUM
-	mfspr	r30, SPRN_DSCR
-	ld	r29, HSTATE_DSCR(r13)
-	mtspr	SPRN_DSCR, r29
-
-	/* Save all but r9, r13 & r29-r31 */
-	reg = 0
-	.rept	29
-	.if (reg != 9) && (reg != 13)
-	std	reg, VCPU_GPRS_TM(reg)(r9)
-	.endif
-	reg = reg + 1
-	.endr
-	/* ... now save r13 */
-	GET_SCRATCH0(r4)
-	std	r4, VCPU_GPRS_TM(13)(r9)
-	/* ... and save r9 */
-	ld	r4, PACATMSCRATCH(r13)
-	std	r4, VCPU_GPRS_TM(9)(r9)
-
-	/* Reload stack pointer and TOC. */
-	ld	r1, HSTATE_HOST_R1(r13)
-	ld	r2, PACATOC(r13)
-
-	/* Set MSR RI now we have r1 and r13 back. */
-	li	r5, MSR_RI
-	mtmsrd	r5, 1
-
-	/* Save away checkpinted SPRs. */
-	std	r31, VCPU_PPR_TM(r9)
-	std	r30, VCPU_DSCR_TM(r9)
-	mflr	r5
-	mfcr	r6
-	mfctr	r7
-	mfspr	r8, SPRN_AMR
-	mfspr	r10, SPRN_TAR
-	mfxer	r11
-	std	r5, VCPU_LR_TM(r9)
-	stw	r6, VCPU_CR_TM(r9)
-	std	r7, VCPU_CTR_TM(r9)
-	std	r8, VCPU_AMR_TM(r9)
-	std	r10, VCPU_TAR_TM(r9)
-	std	r11, VCPU_XER_TM(r9)
-
-	/* Restore r12 as trap number. */
-	lwz	r12, VCPU_TRAP(r9)
-
-	/* Save FP/VSX. */
-	addi	r3, r9, VCPU_FPRS_TM
-	bl	store_fp_state
-	addi	r3, r9, VCPU_VRS_TM
-	bl	store_vr_state
-	mfspr	r6, SPRN_VRSAVE
-	stw	r6, VCPU_VRSAVE_TM(r9)
-1:
-	/*
-	 * We need to save these SPRs after the treclaim so that the software
-	 * error code is recorded correctly in the TEXASR.  Also the user may
-	 * change these outside of a transaction, so they must always be
-	 * context switched.
-	 */
-	mfspr	r5, SPRN_TFHAR
-	mfspr	r6, SPRN_TFIAR
-	mfspr	r7, SPRN_TEXASR
-	std	r5, VCPU_TFHAR(r9)
-	std	r6, VCPU_TFIAR(r9)
-	std	r7, VCPU_TEXASR(r9)
-
-	ld	r0, PPC_LR_STKOFF(r1)
-	mtlr	r0
-	blr
-
-/*
- * Restore transactional state and TM-related registers.
- * Called with r4 pointing to the vcpu struct.
- * This potentially modifies all checkpointed registers.
- * It restores r1, r2, r4 from the PACA.
- */
-kvmppc_restore_tm:
-	mflr	r0
-	std	r0, PPC_LR_STKOFF(r1)
-
-	/* Turn on TM/FP/VSX/VMX so we can restore them. */
-	mfmsr	r5
-	li	r6, MSR_TM >> 32
-	sldi	r6, r6, 32
-	or	r5, r5, r6
-	ori	r5, r5, MSR_FP
-	oris	r5, r5, (MSR_VEC | MSR_VSX)@h
-	mtmsrd	r5
-
-	/*
-	 * The user may change these outside of a transaction, so they must
-	 * always be context switched.
-	 */
-	ld	r5, VCPU_TFHAR(r4)
-	ld	r6, VCPU_TFIAR(r4)
-	ld	r7, VCPU_TEXASR(r4)
-	mtspr	SPRN_TFHAR, r5
-	mtspr	SPRN_TFIAR, r6
-	mtspr	SPRN_TEXASR, r7
-
-	ld	r5, VCPU_MSR(r4)
-	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
-	beqlr		/* TM not active in guest */
-	std	r1, HSTATE_HOST_R1(r13)
-
-	/* Make sure the failure summary is set, otherwise we'll program check
-	 * when we trechkpt.  It's possible that this might have been not set
-	 * on a kvmppc_set_one_reg() call but we shouldn't let this crash the
-	 * host.
-	 */
-	oris	r7, r7, (TEXASR_FS)@h
-	mtspr	SPRN_TEXASR, r7
-
-	/*
-	 * We need to load up the checkpointed state for the guest.
-	 * We need to do this early as it will blow away any GPRs, VSRs and
-	 * some SPRs.
-	 */
-
-	mr	r31, r4
-	addi	r3, r31, VCPU_FPRS_TM
-	bl	load_fp_state
-	addi	r3, r31, VCPU_VRS_TM
-	bl	load_vr_state
-	mr	r4, r31
-	lwz	r7, VCPU_VRSAVE_TM(r4)
-	mtspr	SPRN_VRSAVE, r7
-
-	ld	r5, VCPU_LR_TM(r4)
-	lwz	r6, VCPU_CR_TM(r4)
-	ld	r7, VCPU_CTR_TM(r4)
-	ld	r8, VCPU_AMR_TM(r4)
-	ld	r9, VCPU_TAR_TM(r4)
-	ld	r10, VCPU_XER_TM(r4)
-	mtlr	r5
-	mtcr	r6
-	mtctr	r7
-	mtspr	SPRN_AMR, r8
-	mtspr	SPRN_TAR, r9
-	mtxer	r10
-
-	/*
-	 * Load up PPR and DSCR values but don't put them in the actual SPRs
-	 * till the last moment to avoid running with userspace PPR and DSCR for
-	 * too long.
-	 */
-	ld	r29, VCPU_DSCR_TM(r4)
-	ld	r30, VCPU_PPR_TM(r4)
-
-	std	r2, PACATMSCRATCH(r13) /* Save TOC */
-
-	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
-	li	r5, 0
-	mtmsrd	r5, 1
-
-	/* Load GPRs r0-r28 */
-	reg = 0
-	.rept	29
-	ld	reg, VCPU_GPRS_TM(reg)(r31)
-	reg = reg + 1
-	.endr
-
-	mtspr	SPRN_DSCR, r29
-	mtspr	SPRN_PPR, r30
-
-	/* Load final GPRs */
-	ld	29, VCPU_GPRS_TM(29)(r31)
-	ld	30, VCPU_GPRS_TM(30)(r31)
-	ld	31, VCPU_GPRS_TM(31)(r31)
-
-	/* TM checkpointed state is now setup.  All GPRs are now volatile. */
-	TRECHKPT
-
-	/* Now let's get back the state we need. */
-	HMT_MEDIUM
-	GET_PACA(r13)
-	ld	r29, HSTATE_DSCR(r13)
-	mtspr	SPRN_DSCR, r29
-	ld	r4, HSTATE_KVM_VCPU(r13)
-	ld	r1, HSTATE_HOST_R1(r13)
-	ld	r2, PACATMSCRATCH(r13)
-
-	/* Set the MSR RI since we have our registers back. */
-	li	r5, MSR_RI
-	mtmsrd	r5, 1
-
-	ld	r0, PPC_LR_STKOFF(r1)
-	mtlr	r0
-	blr
-#endif
-
 /*
  * We come here if we get any exception or interrupt while we are
  * executing host real mode code while in guest MMU context.
diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
new file mode 100644
index 0000000..072d35e
--- /dev/null
+++ b/arch/powerpc/kvm/tm.S
@@ -0,0 +1,267 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Derived from book3s_hv_rmhandlers.S,  which are:
+ *
+ * Copyright 2011 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+ *
+ */
+
+#include <asm/reg.h>
+#include <asm/ppc_asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/export.h>
+#include <asm/tm.h>
+#include <asm/cputable.h>
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM)
+#endif
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+/*
+ * Save transactional state and TM-related registers.
+ * Called with r9 pointing to the vcpu struct.
+ * This can modify all checkpointed registers, but
+ * restores r1, r2 and r9 (vcpu pointer) before exit.
+ */
+_GLOBAL(kvmppc_save_tm)
+	mflr	r0
+	std	r0, PPC_LR_STKOFF(r1)
+
+	/* Turn on TM. */
+	mfmsr	r8
+	li	r0, 1
+	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
+	mtmsrd	r8
+
+	ld	r5, VCPU_MSR(r9)
+	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
+	beq	1f	/* TM not active in guest. */
+
+	std	r1, HSTATE_HOST_R1(r13)
+	li	r3, TM_CAUSE_KVM_RESCHED
+
+	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
+	li	r5, 0
+	mtmsrd	r5, 1
+
+	/* All GPRs are volatile at this point. */
+	TRECLAIM(R3)
+
+	/* Temporarily store r13 and r9 so we have some regs to play with */
+	SET_SCRATCH0(r13)
+	GET_PACA(r13)
+	std	r9, PACATMSCRATCH(r13)
+	ld	r9, HSTATE_KVM_VCPU(r13)
+
+	/* Get a few more GPRs free. */
+	std	r29, VCPU_GPRS_TM(29)(r9)
+	std	r30, VCPU_GPRS_TM(30)(r9)
+	std	r31, VCPU_GPRS_TM(31)(r9)
+
+	/* Save away PPR and DSCR soon so don't run with user values. */
+	mfspr	r31, SPRN_PPR
+	HMT_MEDIUM
+	mfspr	r30, SPRN_DSCR
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ld	r29, HSTATE_DSCR(r13)
+	mtspr	SPRN_DSCR, r29
+#endif
+
+	/* Save all but r9, r13 & r29-r31 */
+	reg = 0
+	.rept	29
+	.if (reg != 9) && (reg != 13)
+	std	reg, VCPU_GPRS_TM(reg)(r9)
+	.endif
+	reg = reg + 1
+	.endr
+	/* ... now save r13 */
+	GET_SCRATCH0(r4)
+	std	r4, VCPU_GPRS_TM(13)(r9)
+	/* ... and save r9 */
+	ld	r4, PACATMSCRATCH(r13)
+	std	r4, VCPU_GPRS_TM(9)(r9)
+
+	/* Reload stack pointer and TOC. */
+	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r2, PACATOC(r13)
+
+	/* Set MSR RI now we have r1 and r13 back. */
+	li	r5, MSR_RI
+	mtmsrd	r5, 1
+
+	/* Save away checkpinted SPRs. */
+	std	r31, VCPU_PPR_TM(r9)
+	std	r30, VCPU_DSCR_TM(r9)
+	mflr	r5
+	mfcr	r6
+	mfctr	r7
+	mfspr	r8, SPRN_AMR
+	mfspr	r10, SPRN_TAR
+	mfxer	r11
+	std	r5, VCPU_LR_TM(r9)
+	stw	r6, VCPU_CR_TM(r9)
+	std	r7, VCPU_CTR_TM(r9)
+	std	r8, VCPU_AMR_TM(r9)
+	std	r10, VCPU_TAR_TM(r9)
+	std	r11, VCPU_XER_TM(r9)
+
+	/* Restore r12 as trap number. */
+	lwz	r12, VCPU_TRAP(r9)
+
+	/* Save FP/VSX. */
+	addi	r3, r9, VCPU_FPRS_TM
+	bl	store_fp_state
+	addi	r3, r9, VCPU_VRS_TM
+	bl	store_vr_state
+	mfspr	r6, SPRN_VRSAVE
+	stw	r6, VCPU_VRSAVE_TM(r9)
+1:
+	/*
+	 * We need to save these SPRs after the treclaim so that the software
+	 * error code is recorded correctly in the TEXASR.  Also the user may
+	 * change these outside of a transaction, so they must always be
+	 * context switched.
+	 */
+	mfspr	r5, SPRN_TFHAR
+	mfspr	r6, SPRN_TFIAR
+	mfspr	r7, SPRN_TEXASR
+	std	r5, VCPU_TFHAR(r9)
+	std	r6, VCPU_TFIAR(r9)
+	std	r7, VCPU_TEXASR(r9)
+
+	ld	r0, PPC_LR_STKOFF(r1)
+	mtlr	r0
+	blr
+
+/*
+ * Restore transactional state and TM-related registers.
+ * Called with r4 pointing to the vcpu struct.
+ * This potentially modifies all checkpointed registers.
+ * It restores r1, r2, r4 from the PACA.
+ */
+_GLOBAL(kvmppc_restore_tm)
+	mflr	r0
+	std	r0, PPC_LR_STKOFF(r1)
+
+	/* Turn on TM/FP/VSX/VMX so we can restore them. */
+	mfmsr	r5
+	li	r6, MSR_TM >> 32
+	sldi	r6, r6, 32
+	or	r5, r5, r6
+	ori	r5, r5, MSR_FP
+	oris	r5, r5, (MSR_VEC | MSR_VSX)@h
+	mtmsrd	r5
+
+	/*
+	 * The user may change these outside of a transaction, so they must
+	 * always be context switched.
+	 */
+	ld	r5, VCPU_TFHAR(r4)
+	ld	r6, VCPU_TFIAR(r4)
+	ld	r7, VCPU_TEXASR(r4)
+	mtspr	SPRN_TFHAR, r5
+	mtspr	SPRN_TFIAR, r6
+	mtspr	SPRN_TEXASR, r7
+
+	ld	r5, VCPU_MSR(r4)
+	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
+	beqlr		/* TM not active in guest */
+	std	r1, HSTATE_HOST_R1(r13)
+
+	/* Make sure the failure summary is set, otherwise we'll program check
+	 * when we trechkpt.  It's possible that this might have been not set
+	 * on a kvmppc_set_one_reg() call but we shouldn't let this crash the
+	 * host.
+	 */
+	oris	r7, r7, (TEXASR_FS)@h
+	mtspr	SPRN_TEXASR, r7
+
+	/*
+	 * We need to load up the checkpointed state for the guest.
+	 * We need to do this early as it will blow away any GPRs, VSRs and
+	 * some SPRs.
+	 */
+
+	mr	r31, r4
+	addi	r3, r31, VCPU_FPRS_TM
+	bl	load_fp_state
+	addi	r3, r31, VCPU_VRS_TM
+	bl	load_vr_state
+	mr	r4, r31
+	lwz	r7, VCPU_VRSAVE_TM(r4)
+	mtspr	SPRN_VRSAVE, r7
+
+	ld	r5, VCPU_LR_TM(r4)
+	lwz	r6, VCPU_CR_TM(r4)
+	ld	r7, VCPU_CTR_TM(r4)
+	ld	r8, VCPU_AMR_TM(r4)
+	ld	r9, VCPU_TAR_TM(r4)
+	ld	r10, VCPU_XER_TM(r4)
+	mtlr	r5
+	mtcr	r6
+	mtctr	r7
+	mtspr	SPRN_AMR, r8
+	mtspr	SPRN_TAR, r9
+	mtxer	r10
+
+	/*
+	 * Load up PPR and DSCR values but don't put them in the actual SPRs
+	 * till the last moment to avoid running with userspace PPR and DSCR for
+	 * too long.
+	 */
+	ld	r29, VCPU_DSCR_TM(r4)
+	ld	r30, VCPU_PPR_TM(r4)
+
+	std	r2, PACATMSCRATCH(r13) /* Save TOC */
+
+	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
+	li	r5, 0
+	mtmsrd	r5, 1
+
+	/* Load GPRs r0-r28 */
+	reg = 0
+	.rept	29
+	ld	reg, VCPU_GPRS_TM(reg)(r31)
+	reg = reg + 1
+	.endr
+
+	mtspr	SPRN_DSCR, r29
+	mtspr	SPRN_PPR, r30
+
+	/* Load final GPRs */
+	ld	29, VCPU_GPRS_TM(29)(r31)
+	ld	30, VCPU_GPRS_TM(30)(r31)
+	ld	31, VCPU_GPRS_TM(31)(r31)
+
+	/* TM checkpointed state is now setup.  All GPRs are now volatile. */
+	TRECHKPT
+
+	/* Now let's get back the state we need. */
+	HMT_MEDIUM
+	GET_PACA(r13)
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+	ld	r29, HSTATE_DSCR(r13)
+	mtspr	SPRN_DSCR, r29
+#endif
+	ld	r4, HSTATE_KVM_VCPU(r13)
+	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r2, PACATMSCRATCH(r13)
+
+	/* Set the MSR RI since we have our registers back. */
+	li	r5, MSR_RI
+	mtmsrd	r5, 1
+
+	ld	r0, PPC_LR_STKOFF(r1)
+	mtlr	r0
+	blr
+#endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm()
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

HV KVM and PR KVM need different MSR sources to indicate whether
a treclaim. or trecheckpoint. is necessary.

This patch adds a new parameter (the guest MSR) to the kvmppc_save_tm()/
kvmppc_restore_tm() APIs:
- For HV KVM, it is VCPU_MSR
- For PR KVM, it is the current host MSR or VCPU_SHADOW_SRR1

This enhancement enables these two APIs to be reused by PR KVM later,
while keeping the HV KVM logic unchanged.

This patch also reworks kvmppc_save_tm()/kvmppc_restore_tm() to
have a clean ABI: r3 for the vcpu and r4 for the guest MSR.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 12 ++++++-
 arch/powerpc/kvm/tm.S                   | 61 ++++++++++++++++++---------------
 2 files changed, 45 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index a5c8ecd..613fd27 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -808,7 +808,10 @@ BEGIN_FTR_SECTION
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR
 	 */
+	mr      r3, r4
+	ld      r4, VCPU_MSR(r3)
 	bl	kvmppc_restore_tm
+	ld	r4, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 
@@ -1680,7 +1683,10 @@ BEGIN_FTR_SECTION
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR
 	 */
+	mr      r3, r9
+	ld      r4, VCPU_MSR(r3)
 	bl	kvmppc_save_tm
+	ld	r9, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 
@@ -2543,7 +2549,8 @@ BEGIN_FTR_SECTION
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR
 	 */
-	ld	r9, HSTATE_KVM_VCPU(r13)
+	ld      r3, HSTATE_KVM_VCPU(r13)
+	ld      r4, VCPU_MSR(r3)
 	bl	kvmppc_save_tm
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
@@ -2656,7 +2663,10 @@ BEGIN_FTR_SECTION
 	/*
 	 * NOTE THAT THIS TRASHES ALL NON-VOLATILE REGISTERS INCLUDING CR
 	 */
+	mr      r3, r4
+	ld      r4, VCPU_MSR(r3)
 	bl	kvmppc_restore_tm
+	ld	r4, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 
diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
index 072d35e..e779b15 100644
--- a/arch/powerpc/kvm/tm.S
+++ b/arch/powerpc/kvm/tm.S
@@ -28,9 +28,12 @@
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 /*
  * Save transactional state and TM-related registers.
- * Called with r9 pointing to the vcpu struct.
+ * Called with:
+ * - r3 pointing to the vcpu struct
+ * - r4 points to the MSR with current TS bits:
+ * 	(For HV KVM, it is VCPU_MSR ; For PR KVM, it is host MSR).
  * This can modify all checkpointed registers, but
- * restores r1, r2 and r9 (vcpu pointer) before exit.
+ * restores r1, r2 before exit.
  */
 _GLOBAL(kvmppc_save_tm)
 	mflr	r0
@@ -42,11 +45,11 @@ _GLOBAL(kvmppc_save_tm)
 	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
 	mtmsrd	r8
 
-	ld	r5, VCPU_MSR(r9)
-	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
+	rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
 	beq	1f	/* TM not active in guest. */
 
-	std	r1, HSTATE_HOST_R1(r13)
+	std	r1, HSTATE_SCRATCH2(r13)
+	std	r3, HSTATE_SCRATCH1(r13)
 	li	r3, TM_CAUSE_KVM_RESCHED
 
 	/* Clear the MSR RI since r1, r13 are all going to be foobar. */
@@ -60,7 +63,7 @@ _GLOBAL(kvmppc_save_tm)
 	SET_SCRATCH0(r13)
 	GET_PACA(r13)
 	std	r9, PACATMSCRATCH(r13)
-	ld	r9, HSTATE_KVM_VCPU(r13)
+	ld	r9, HSTATE_SCRATCH1(r13)
 
 	/* Get a few more GPRs free. */
 	std	r29, VCPU_GPRS_TM(29)(r9)
@@ -92,7 +95,7 @@ _GLOBAL(kvmppc_save_tm)
 	std	r4, VCPU_GPRS_TM(9)(r9)
 
 	/* Reload stack pointer and TOC. */
-	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r1, HSTATE_SCRATCH2(r13)
 	ld	r2, PACATOC(r13)
 
 	/* Set MSR RI now we have r1 and r13 back. */
@@ -145,9 +148,13 @@ _GLOBAL(kvmppc_save_tm)
 
 /*
  * Restore transactional state and TM-related registers.
- * Called with r4 pointing to the vcpu struct.
+ * Called with:
+ *  - r3 pointing to the vcpu struct.
+ *  - r4 is the guest MSR with desired TS bits:
+ * 	For HV KVM, it is VCPU_MSR
+ * 	For PR KVM, it is provided by caller
  * This potentially modifies all checkpointed registers.
- * It restores r1, r2, r4 from the PACA.
+ * It restores r1, r2 from the PACA.
  */
 _GLOBAL(kvmppc_restore_tm)
 	mflr	r0
@@ -166,17 +173,17 @@ _GLOBAL(kvmppc_restore_tm)
 	 * The user may change these outside of a transaction, so they must
 	 * always be context switched.
 	 */
-	ld	r5, VCPU_TFHAR(r4)
-	ld	r6, VCPU_TFIAR(r4)
-	ld	r7, VCPU_TEXASR(r4)
+	ld	r5, VCPU_TFHAR(r3)
+	ld	r6, VCPU_TFIAR(r3)
+	ld	r7, VCPU_TEXASR(r3)
 	mtspr	SPRN_TFHAR, r5
 	mtspr	SPRN_TFIAR, r6
 	mtspr	SPRN_TEXASR, r7
 
-	ld	r5, VCPU_MSR(r4)
+	mr	r5, r4
 	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
 	beqlr		/* TM not active in guest */
-	std	r1, HSTATE_HOST_R1(r13)
+	std	r1, HSTATE_SCRATCH2(r13)
 
 	/* Make sure the failure summary is set, otherwise we'll program check
 	 * when we trechkpt.  It's possible that this might have been not set
@@ -192,21 +199,21 @@ _GLOBAL(kvmppc_restore_tm)
 	 * some SPRs.
 	 */
 
-	mr	r31, r4
+	mr	r31, r3
 	addi	r3, r31, VCPU_FPRS_TM
 	bl	load_fp_state
 	addi	r3, r31, VCPU_VRS_TM
 	bl	load_vr_state
-	mr	r4, r31
-	lwz	r7, VCPU_VRSAVE_TM(r4)
+	mr	r3, r31
+	lwz	r7, VCPU_VRSAVE_TM(r3)
 	mtspr	SPRN_VRSAVE, r7
 
-	ld	r5, VCPU_LR_TM(r4)
-	lwz	r6, VCPU_CR_TM(r4)
-	ld	r7, VCPU_CTR_TM(r4)
-	ld	r8, VCPU_AMR_TM(r4)
-	ld	r9, VCPU_TAR_TM(r4)
-	ld	r10, VCPU_XER_TM(r4)
+	ld	r5, VCPU_LR_TM(r3)
+	lwz	r6, VCPU_CR_TM(r3)
+	ld	r7, VCPU_CTR_TM(r3)
+	ld	r8, VCPU_AMR_TM(r3)
+	ld	r9, VCPU_TAR_TM(r3)
+	ld	r10, VCPU_XER_TM(r3)
 	mtlr	r5
 	mtcr	r6
 	mtctr	r7
@@ -219,8 +226,8 @@ _GLOBAL(kvmppc_restore_tm)
 	 * till the last moment to avoid running with userspace PPR and DSCR for
 	 * too long.
 	 */
-	ld	r29, VCPU_DSCR_TM(r4)
-	ld	r30, VCPU_PPR_TM(r4)
+	ld	r29, VCPU_DSCR_TM(r3)
+	ld	r30, VCPU_PPR_TM(r3)
 
 	std	r2, PACATMSCRATCH(r13) /* Save TOC */
 
@@ -253,8 +260,7 @@ _GLOBAL(kvmppc_restore_tm)
 	ld	r29, HSTATE_DSCR(r13)
 	mtspr	SPRN_DSCR, r29
 #endif
-	ld	r4, HSTATE_KVM_VCPU(r13)
-	ld	r1, HSTATE_HOST_R1(r13)
+	ld	r1, HSTATE_SCRATCH2(r13)
 	ld	r2, PACATMSCRATCH(r13)
 
 	/* Set the MSR RI since we have our registers back. */
@@ -264,4 +270,5 @@ _GLOBAL(kvmppc_restore_tm)
 	ld	r0, PPC_LR_STKOFF(r1)
 	mtlr	r0
 	blr
+
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 03/26] KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm()
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

kvmppc_save_tm() invokes store_fp_state()/store_vr_state(), so it is
mandatory to turn on the FP/VSX/VMX MSR bits for its execution, just
as kvmppc_restore_tm() does.

Previously HV KVM turned these bits on outside of kvmppc_save_tm().
Now we include this bit change in kvmppc_save_tm() so that the logic
is cleaner, and PR KVM can reuse it later.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/tm.S | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
index e779b15..2d6fe5b 100644
--- a/arch/powerpc/kvm/tm.S
+++ b/arch/powerpc/kvm/tm.S
@@ -43,6 +43,8 @@ _GLOBAL(kvmppc_save_tm)
 	mfmsr	r8
 	li	r0, 1
 	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
+	ori     r8, r8, MSR_FP
+	oris    r8, r8, (MSR_VEC | MSR_VSX)@h
 	mtmsrd	r8
 
 	rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 04/26] KVM: PPC: Book3S PR: add C function wrapper for _kvmppc_save/restore_tm()
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently the _kvmppc_save_tm()/_kvmppc_restore_tm() APIs can only be
invoked from assembly code. This patch adds C function wrappers for
them so that they can be safely called from C code.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/asm-prototypes.h |   7 ++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S   |   8 +--
 arch/powerpc/kvm/tm.S                     | 107 +++++++++++++++++++++++++++++-
 3 files changed, 116 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 7330150..9c3b290 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -126,4 +126,11 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 void _mcount(void);
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+/* Transaction memory related */
+struct kvm_vcpu;
+void _kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
+void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
+#endif
+
 #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 613fd27..4c8d5b1 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -810,7 +810,7 @@ BEGIN_FTR_SECTION
 	 */
 	mr      r3, r4
 	ld      r4, VCPU_MSR(r3)
-	bl	kvmppc_restore_tm
+	bl	__kvmppc_restore_tm
 	ld	r4, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
@@ -1685,7 +1685,7 @@ BEGIN_FTR_SECTION
 	 */
 	mr      r3, r9
 	ld      r4, VCPU_MSR(r3)
-	bl	kvmppc_save_tm
+	bl	__kvmppc_save_tm
 	ld	r9, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
@@ -2551,7 +2551,7 @@ BEGIN_FTR_SECTION
 	 */
 	ld      r3, HSTATE_KVM_VCPU(r13)
 	ld      r4, VCPU_MSR(r3)
-	bl	kvmppc_save_tm
+	bl	__kvmppc_save_tm
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 
@@ -2665,7 +2665,7 @@ BEGIN_FTR_SECTION
 	 */
 	mr      r3, r4
 	ld      r4, VCPU_MSR(r3)
-	bl	kvmppc_restore_tm
+	bl	__kvmppc_restore_tm
 	ld	r4, HSTATE_KVM_VCPU(r13)
 END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
index 2d6fe5b..5752bae 100644
--- a/arch/powerpc/kvm/tm.S
+++ b/arch/powerpc/kvm/tm.S
@@ -35,7 +35,7 @@
  * This can modify all checkpointed registers, but
  * restores r1, r2 before exit.
  */
-_GLOBAL(kvmppc_save_tm)
+_GLOBAL(__kvmppc_save_tm)
 	mflr	r0
 	std	r0, PPC_LR_STKOFF(r1)
 
@@ -149,6 +149,58 @@ _GLOBAL(kvmppc_save_tm)
 	blr
 
 /*
+ * _kvmppc_save_tm_pr() is a wrapper around __kvmppc_save_tm(), so that it
+ * can be invoked from C code, by PR KVM only.
+ */
+_GLOBAL(_kvmppc_save_tm_pr)
+	mflr	r5
+	std	r5, PPC_LR_STKOFF(r1)
+	stdu    r1, -SWITCH_FRAME_SIZE(r1)
+	SAVE_NVGPRS(r1)
+
+	/* save MSR since TM/math bits might be impacted
+	 * by __kvmppc_save_tm().
+	 */
+	mfmsr	r5
+	SAVE_GPR(5, r1)
+
+	/* also save DSCR/CR so that it can be recovered later */
+	mfspr   r6, SPRN_DSCR
+	SAVE_GPR(6, r1)
+
+	mfcr    r7
+	stw     r7, _CCR(r1)
+
+	/* allocate stack frame for __kvmppc_save_tm since
+	 * it will save LR into its stackframe and we don't
+	 * want to corrupt _kvmppc_save_tm_pr's.
+	 */
+	stdu    r1, -PPC_MIN_STKFRM(r1)
+	bl	__kvmppc_save_tm
+	addi    r1, r1, PPC_MIN_STKFRM
+
+	ld      r7, _CCR(r1)
+	mtcr	r7
+
+	REST_GPR(6, r1)
+	mtspr   SPRN_DSCR, r6
+
+	/* need to preserve the current MSR's MSR_TS bits */
+	REST_GPR(5, r1)
+	mfmsr   r6
+	rldicl  r6, r6, 64 - MSR_TS_S_LG, 62
+	rldimi  r5, r6, MSR_TS_S_LG, 63 - MSR_TS_T_LG
+	mtmsrd  r5
+
+	REST_NVGPRS(r1)
+	addi    r1, r1, SWITCH_FRAME_SIZE
+	ld	r5, PPC_LR_STKOFF(r1)
+	mtlr	r5
+	blr
+
+EXPORT_SYMBOL_GPL(_kvmppc_save_tm_pr);
+
+/*
  * Restore transactional state and TM-related registers.
  * Called with:
  *  - r3 pointing to the vcpu struct.
@@ -158,7 +210,7 @@ _GLOBAL(kvmppc_save_tm)
  * This potentially modifies all checkpointed registers.
  * It restores r1, r2 from the PACA.
  */
-_GLOBAL(kvmppc_restore_tm)
+_GLOBAL(__kvmppc_restore_tm)
 	mflr	r0
 	std	r0, PPC_LR_STKOFF(r1)
 
@@ -186,6 +238,7 @@ _GLOBAL(kvmppc_restore_tm)
 	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
 	beqlr		/* TM not active in guest */
 	std	r1, HSTATE_SCRATCH2(r13)
+	std	r3, HSTATE_SCRATCH1(r13)
 
 	/* Make sure the failure summary is set, otherwise we'll program check
 	 * when we trechkpt.  It's possible that this might have been not set
@@ -262,6 +315,7 @@ _GLOBAL(kvmppc_restore_tm)
 	ld	r29, HSTATE_DSCR(r13)
 	mtspr	SPRN_DSCR, r29
 #endif
+	ld	r3, HSTATE_SCRATCH1(r13)
 	ld	r1, HSTATE_SCRATCH2(r13)
 	ld	r2, PACATMSCRATCH(r13)
 
@@ -273,4 +327,53 @@ _GLOBAL(kvmppc_restore_tm)
 	mtlr	r0
 	blr
 
+/*
+ * _kvmppc_restore_tm_pr() is a wrapper around __kvmppc_restore_tm(), so that
+ * it can be invoked from C code, by PR KVM only.
+ */
+_GLOBAL(_kvmppc_restore_tm_pr)
+	mflr	r5
+	std	r5, PPC_LR_STKOFF(r1)
+	stdu    r1, -SWITCH_FRAME_SIZE(r1)
+	SAVE_NVGPRS(r1)
+
+	/* save MSR to avoid TM/math bits change */
+	mfmsr	r5
+	SAVE_GPR(5, r1)
+
+	/* also save DSCR/CR so that it can be recovered later */
+	mfspr   r6, SPRN_DSCR
+	SAVE_GPR(6, r1)
+
+	mfcr    r7
+	stw     r7, _CCR(r1)
+
+	/* allocate stack frame for __kvmppc_restore_tm since
+	 * it will save LR into its own stackframe.
+	 */
+	stdu    r1, -PPC_MIN_STKFRM(r1)
+
+	bl	__kvmppc_restore_tm
+	addi    r1, r1, PPC_MIN_STKFRM
+
+	ld      r7, _CCR(r1)
+	mtcr	r7
+
+	REST_GPR(6, r1)
+	mtspr   SPRN_DSCR, r6
+
+	/* need to preserve the current MSR's MSR_TS bits */
+	REST_GPR(5, r1)
+	mfmsr   r6
+	rldicl  r6, r6, 64 - MSR_TS_S_LG, 62
+	rldimi  r5, r6, MSR_TS_S_LG, 63 - MSR_TS_T_LG
+	mtmsrd  r5
+
+	REST_NVGPRS(r1)
+	addi    r1, r1, SWITCH_FRAME_SIZE
+	ld	r5, PPC_LR_STKOFF(r1)
+	mtlr	r5
+	blr
+
+EXPORT_SYMBOL_GPL(_kvmppc_restore_tm_pr);
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
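The rldicl/rldimi pair in the wrappers above keeps the MSR_TS field of the live MSR while the rest of the MSR comes from the value saved on entry. The merge can be sketched in userspace C as follows (an illustrative sketch, not kernel code; the MSR_TS bit positions are assumed from arch/powerpc/include/asm/reg.h and the helper name is made up here):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TS_S_LG 33                    /* TS: suspended */
#define MSR_TS_T_LG 34                    /* TS: transactional */
#define MSR_TS_S    (1ULL << MSR_TS_S_LG)
#define MSR_TS_T    (1ULL << MSR_TS_T_LG)
#define MSR_TS_MASK (MSR_TS_T | MSR_TS_S)

/* Return 'saved' with its TS field replaced by the TS field of 'cur',
 * mirroring what the rldicl/rldimi pair does on r5/r6 before mtmsrd. */
static uint64_t merge_ts_bits(uint64_t saved, uint64_t cur)
{
	return (saved & ~MSR_TS_MASK) | (cur & MSR_TS_MASK);
}
```

rldicl shifts the live TS field down into the low two bits of r6, and rldimi inserts those two bits back into the saved MSR at the TS position; the C expression above is the same extract-and-insert in mask form.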


* [PATCH 05/26] KVM: PPC: Book3S PR: suspend transactional state when injecting an interrupt
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch simulates interrupt behavior per the Power ISA when injecting
an interrupt in PR KVM:
- When an interrupt happens, the transactional state should be suspended.

kvmppc_mmu_book3s_64_reset_msr() will be invoked when injecting an
interrupt. This patch performs this ISA logic in
kvmppc_mmu_book3s_64_reset_msr().

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_64_mmu.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 29ebe2f..6048dbd 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -38,7 +38,16 @@
 
 static void kvmppc_mmu_book3s_64_reset_msr(struct kvm_vcpu *vcpu)
 {
-	kvmppc_set_msr(vcpu, vcpu->arch.intr_msr);
+	unsigned long msr = vcpu->arch.intr_msr;
+	unsigned long cur_msr = kvmppc_get_msr(vcpu);
+
+	/* If transactional, change to suspend mode on IRQ delivery */
+	if (MSR_TM_TRANSACTIONAL(cur_msr))
+		msr |= MSR_TS_S;
+	else
+		msr |= cur_msr & MSR_TS_MASK;
+
+	kvmppc_set_msr(vcpu, msr);
 }
 
 static struct kvmppc_slb *kvmppc_mmu_book3s_64_find_slbe(
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
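The TS handling above can be sketched in plain C (an illustrative userspace sketch, assuming the standard reg.h TS encoding — 0b10 transactional, 0b01 suspended; the helper name is made up here):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TS_S_LG 33
#define MSR_TS_T_LG 34
#define MSR_TS_S    (1ULL << MSR_TS_S_LG)
#define MSR_TS_T    (1ULL << MSR_TS_T_LG)
#define MSR_TS_MASK (MSR_TS_T | MSR_TS_S)
#define MSR_TM_TRANSACTIONAL(msr) (((msr) & MSR_TS_MASK) == MSR_TS_T)

/* Compute the MSR installed on interrupt delivery: a transactional
 * guest is moved to suspended state; otherwise TS is carried over. */
static uint64_t reset_msr_ts(uint64_t intr_msr, uint64_t cur_msr)
{
	uint64_t msr = intr_msr;

	if (MSR_TM_TRANSACTIONAL(cur_msr))
		msr |= MSR_TS_S;
	else
		msr |= cur_msr & MSR_TS_MASK;
	return msr;
}
```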


* [PATCH 06/26] KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

PowerPC TM functionality needs MSR TM/TS bit support at the hardware
level. Guest TM functionality cannot be emulated with "fake" MSR (the
msr field in the magic page) TS bits.

This patch syncs the TM/TS bits in shadow_msr with the MSR value in the
magic page, so that the MSR TS value the guest sees is consistent with
the actual MSR bits in effect while the guest runs.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_pr.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d0dc862..4e9acdd 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -322,7 +322,12 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 	ulong smsr = guest_msr;
 
 	/* Guest MSR values */
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE |
+		MSR_TM | MSR_TS_MASK;
+#else
 	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE;
+#endif
 	/* Process MSR values */
 	smsr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR | MSR_EE;
 	/* External providers the guest reserved */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
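Reduced to the TM-related part, the masking above behaves as sketched below (an illustrative userspace sketch; bit positions assumed from arch/powerpc/include/asm/reg.h, helper name made up here — the real function also passes FE0/FE1/SF/SE/BE/LE and forces the process-MSR bits):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TM      (1ULL << 32)          /* TM available */
#define MSR_TS_MASK (3ULL << 33)          /* TS field */

/* With CONFIG_PPC_TRANSACTIONAL_MEM, TM and TS pass from the guest MSR
 * into the shadow MSR; without it, they are filtered out. */
static uint64_t shadow_tm_bits(uint64_t guest_msr, int tm_configured)
{
	uint64_t mask = tm_configured ? (MSR_TM | MSR_TS_MASK) : 0;

	return guest_msr & mask;
}
```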


* [PATCH 07/26] KVM: PPC: Book3S PR: add TEXASR related macros
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch adds some macros for CR0/TEXASR bits so that the PR KVM TM
logic (tbegin./treclaim./tabort.) can make use of them later.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/reg.h              | 21 ++++++++++++++++++++-
 arch/powerpc/platforms/powernv/copy-paste.h |  3 +--
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index b779f3c..6c293bc 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -146,6 +146,12 @@
 #define MSR_64BIT	0
 #endif
 
+/* Condition Register related */
+#define CR0_SHIFT	28
+#define CR0_MASK	0xF
+#define CR0_TBEGIN_FAILURE	(0x2 << 28) /* 0b0010 */
+
+
 /* Power Management - Processor Stop Status and Control Register Fields */
 #define PSSCR_RL_MASK		0x0000000F /* Requested Level */
 #define PSSCR_MTL_MASK		0x000000F0 /* Maximum Transition Level */
@@ -237,8 +243,21 @@
 #define SPRN_TFIAR	0x81	/* Transaction Failure Inst Addr   */
 #define SPRN_TEXASR	0x82	/* Transaction EXception & Summary */
 #define SPRN_TEXASRU	0x83	/* ''	   ''	   ''	 Upper 32  */
-#define   TEXASR_FS	__MASK(63-36) /* TEXASR Failure Summary */
+#define TEXASR_FC_LG	(63 - 7)	/* Failure Code */
+#define TEXASR_HV_LG	(63 - 34)	/* Hypervisor state */
+#define TEXASR_PR_LG	(63 - 35)	/* Privilege level */
+#define TEXASR_FS_LG	(63 - 36)	/* Failure summary */
+#define TEXASR_EX_LG	(63 - 37)	/* TFIAR exact bit */
+#define TEXASR_ROT_LG	(63 - 38)	/* ROT bit */
+#define TEXASR_FC	(ASM_CONST(0xFF) << TEXASR_FC_LG)
+#define TEXASR_HV	__MASK(TEXASR_HV_LG)
+#define TEXASR_PR	__MASK(TEXASR_PR_LG)
+#define TEXASR_FS	__MASK(TEXASR_FS_LG)
+#define TEXASR_EX	__MASK(TEXASR_EX_LG)
+#define TEXASR_ROT	__MASK(TEXASR_ROT_LG)
+
 #define SPRN_TFHAR	0x80	/* Transaction Failure Handler Addr */
+
 #define SPRN_TIDR	144	/* Thread ID register */
 #define SPRN_CTRLF	0x088
 #define SPRN_CTRLT	0x098
diff --git a/arch/powerpc/platforms/powernv/copy-paste.h b/arch/powerpc/platforms/powernv/copy-paste.h
index c9a5036..3fa62de 100644
--- a/arch/powerpc/platforms/powernv/copy-paste.h
+++ b/arch/powerpc/platforms/powernv/copy-paste.h
@@ -7,9 +7,8 @@
  * 2 of the License, or (at your option) any later version.
  */
 #include <asm/ppc-opcode.h>
+#include <asm/reg.h>
 
-#define CR0_SHIFT	28
-#define CR0_MASK	0xF
 /*
  * Copy/paste instructions:
  *
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
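The `(63 - n)` idiom above converts the ISA's big-endian bit numbers into shift counts for the low-order-bit-0 convention. A quick check of the resulting values (`__MASK` reproduced here as for 64-bit builds; the `texasr_failure_code()` helper is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define __MASK(X)    (1ULL << (X))
#define TEXASR_FC_LG (63 - 7)             /* Failure Code, ISA bits 0:7 */
#define TEXASR_FS_LG (63 - 36)            /* Failure Summary */
#define TEXASR_FC    (0xFFULL << TEXASR_FC_LG)
#define TEXASR_FS    __MASK(TEXASR_FS_LG)

/* Extract the 8-bit failure code from a TEXASR value. */
static unsigned int texasr_failure_code(uint64_t texasr)
{
	return (unsigned int)((texasr & TEXASR_FC) >> TEXASR_FC_LG);
}
```

ISA bit 36 thus becomes shift 27, and the FC field (ISA bits 0:7) occupies the top byte of the register.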


* [PATCH 08/26] KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state guest
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

MSR TS bits can be modified by non-privileged instructions like
tbegin./tend.  That means the guest can change the MSR value "silently"
without notifying the host.

It is necessary to sync the TM bits to the host so that the host can
calculate the shadow MSR correctly.

Note that a privileged guest will always fail transactions, so we only
take care of problem-state guests.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_pr.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 4e9acdd..7ec866a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -977,6 +977,9 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 {
 	int r = RESUME_HOST;
 	int s;
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	ulong old_msr = kvmppc_get_msr(vcpu);
+#endif
 
 	vcpu->stat.sum_exits++;
 
@@ -988,6 +991,28 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	trace_kvm_exit(exit_nr, vcpu);
 	guest_exit();
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * Unlike other MSR bits, MSR[TS] bits can be changed in the guest
+	 * without notifying the host: they are modified by unprivileged
+	 * instructions like "tbegin"/"tend"/"tresume"/"tsuspend" in a
+	 * PR KVM guest.
+	 *
+	 * It is necessary to sync here to calculate a correct shadow_msr.
+	 *
+	 * A privileged guest's tbegin will fail at present, so we only
+	 * take care of problem-state guests.
+	 */
+	if (unlikely((old_msr & MSR_PR) &&
+		(vcpu->arch.shadow_srr1 & (MSR_TS_MASK)) !=
+				(old_msr & (MSR_TS_MASK)))) {
+		old_msr &= ~(MSR_TS_MASK);
+		old_msr |= (vcpu->arch.shadow_srr1 & (MSR_TS_MASK));
+		kvmppc_set_msr_fast(vcpu, old_msr);
+		kvmppc_recalc_shadow_msr(vcpu);
+	}
+#endif
+
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
 	{
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
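The exit-path sync above can be sketched as a pure function (an illustrative userspace sketch; bit positions assumed from arch/powerpc/include/asm/reg.h, helper name made up here — the real code then calls kvmppc_set_msr_fast() and kvmppc_recalc_shadow_msr()):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_PR      (1ULL << 14)
#define MSR_TS_MASK (3ULL << 33)

/* For a problem-state guest, propagate TS bits that the guest changed
 * silently (captured in shadow_srr1 on exit) back into its MSR. */
static uint64_t sync_ts_on_exit(uint64_t guest_msr, uint64_t shadow_srr1)
{
	if ((guest_msr & MSR_PR) &&
	    (shadow_srr1 & MSR_TS_MASK) != (guest_msr & MSR_TS_MASK)) {
		guest_msr &= ~MSR_TS_MASK;
		guest_msr |= shadow_srr1 & MSR_TS_MASK;
	}
	return guest_msr;
}
```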


* [PATCH 09/26] KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change from S0 to N0
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

According to the ISA specification for RFID, in the MSR TM disabled and
TS suspended state (S0), if the target MSR is TM disabled and the TS
state is inactive (N0), rfid should suppress this update.

This patch makes the RFID emulation of PR KVM consistent with this.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_emulate.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 68d6898..2eb457b 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -117,11 +117,28 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	case 19:
 		switch (get_xop(inst)) {
 		case OP_19_XOP_RFID:
-		case OP_19_XOP_RFI:
+		case OP_19_XOP_RFI: {
+			unsigned long srr1 = kvmppc_get_srr1(vcpu);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+			unsigned long cur_msr = kvmppc_get_msr(vcpu);
+
+			/*
+			 * Add rules to fit the ISA specification regarding TM
+			 * state transition: when in the TM disabled/suspended
+			 * state and the target TM state is inactive (00),
+			 * the change should be suppressed.
+			 */
+			if (((cur_msr & MSR_TM) == 0) &&
+				((srr1 & MSR_TM) == 0) &&
+				MSR_TM_SUSPENDED(cur_msr) &&
+				!MSR_TM_ACTIVE(srr1))
+				srr1 |= MSR_TS_S;
+#endif
 			kvmppc_set_pc(vcpu, kvmppc_get_srr0(vcpu));
-			kvmppc_set_msr(vcpu, kvmppc_get_srr1(vcpu));
+			kvmppc_set_msr(vcpu, srr1);
 			*advance = 0;
 			break;
+		}
 
 		default:
 			emulated = EMULATE_FAIL;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread
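The suppression rule above reduces to a small predicate on the current MSR and SRR1 (an illustrative userspace sketch; bit positions and the MSR_TM_SUSPENDED/MSR_TM_ACTIVE definitions assumed from arch/powerpc/include/asm/reg.h, helper name made up here):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TM      (1ULL << 32)
#define MSR_TS_S    (1ULL << 33)          /* suspended */
#define MSR_TS_MASK (3ULL << 33)
#define MSR_TM_SUSPENDED(msr) (((msr) & MSR_TS_MASK) == MSR_TS_S)
#define MSR_TM_ACTIVE(msr)    (((msr) & MSR_TS_MASK) != 0)

/* MSR actually installed by the emulated rfid: an S0 -> N0 transition
 * is suppressed, so the TS field stays suspended. */
static uint64_t rfid_target_msr(uint64_t cur_msr, uint64_t srr1)
{
	if (((cur_msr & MSR_TM) == 0) && ((srr1 & MSR_TM) == 0) &&
	    MSR_TM_SUSPENDED(cur_msr) && !MSR_TM_ACTIVE(srr1))
		srr1 |= MSR_TS_S;
	return srr1;
}
```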


* [PATCH 10/26] KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Apple G5 machines (PPC970/FX/GX/MP) have supervisor mode disabled, and
the MSR HV bit is forced to 1. We should follow this in the PR KVM
guest.

This patch sets MSR HV=1 for G5 machines and HV=0 for others in the PR
KVM guest.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_pr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 7ec866a..b2f7566 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -320,6 +320,7 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 {
 	ulong guest_msr = kvmppc_get_msr(vcpu);
 	ulong smsr = guest_msr;
+	u32 guest_pvr = vcpu->arch.pvr;
 
 	/* Guest MSR values */
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
@@ -334,7 +335,16 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 	smsr |= (guest_msr & vcpu->arch.guest_owned_ext);
 	/* 64-bit Process MSR values */
 #ifdef CONFIG_PPC_BOOK3S_64
-	smsr |= MSR_ISF | MSR_HV;
+	smsr |= MSR_ISF;
+
+	/* The PPC970's HV bit is hard-wired to 1. For all other
+	 * chips, the HV bit should be cleared.
+	 */
+	if ((PVR_VER(guest_pvr) == PVR_970) ||
+	    (PVR_VER(guest_pvr) == PVR_970FX) ||
+	    (PVR_VER(guest_pvr) == PVR_970MP) ||
+	    (PVR_VER(guest_pvr) == PVR_970GX))
+		smsr |= MSR_HV;
 #endif
 	vcpu->arch.shadow_msr = smsr;
 }
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 11/26] KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr()
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

A PR KVM host usually runs with TM enabled in its host MSR value, and
with the TS bits in the non-transactional state.

When a guest with TM active traps into the PR KVM host, the rfid at
the tail of kvmppc_interrupt_pr() will try to switch the TS bits from
S0 (Suspended & TM disabled) to N1 (Non-transactional & TM enabled).

That leads to a TM Bad Thing interrupt.

This patch manually keeps the target TS bits unchanged to avoid this
exception.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_segment.S | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 2a2b96d..675e9a2 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -383,6 +383,19 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	 */
 
 	PPC_LL	r6, HSTATE_HOST_MSR(r13)
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * We don't want to change MSR[TS] bits via rfi here.
+	 * The actual TM handling logic will be in host with
+	 * recovered DR/IR bits after HSTATE_VMHANDLER.
+	 * And MSR_TM can be enabled in HOST_MSR so rfid may
+	 * not suppress this change and can lead to exception.
+	 * Manually set MSR to prevent TS state change here.
+	 */
+	mfmsr   r7
+	rldicl  r7, r7, 64 - MSR_TS_S_LG, 62
+	rldimi  r6, r7, MSR_TS_S_LG, 63 - MSR_TS_T_LG
+#endif
 	PPC_LL	r8, HSTATE_VMHANDLER(r13)
 
 #ifdef CONFIG_PPC64
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 12/26] powerpc: export symbol msr_check_and_set().
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

PR KVM will need to reuse msr_check_and_set().
This patch exports this API for reuse.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kernel/process.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 72be0c3..8f430e6 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -152,6 +152,7 @@ unsigned long msr_check_and_set(unsigned long bits)
 
 	return newmsr;
 }
+EXPORT_SYMBOL_GPL(msr_check_and_set);
 
 void __msr_check_and_clear(unsigned long bits)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch adds two new APIs: kvmppc_copyto_vcpu_tm() and
kvmppc_copyfrom_vcpu_tm(). They copy TM checkpointed data between the
vcpu's _tm (checkpointed) fields and its regular register area.

PR KVM will use these APIs for treclaim. and trechkpt. emulation.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_emulate.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 2eb457b..e096d01 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -87,6 +87,45 @@ static bool spr_allowed(struct kvm_vcpu *vcpu, enum priv_level level)
 	return true;
 }
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu)
+{
+	memcpy(&vcpu->arch.gpr_tm[0], &vcpu->arch.gpr[0],
+			sizeof(vcpu->arch.gpr_tm));
+	memcpy(&vcpu->arch.fp_tm, &vcpu->arch.fp,
+			sizeof(struct thread_fp_state));
+	memcpy(&vcpu->arch.vr_tm, &vcpu->arch.vr,
+			sizeof(struct thread_vr_state));
+	vcpu->arch.ppr_tm = vcpu->arch.ppr;
+	vcpu->arch.dscr_tm = vcpu->arch.dscr;
+	vcpu->arch.amr_tm = vcpu->arch.amr;
+	vcpu->arch.ctr_tm = vcpu->arch.ctr;
+	vcpu->arch.tar_tm = vcpu->arch.tar;
+	vcpu->arch.lr_tm = vcpu->arch.lr;
+	vcpu->arch.cr_tm = vcpu->arch.cr;
+	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
+}
+
+void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
+{
+	memcpy(&vcpu->arch.gpr[0], &vcpu->arch.gpr_tm[0],
+			sizeof(vcpu->arch.gpr));
+	memcpy(&vcpu->arch.fp, &vcpu->arch.fp_tm,
+			sizeof(struct thread_fp_state));
+	memcpy(&vcpu->arch.vr, &vcpu->arch.vr_tm,
+			sizeof(struct thread_vr_state));
+	vcpu->arch.ppr = vcpu->arch.ppr_tm;
+	vcpu->arch.dscr = vcpu->arch.dscr_tm;
+	vcpu->arch.amr = vcpu->arch.amr_tm;
+	vcpu->arch.ctr = vcpu->arch.ctr_tm;
+	vcpu->arch.tar = vcpu->arch.tar_tm;
+	vcpu->arch.lr = vcpu->arch.lr_tm;
+	vcpu->arch.cr = vcpu->arch.cr_tm;
+	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
+}
+
+#endif
+
 int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			      unsigned int inst, int *advance)
 {
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 14/26] KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch exports the tm_enable()/tm_disable()/tm_abort() APIs, which
will be used by the PR KVM transactional memory logic.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/asm-prototypes.h |  3 +++
 arch/powerpc/include/asm/tm.h             |  2 --
 arch/powerpc/kernel/tm.S                  | 12 ++++++++++++
 arch/powerpc/mm/hash_utils_64.c           |  1 +
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 9c3b290..2a0f54e 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -133,4 +133,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
 #endif
 
+void tm_enable(void);
+void tm_disable(void);
+void tm_abort(uint8_t cause);
 #endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */
diff --git a/arch/powerpc/include/asm/tm.h b/arch/powerpc/include/asm/tm.h
index b1658c9..e94f6db 100644
--- a/arch/powerpc/include/asm/tm.h
+++ b/arch/powerpc/include/asm/tm.h
@@ -10,12 +10,10 @@
 
 #ifndef __ASSEMBLY__
 
-extern void tm_enable(void);
 extern void tm_reclaim(struct thread_struct *thread,
 		       uint8_t cause);
 extern void tm_reclaim_current(uint8_t cause);
 extern void tm_recheckpoint(struct thread_struct *thread);
-extern void tm_abort(uint8_t cause);
 extern void tm_save_sprs(struct thread_struct *thread);
 extern void tm_restore_sprs(struct thread_struct *thread);
 
diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
index b92ac8e..ff12f47 100644
--- a/arch/powerpc/kernel/tm.S
+++ b/arch/powerpc/kernel/tm.S
@@ -12,6 +12,7 @@
 #include <asm/ptrace.h>
 #include <asm/reg.h>
 #include <asm/bug.h>
+#include <asm/export.h>
 
 #ifdef CONFIG_VSX
 /* See fpu.S, this is borrowed from there */
@@ -55,6 +56,16 @@ _GLOBAL(tm_enable)
 	or	r4, r4, r3
 	mtmsrd	r4
 1:	blr
+EXPORT_SYMBOL_GPL(tm_enable);
+
+_GLOBAL(tm_disable)
+	mfmsr	r4
+	li	r3, MSR_TM >> 32
+	sldi	r3, r3, 32
+	andc	r4, r4, r3
+	mtmsrd	r4
+	blr
+EXPORT_SYMBOL_GPL(tm_disable);
 
 _GLOBAL(tm_save_sprs)
 	mfspr	r0, SPRN_TFHAR
@@ -78,6 +89,7 @@ _GLOBAL(tm_restore_sprs)
 _GLOBAL(tm_abort)
 	TABORT(R3)
 	blr
+EXPORT_SYMBOL_GPL(tm_abort);
 
 /* void tm_reclaim(struct thread_struct *thread,
  *		   uint8_t cause)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 655a5a9..d354de6 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -63,6 +63,7 @@
 #include <asm/trace.h>
 #include <asm/ps3.h>
 #include <asm/pte-walk.h>
+#include <asm/asm-prototypes.h>
 
 #ifdef DEBUG
 #define DBG(fmt...) udbg_printf(fmt)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 15/26] KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch adds 2 new APIs kvmppc_save_tm_sprs()/kvmppc_restore_tm_sprs()
for the purpose of TEXASR/TFIAR/TFHAR save/restore.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/kvm/book3s_pr.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index b2f7566..5224b3c 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -42,6 +42,7 @@
 #include <linux/highmem.h>
 #include <linux/module.h>
 #include <linux/miscdevice.h>
+#include <asm/asm-prototypes.h>
 
 #include "book3s.h"
 
@@ -235,6 +236,27 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 	preempt_enable();
 }
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
+{
+	tm_enable();
+	vcpu->arch.tfhar = mfspr(SPRN_TFHAR);
+	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
+	vcpu->arch.tfiar = mfspr(SPRN_TFIAR);
+	tm_disable();
+}
+
+static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
+{
+	tm_enable();
+	mtspr(SPRN_TFHAR, vcpu->arch.tfhar);
+	mtspr(SPRN_TEXASR, vcpu->arch.texasr);
+	mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
+	tm_disable();
+}
+
+#endif
+
 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
 {
 	int r = 1; /* Indicate we want to get back into the guest */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

The transactional memory checkpoint area save/restore is triggered
when the VCPU's qemu process is switched out of/onto a CPU, i.e. at
kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().

The MSR TM active state is determined by the TS bits:
    active: 10 (transactional) or 01 (suspended)
    inactive: 00 (non-transactional)
We don't "fake" TM functionality for the guest. We "sync" the guest's
virtual MSR TM active state (10 or 01) with the shadow MSR. That is
to say, we don't emulate a transactional guest with a TM inactive MSR.

TM SPR support (TFIAR/TFHAR/TEXASR) has already been provided by
commit 9916d57e64a4 ("KVM: PPC: Book3S PR: Expose TM registers").
Math register support (FPR/VMX/VSX) will be added in a subsequent
patch.

- TM save:
When kvmppc_save_tm_pr() is invoked, whether the TM context needs to
be saved can be determined from the current host MSR state:
	* TM active - save the full TM context
	* TM inactive - only save the TM SPRs.

- TM restore:
However, when kvmppc_restore_tm_pr() is invoked, there is an issue in
determining whether a TM restore should be performed: the TM-active
host MSR value saved on the kernel stack has not been loaded yet, so
at kvmppc_restore_tm_pr() we don't know from the current host MSR TM
status whether there is a transaction to be restored. To solve this,
we save the current MSR into vcpu->arch.save_msr_tm in
kvmppc_save_tm_pr(), and kvmppc_restore_tm_pr() checks the TS bits of
vcpu->arch.save_msr_tm to decide whether to do the TM restore.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |  6 +++++
 arch/powerpc/include/asm/kvm_host.h   |  1 +
 arch/powerpc/kvm/book3s_pr.c          | 41 +++++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9a66700..d8dbfa5 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -253,6 +253,12 @@ extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
 				 struct kvm_vcpu *vcpu);
 extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 				   struct kvmppc_book3s_shadow_vcpu *svcpu);
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
+void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
+#endif
+
 extern int kvm_irq_bypass;
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3aa5b57..eb3b821 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -627,6 +627,7 @@ struct kvm_vcpu_arch {
 	struct thread_vr_state vr_tm;
 	u32 vrsave_tm; /* also USPRG0 */
 
+	u64 save_msr_tm; /* TS bits: whether TM restore is required */
 #endif
 
 #ifdef CONFIG_KVM_EXIT_TIMING
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5224b3c..eef0928 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -43,6 +43,7 @@
 #include <linux/module.h>
 #include <linux/miscdevice.h>
 #include <asm/asm-prototypes.h>
+#include <asm/tm.h>
 
 #include "book3s.h"
 
@@ -114,6 +115,9 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
 
 	if (kvmppc_is_split_real(vcpu))
 		kvmppc_fixup_split_real(vcpu);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	kvmppc_restore_tm_pr(vcpu);
+#endif
 }
 
 static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
@@ -131,6 +135,10 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
 	if (kvmppc_is_split_real(vcpu))
 		kvmppc_unfixup_split_real(vcpu);
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	kvmppc_save_tm_pr(vcpu);
+#endif
+
 	kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
 	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
 
@@ -255,6 +263,39 @@ static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
 	tm_disable();
 }
 
+void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * When kvmppc_save_tm_pr() is invoked, whether the TM context
+	 * needs to be saved can be determined from the current MSR TS
+	 * active state.
+	 *
+	 * We save the current MSR's TM TS bits into vcpu->arch.save_msr_tm
+	 * so that kvmppc_restore_tm_pr() can decide whether to do the
+	 * TM restore based on that.
+	 */
+	vcpu->arch.save_msr_tm = mfmsr();
+
+	if (!(MSR_TM_ACTIVE(vcpu->arch.save_msr_tm))) {
+		kvmppc_save_tm_sprs(vcpu);
+		return;
+	}
+
+	preempt_disable();
+	_kvmppc_save_tm_pr(vcpu, mfmsr());
+	preempt_enable();
+}
+
+void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu)
+{
+	if (!MSR_TM_ACTIVE(vcpu->arch.save_msr_tm)) {
+		kvmppc_restore_tm_sprs(vcpu);
+		return;
+	}
+
+	preempt_disable();
+	_kvmppc_restore_tm_pr(vcpu, vcpu->arch.save_msr_tm);
+	preempt_enable();
+}
 #endif
 
 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM
@ 2018-01-11 10:11   ` wei.guo.simon
  0 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

The transaction memory checkpoint area save/restore behavior is
triggered when VCPU qemu process is switching out/into CPU. ie.
at kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().

MSR TM active state is determined by TS bits:
    active: 10(transactional) or 01 (suspended)
    inactive: 00 (non-transactional)
We don't "fake" TM functionality for guest. We "sync" guest virtual
MSR TM active state(10 or 01) with shadow MSR. That is to say,
we don't emulate a transactional guest with a TM inactive MSR.

TM SPR support(TFIAR/TFAR/TEXASR) has already been supported by
commit 9916d57e64a4 ("KVM: PPC: Book3S PR: Expose TM registers").
Math register support (FPR/VMX/VSX) will be done at subsequent
patch.

- TM save:
When kvmppc_save_tm_pr() is invoked, whether TM context need to
be saved can be determined by current host MSR state:
	* TM active - save TM context
	* TM inactive - no need to do so and only save TM SPRs.

- TM restore:
However when kvmppc_restore_tm_pr() is invoked, there is an
issue to determine whether TM restore should be performed.
The TM active host MSR val saved in kernel stack is not loaded yet.
We don't know whether there is a transaction to be restored from
current host MSR TM status at kvmppc_restore_tm_pr(). To solve this
issue, we save current MSR into vcpu->arch.save_msr_tm at
kvmppc_save_tm_pr(), and kvmppc_restore_tm_pr() check TS bits of
vcpu->arch.save_msr_tm to decide whether to do TM restore.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |  6 +++++
 arch/powerpc/include/asm/kvm_host.h   |  1 +
 arch/powerpc/kvm/book3s_pr.c          | 41 +++++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9a66700..d8dbfa5 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -253,6 +253,12 @@ extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
 				 struct kvm_vcpu *vcpu);
 extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 				   struct kvmppc_book3s_shadow_vcpu *svcpu);
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
+void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
+#endif
+
 extern int kvm_irq_bypass;
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3aa5b57..eb3b821 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -627,6 +627,7 @@ struct kvm_vcpu_arch {
 	struct thread_vr_state vr_tm;
 	u32 vrsave_tm; /* also USPRG0 */
 
+	u64 save_msr_tm; /* TS bits: whether TM restore is required */
 #endif
 
 #ifdef CONFIG_KVM_EXIT_TIMING
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5224b3c..eef0928 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -43,6 +43,7 @@
 #include <linux/module.h>
 #include <linux/miscdevice.h>
 #include <asm/asm-prototypes.h>
+#include <asm/tm.h>
 
 #include "book3s.h"
 
@@ -114,6 +115,9 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, int cpu)
 
 	if (kvmppc_is_split_real(vcpu))
 		kvmppc_fixup_split_real(vcpu);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	kvmppc_restore_tm_pr(vcpu);
+#endif
 }
 
 static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
@@ -131,6 +135,10 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
 	if (kvmppc_is_split_real(vcpu))
 		kvmppc_unfixup_split_real(vcpu);
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	kvmppc_save_tm_pr(vcpu);
+#endif
+
 	kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
 	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
 
@@ -255,6 +263,39 @@ static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
 	tm_disable();
 }
 
+void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * When kvmppc_save_tm_pr() is invoked, whether the TM context needs
+	 * to be saved can be determined from the current MSR TS state.
+	 *
+	 * We save the current MSR's TM TS bits into vcpu->arch.save_msr_tm,
+	 * so that kvmppc_restore_tm_pr() can decide whether to do the TM
+	 * restore based on that.
+	 */
+	vcpu->arch.save_msr_tm = mfmsr();
+
+	if (!(MSR_TM_ACTIVE(vcpu->arch.save_msr_tm))) {
+		kvmppc_save_tm_sprs(vcpu);
+		return;
+	}
+
+	preempt_disable();
+	_kvmppc_save_tm_pr(vcpu, mfmsr());
+	preempt_enable();
+}
+
+void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu)
+{
+	if (!MSR_TM_ACTIVE(vcpu->arch.save_msr_tm)) {
+		kvmppc_restore_tm_sprs(vcpu);
+		return;
+	}
+
+	preempt_disable();
+	_kvmppc_restore_tm_pr(vcpu, vcpu->arch.save_msr_tm);
+	preempt_enable();
+}
 #endif
 
 static int kvmppc_core_check_requests_pr(struct kvm_vcpu *vcpu)
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 17/26] KVM: PPC: Book3S PR: add math support for PR KVM HTM
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

The math registers will be saved into vcpu->arch.fp/vr and the
corresponding vcpu->arch.fp_tm/vr_tm areas.

We flush or give up the math regs into vcpu->arch.fp/vr before saving
the transaction. After the transaction is restored, the math regs will
be loaded back into the registers.

If a FP/VEC/VSX unavailable exception occurs while a transaction is
active, the checkpointed math content might be incorrect, and we need
to do a treclaim., load the correct checkpoint values, then trechkpt.
sequence to retry the transaction.

If a transaction is active while the qemu process is switched off the
CPU, we need to keep the "guest_owned_ext" bits unchanged after the
qemu process is switched back in. The reason is that if guest_owned_ext
were allowed to change freely during a transaction, there would not be
enough information to handle a FP/VEC/VSX unavailable exception while
the transaction is active.

In detail, assume we allowed math bits to be given up freely during a
transaction:
- If it is the first FP unavailable exception after tbegin.,
vcpu->arch.fp/vr would need to be loaded for trechkpt.
- If it is the 2nd or a subsequent FP unavailable exception after
tbegin., vcpu->arch.fp_tm/vr_tm would need to be loaded for trechkpt.
Covering both cases would add considerable complexity.

That is why we always save guest_owned_ext into vcpu->arch.save_msr_tm
in kvmppc_save_tm_pr(), and then check those bits in
vcpu->arch.save_msr_tm in kvmppc_restore_tm_pr() to determine what math
contents to load. With this, we always load vcpu->arch.fp/vr in a math
unavailable exception during an active transaction.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_host.h |   4 +-
 arch/powerpc/kvm/book3s_pr.c        | 114 +++++++++++++++++++++++++++++-------
 2 files changed, 95 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index eb3b821..1124c62 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -627,7 +627,9 @@ struct kvm_vcpu_arch {
 	struct thread_vr_state vr_tm;
 	u32 vrsave_tm; /* also USPRG0 */
 
-	u64 save_msr_tm; /* TS bits: whether TM restore is required */
+	u64 save_msr_tm; /* TS bits: whether TM restore is required
+			  * FP/VEC/VSX bits: saved guest_owned_ext
+			  */
 #endif
 
 #ifdef CONFIG_KVM_EXIT_TIMING
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index eef0928..c35bd02 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -55,6 +55,7 @@
 
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 			     ulong msr);
+static int kvmppc_load_ext(struct kvm_vcpu *vcpu, ulong msr);
 static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 
 /* Some compatibility defines */
@@ -280,6 +281,33 @@ void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu)
 		return;
 	}
 
+	/* when we are in transaction active state and switch out of CPU,
+	 * we need to be careful to not "change" guest_owned_ext bits after
+	 * kvmppc_save_tm_pr()/kvmppc_restore_tm_pr() pair. The reason is
+	 * that we need to distinguish following 2 FP/VEC/VSX unavailable
+	 * exception cases in TM active state:
+	 * 1) tbegin. is executed with guest_owned_ext FP/VEC/VSX off. Then
+	 * there comes a FP/VEC/VSX unavailable exception during transaction.
+	 * In this case, the vcpu->arch.fp/vr contents need to be loaded as
+	 * checkpoint contents.
+	 * 2) tbegin. is executed with guest_owned_ext FP/VEC/VSX on. Then
+	 * there is task switch during suspended state. If we giveup ext and
+	 * update guest_owned_ext as no FP/VEC/VSX bits during context switch,
+	 * we need to load vcpu->arch.fp_tm/vr_tm contents as checkpoint
+	 * content.
+	 *
+	 * As a result, we don't change guest_owned_ext bits during
+	 * the kvmppc_save/restore_tm_pr() pair, so that we only ever use
+	 * vcpu->arch.fp/vr contents as checkpoint contents.
+	 * And we need to "save" the guest_owned_ext bits here, which indicate
+	 * which math bits need to be "restored" in kvmppc_restore_tm_pr().
+	 */
+	vcpu->arch.save_msr_tm &= ~(MSR_FP | MSR_VEC | MSR_VSX);
+	vcpu->arch.save_msr_tm |= (vcpu->arch.guest_owned_ext &
+			(MSR_FP | MSR_VEC | MSR_VSX));
+
+	kvmppc_giveup_ext(vcpu, MSR_VSX);
+
 	preempt_disable();
 	_kvmppc_save_tm_pr(vcpu, mfmsr());
 	preempt_enable();
@@ -295,6 +323,16 @@ void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu)
 	preempt_disable();
 	_kvmppc_restore_tm_pr(vcpu, vcpu->arch.save_msr_tm);
 	preempt_enable();
+
+	if (vcpu->arch.save_msr_tm & MSR_VSX)
+		kvmppc_load_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
+	else {
+		if (vcpu->arch.save_msr_tm & MSR_VEC)
+			kvmppc_load_ext(vcpu, MSR_VEC);
+
+		if (vcpu->arch.save_msr_tm & MSR_FP)
+			kvmppc_load_ext(vcpu, MSR_FP);
+	}
 }
 #endif
 
@@ -788,12 +826,41 @@ static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac)
 #endif
 }
 
+static int kvmppc_load_ext(struct kvm_vcpu *vcpu, ulong msr)
+{
+	struct thread_struct *t = &current->thread;
+
+	if (msr & MSR_FP) {
+		preempt_disable();
+		enable_kernel_fp();
+		load_fp_state(&vcpu->arch.fp);
+		disable_kernel_fp();
+		t->fp_save_area = &vcpu->arch.fp;
+		preempt_enable();
+	}
+
+	if (msr & MSR_VEC) {
+#ifdef CONFIG_ALTIVEC
+		preempt_disable();
+		enable_kernel_altivec();
+		load_vr_state(&vcpu->arch.vr);
+		disable_kernel_altivec();
+		t->vr_save_area = &vcpu->arch.vr;
+		preempt_enable();
+#endif
+	}
+
+	t->regs->msr |= msr;
+	vcpu->arch.guest_owned_ext |= msr;
+	kvmppc_recalc_shadow_msr(vcpu);
+
+	return RESUME_GUEST;
+}
+
 /* Handle external providers (FPU, Altivec, VSX) */
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 			     ulong msr)
 {
-	struct thread_struct *t = &current->thread;
-
 	/* When we have paired singles, we emulate in software */
 	if (vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE)
 		return RESUME_GUEST;
@@ -829,31 +896,34 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 	printk(KERN_INFO "Loading up ext 0x%lx\n", msr);
 #endif
 
-	if (msr & MSR_FP) {
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* if TM is active, the checkpointed math content
+	 * might be invalid. We need to reclaim the current
+	 * transaction, load the correct math, and perform
+	 * trechkpt.
+	 */
+	if (MSR_TM_ACTIVE(mfmsr())) {
 		preempt_disable();
-		enable_kernel_fp();
-		load_fp_state(&vcpu->arch.fp);
-		disable_kernel_fp();
-		t->fp_save_area = &vcpu->arch.fp;
-		preempt_enable();
-	}
+		kvmppc_save_tm_pr(vcpu);
+		/* need to update the checkpointed math reg contents,
+		 * so that we can trechkpt. with the desired fp values.
+		 */
+		if (msr & MSR_FP)
+			memcpy(&vcpu->arch.fp_tm, &vcpu->arch.fp,
+					sizeof(struct thread_fp_state));
+
+		if (msr & MSR_VEC) {
+			memcpy(&vcpu->arch.vr_tm, &vcpu->arch.vr,
+					sizeof(struct thread_vr_state));
+			vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
+		}
 
-	if (msr & MSR_VEC) {
-#ifdef CONFIG_ALTIVEC
-		preempt_disable();
-		enable_kernel_altivec();
-		load_vr_state(&vcpu->arch.vr);
-		disable_kernel_altivec();
-		t->vr_save_area = &vcpu->arch.vr;
+		kvmppc_restore_tm_pr(vcpu);
 		preempt_enable();
-#endif
 	}
+#endif
 
-	t->regs->msr |= msr;
-	vcpu->arch.guest_owned_ext |= msr;
-	kvmppc_recalc_shadow_msr(vcpu);
-
-	return RESUME_GUEST;
+	return kvmppc_load_ext(vcpu, msr);
 }
 
 /*
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 18/26] KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on active TM SPRs
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

The mfspr/mtspr instructions on the TM SPRs (TEXASR/TFIAR/TFHAR) are
non-privileged and can be executed by a PR KVM guest in problem state
without trapping into the host. We only emulate mtspr/mfspr on
texasr/tfiar/tfhar when the guest is in PR=0 state.

When we emulate mtspr on TM SPRs at guest PR=0 state, the emulation
result needs to be visible to guest PR=1 state. That is, the actual TM
SPR value should be loaded into the actual registers.

We already flush the TM SPRs into the vcpu when switching out of the
CPU, and load the TM SPRs when switching back.

This patch corrects the mfspr()/mtspr() emulation for TM SPRs so that
the actual hardware TM SPRs are used as the source/destination.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/book3s_emulate.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index e096d01..c2836330 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -521,13 +521,26 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
 		break;
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	case SPRN_TFHAR:
-		vcpu->arch.tfhar = spr_val;
-		break;
 	case SPRN_TEXASR:
-		vcpu->arch.texasr = spr_val;
-		break;
 	case SPRN_TFIAR:
-		vcpu->arch.tfiar = spr_val;
+		if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu))) {
+			/* it is illegal to mtspr() TM regs in any
+			 * state other than non-transactional.
+			 */
+			kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
+			emulated = EMULATE_AGAIN;
+			break;
+		}
+
+		tm_enable();
+		if (sprn == SPRN_TFHAR)
+			mtspr(SPRN_TFHAR, spr_val);
+		else if (sprn == SPRN_TEXASR)
+			mtspr(SPRN_TEXASR, spr_val);
+		else
+			mtspr(SPRN_TFIAR, spr_val);
+		tm_disable();
+
 		break;
 #endif
 #endif
@@ -674,13 +687,19 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
 		break;
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	case SPRN_TFHAR:
-		*spr_val = vcpu->arch.tfhar;
+		tm_enable();
+		*spr_val = mfspr(SPRN_TFHAR);
+		tm_disable();
 		break;
 	case SPRN_TEXASR:
-		*spr_val = vcpu->arch.texasr;
+		tm_enable();
+		*spr_val = mfspr(SPRN_TEXASR);
+		tm_disable();
 		break;
 	case SPRN_TFIAR:
-		*spr_val = vcpu->arch.tfiar;
+		tm_enable();
+		*spr_val = mfspr(SPRN_TFIAR);
+		tm_disable();
 		break;
 #endif
 #endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 19/26] KVM: PPC: Book3S PR: always fail transaction in guest privilege state
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently the kernel doesn't use transactional memory, and there is an
issue for a privileged guest: the tbegin/tsuspend/tresume/tabort TM
instructions can change the MSR TM bits without trapping into the PR
host. So the following code sequence will lead to a false mfmsr
result:
	tbegin	<- MSR bits updated to transaction active.
	beq 	<- failure handler branch
	mfmsr	<- still reads MSR bits from the magic page with
		transaction inactive.

This is not an issue for a non-privileged guest, since its mfmsr is
not patched via the magic page and will always trap into the PR host.

This patch makes every tbegin. attempt from a privileged guest fail,
so that the above issue is prevented. This is benign since currently
the (guest) kernel doesn't initiate transactions.

Test case:
https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |  1 +
 arch/powerpc/kvm/book3s_emulate.c     | 34 ++++++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_pr.c          | 11 ++++++++++-
 3 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index d8dbfa5..524cd82 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -257,6 +257,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
 void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
+void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
 #endif
 
 extern int kvm_irq_bypass;
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index c2836330..1eb1900 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -23,6 +23,7 @@
 #include <asm/reg.h>
 #include <asm/switch_to.h>
 #include <asm/time.h>
+#include <asm/tm.h>
 #include "book3s.h"
 
 #define OP_19_XOP_RFID		18
@@ -47,6 +48,8 @@
 #define OP_31_XOP_EIOIO		854
 #define OP_31_XOP_SLBMFEE	915
 
+#define OP_31_XOP_TBEGIN	654
+
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
@@ -360,6 +363,37 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 			break;
 		}
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+		case OP_31_XOP_TBEGIN:
+		{
+			if (!(kvmppc_get_msr(vcpu) & MSR_PR)) {
+				preempt_disable();
+				vcpu->arch.cr = (CR0_TBEGIN_FAILURE |
+				  (vcpu->arch.cr & ~(CR0_MASK << CR0_SHIFT)));
+
+				vcpu->arch.texasr = (TEXASR_FS | TEXASR_EX |
+					(((u64)(TM_CAUSE_EMULATE | TM_CAUSE_PERSISTENT))
+						 << TEXASR_FC_LG));
+
+				if ((inst >> 21) & 0x1)
+					vcpu->arch.texasr |= TEXASR_ROT;
+
+				if (kvmppc_get_msr(vcpu) & MSR_PR)
+					vcpu->arch.texasr |= TEXASR_PR;
+
+				if (kvmppc_get_msr(vcpu) & MSR_HV)
+					vcpu->arch.texasr |= TEXASR_HV;
+
+				vcpu->arch.tfhar = kvmppc_get_pc(vcpu) + 4;
+				vcpu->arch.tfiar = kvmppc_get_pc(vcpu);
+
+				kvmppc_restore_tm_sprs(vcpu);
+				preempt_enable();
+			} else
+				emulated = EMULATE_FAIL;
+			break;
+		}
+#endif
 		default:
 			emulated = EMULATE_FAIL;
 		}
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index c35bd02..a26f4db 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -255,7 +255,7 @@ static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
 	tm_disable();
 }
 
-static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
+inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
 {
 	tm_enable();
 	mtspr(SPRN_TFHAR, vcpu->arch.tfhar);
@@ -447,6 +447,15 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 	    (PVR_VER(guest_pvr) == PVR_970GX))
 		smsr |= MSR_HV;
 #endif
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * in guest privileged state, we want to fail all TM transactions.
+	 * So disable the MSR TM bit so that every tbegin. can be
+	 * trapped into the host.
+	 */
+	if (!(guest_msr & MSR_PR))
+		smsr &= ~MSR_TM;
+#endif
 	vcpu->arch.shadow_msr = smsr;
 }
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 20/26] KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest privilege state
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently kvmppc_handle_fac() does not update NV GPRs and thus it can
simply return with RESUME_GUEST.

However a PR KVM guest always runs with the MSR_TM bit disabled in
privileged state. If a privileged PR guest reads a TM SPR, it triggers
a TM facility unavailable exception and falls into kvmppc_handle_fac().
The emulation is then done by kvmppc_core_emulate_mfspr_pr(). The mfspr
instruction can target a non-volatile (NV) register as RT, so it is
necessary to restore the NV GPRs in this case to reflect the update to
the NV RT.

This patch makes kvmppc_handle_fac() return RESUME_GUEST_NV for a TM
facility unavailable exception taken in guest privileged state.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/book3s_pr.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index a26f4db..1d105fa 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1030,6 +1030,18 @@ static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac)
 		break;
 	}
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* Since we disabled MSR_TM in privileged state, an mfspr of a
+	 * TM SPR can trigger a TM facility unavailable interrupt. In
+	 * this case the emulation is handled by kvmppc_emulate_fac(),
+	 * which finally invokes kvmppc_emulate_mfspr(). But note that
+	 * mfspr can target an NV register as RT, so those NV regs need
+	 * to be restored here to reflect the update.
+	 */
+	if ((fac == FSCR_TM_LG) && !(kvmppc_get_msr(vcpu) & MSR_PR))
+		return RESUME_GUEST_NV;
+#endif
+
 	return RESUME_GUEST;
 }
 
@@ -1416,8 +1428,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 #ifdef CONFIG_PPC_BOOK3S_64
 	case BOOK3S_INTERRUPT_FAC_UNAVAIL:
-		kvmppc_handle_fac(vcpu, vcpu->arch.shadow_fscr >> 56);
-		r = RESUME_GUEST;
+		r = kvmppc_handle_fac(vcpu, vcpu->arch.shadow_fscr >> 56);
 		break;
 #endif
 	case BOOK3S_INTERRUPT_MACHINE_CHECK:
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 21/26] KVM: PPC: Book3S PR: adds emulation for treclaim.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch adds support for "treclaim." emulation when a PR KVM guest
executes treclaim. and traps into the host.

We firstly perform treclaim. to save the TM checkpoint. Then it is
necessary to update the vcpu's current register content with the
checkpointed values, so that when we rfid back into the guest, that
current register content (now the checkpointed values) is loaded into
the registers.
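
The TEXASR rewrite in the treclaim. emulation below (clear the failure code, record the cause from RA, then mirror PR/HV from the guest MSR) can be sketched in plain C, with the bit layout copied from the reg.h hunk in this patch:

```c
#include <assert.h>
#include <stdint.h>

#define TEXASR_FC_LG	(63 - 7)	/* Failure Code */
#define TEXASR_HV_LG	(63 - 34)	/* Hypervisor state */
#define TEXASR_PR_LG	(63 - 35)	/* Privilege level */
#define TEXASR_FC	(0xFFULL << TEXASR_FC_LG)
#define TEXASR_HV	(1ULL << TEXASR_HV_LG)
#define TEXASR_PR	(1ULL << TEXASR_PR_LG)

/* Mirror of the TEXASR update in kvmppc_emulate_treclaim(): an RA
 * value of 0 is reported as failure code 1, and PR/HV are rewritten
 * from the (new) guest MSR. */
static uint64_t treclaim_texasr(uint64_t texasr, int ra_val,
				int msr_pr, int msr_hv)
{
	int fc_val = ra_val ? ra_val : 1;

	texasr &= ~TEXASR_FC;
	texasr |= (uint64_t)fc_val << TEXASR_FC_LG;

	texasr &= ~(TEXASR_PR | TEXASR_HV);
	if (msr_pr)
		texasr |= TEXASR_PR;
	if (msr_hv)
		texasr |= TEXASR_HV;
	return texasr;
}
```

This is only a sketch of the bit manipulation; the real code additionally reads/writes the SPR under tm_enable()/tm_disable() with preemption off.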

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/reg.h    |  4 +++
 arch/powerpc/kvm/book3s_emulate.c | 66 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 6c293bc..b3bcf6b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -244,12 +244,16 @@
 #define SPRN_TEXASR	0x82	/* Transaction EXception & Summary */
 #define SPRN_TEXASRU	0x83	/* ''	   ''	   ''	 Upper 32  */
 #define TEXASR_FC_LG	(63 - 7)	/* Failure Code */
+#define TEXASR_AB_LG	(63 - 31)	/* Abort */
+#define TEXASR_SU_LG	(63 - 32)	/* Suspend */
 #define TEXASR_HV_LG	(63 - 34)	/* Hypervisor state*/
 #define TEXASR_PR_LG	(63 - 35)	/* Privilege level */
 #define TEXASR_FS_LG	(63 - 36)	/* failure summary */
 #define TEXASR_EX_LG	(63 - 37)	/* TFIAR exact bit */
 #define TEXASR_ROT_LG	(63 - 38)	/* ROT bit */
 #define TEXASR_FC	(ASM_CONST(0xFF) << TEXASR_FC_LG)
+#define TEXASR_AB	__MASK(TEXASR_AB_LG)
+#define TEXASR_SU	__MASK(TEXASR_SU_LG)
 #define TEXASR_HV	__MASK(TEXASR_HV_LG)
 #define TEXASR_PR	__MASK(TEXASR_PR_LG)
 #define TEXASR_FS	__MASK(TEXASR_FS_LG)
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 1eb1900..51c0e20 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -25,6 +25,7 @@
 #include <asm/time.h>
 #include <asm/tm.h>
 #include "book3s.h"
+#include <asm/asm-prototypes.h>
 
 #define OP_19_XOP_RFID		18
 #define OP_19_XOP_RFI		50
@@ -50,6 +51,8 @@
 
 #define OP_31_XOP_TBEGIN	654
 
+#define OP_31_XOP_TRECLAIM	942
+
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
@@ -109,7 +112,7 @@ void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu)
 	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
 }
 
-void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
+static void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
 {
 	memcpy(&vcpu->arch.gpr[0], &vcpu->arch.gpr_tm[0],
 			sizeof(vcpu->arch.gpr));
@@ -127,6 +130,42 @@ void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
 	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
 }
 
+static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
+{
+	unsigned long guest_msr = kvmppc_get_msr(vcpu);
+	int fc_val = ra_val ? ra_val : 1;
+
+	kvmppc_save_tm_pr(vcpu);
+
+	preempt_disable();
+	kvmppc_copyfrom_vcpu_tm(vcpu);
+	preempt_enable();
+
+	/*
+	 * treclaim. needs to quit to the non-transactional state.
+	 */
+	guest_msr &= ~(MSR_TS_MASK);
+	kvmppc_set_msr(vcpu, guest_msr);
+
+	preempt_disable();
+	tm_enable();
+	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
+	vcpu->arch.texasr &= ~TEXASR_FC;
+	vcpu->arch.texasr |= ((u64)fc_val << TEXASR_FC_LG);
+
+	vcpu->arch.texasr &= ~(TEXASR_PR | TEXASR_HV);
+	if (kvmppc_get_msr(vcpu) & MSR_PR)
+		vcpu->arch.texasr |= TEXASR_PR;
+
+	if (kvmppc_get_msr(vcpu) & MSR_HV)
+		vcpu->arch.texasr |= TEXASR_HV;
+
+	vcpu->arch.tfiar = kvmppc_get_pc(vcpu);
+	mtspr(SPRN_TEXASR, vcpu->arch.texasr);
+	mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
+	tm_disable();
+	preempt_enable();
+}
 #endif
 
 int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
@@ -393,6 +432,31 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				emulated = EMULATE_FAIL;
 			break;
 		}
+		case OP_31_XOP_TRECLAIM:
+		{
+			ulong guest_msr = kvmppc_get_msr(vcpu);
+			unsigned long ra_val = 0;
+
+			/* generate interrupt based on priorities */
+			if (guest_msr & MSR_PR) {
+				/* Privileged Instruction type Program Interrupt */
+				kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV);
+				emulated = EMULATE_AGAIN;
+				break;
+			}
+
+			if (!MSR_TM_SUSPENDED(guest_msr)) {
+				/* TM bad thing interrupt */
+				kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
+				emulated = EMULATE_AGAIN;
+				break;
+			}
+
+			if (ra)
+				ra_val = kvmppc_get_gpr(vcpu, ra);
+			kvmppc_emulate_treclaim(vcpu, ra_val);
+			break;
+		}
 #endif
 		default:
 			emulated = EMULATE_FAIL;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 22/26] KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch adds host emulation when a PR KVM guest executes "trechkpt.",
which is a privileged instruction and traps into the host.

We firstly copy the vcpu's ongoing content into the vcpu TM checkpoint
content, then perform kvmppc_restore_tm_pr() to do the trecheckpoint
with the updated vcpu TM checkpoint values.
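
The guest MSR update at the end of the trechkpt. emulation below forces the transaction state to Suspended with a mask-and-set. A sketch of that step, with the MSR TS encoding (TS at bits 33:34, 0b01 = Suspended, 0b10 = Transactional) assumed from the kernel headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed MSR transaction-state bits */
#define MSR_TS_S	(1ULL << 33)	/* Suspended */
#define MSR_TS_T	(1ULL << 34)	/* Transactional */
#define MSR_TS_MASK	(MSR_TS_S | MSR_TS_T)

/* As in kvmppc_emulate_trchkpt(): whatever TS was before, the guest
 * resumes in Suspended state after the trecheckpoint. */
static uint64_t trchkpt_msr(uint64_t guest_msr)
{
	guest_msr &= ~MSR_TS_MASK;
	guest_msr |= MSR_TS_S;
	return guest_msr;
}
```

All other MSR bits pass through untouched; only the two TS bits are rewritten.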

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/book3s_emulate.c | 57 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 51c0e20..52a2e46 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -52,6 +52,7 @@
 #define OP_31_XOP_TBEGIN	654
 
 #define OP_31_XOP_TRECLAIM	942
+#define OP_31_XOP_TRCHKPT	1006
 
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
@@ -94,7 +95,7 @@ static bool spr_allowed(struct kvm_vcpu *vcpu, enum priv_level level)
 }
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu)
+static void kvmppc_copyto_vcpu_tm(struct kvm_vcpu *vcpu)
 {
 	memcpy(&vcpu->arch.gpr_tm[0], &vcpu->arch.gpr[0],
 			sizeof(vcpu->arch.gpr_tm));
@@ -166,6 +167,32 @@ static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
 	tm_disable();
 	preempt_enable();
 }
+
+static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
+{
+	unsigned long guest_msr = kvmppc_get_msr(vcpu);
+
+	preempt_disable();
+	vcpu->arch.save_msr_tm = MSR_TS_S;
+	vcpu->arch.save_msr_tm &= ~(MSR_FP | MSR_VEC | MSR_VSX);
+	vcpu->arch.save_msr_tm |= (vcpu->arch.guest_owned_ext &
+			(MSR_FP | MSR_VEC | MSR_VSX));
+	/*
+	 * We need to flush FP/VEC/VSX to the vcpu save area
+	 * before the copy.
+	 */
+	kvmppc_giveup_ext(vcpu, MSR_VSX);
+	kvmppc_copyto_vcpu_tm(vcpu);
+	kvmppc_restore_tm_pr(vcpu);
+	preempt_enable();
+
+	/*
+	 * As a result of trecheckpoint., set TS to suspended.
+	 */
+	guest_msr &= ~(MSR_TS_MASK);
+	guest_msr |= MSR_TS_S;
+	kvmppc_set_msr(vcpu, guest_msr);
+}
 #endif
 
 int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
@@ -457,6 +484,34 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			kvmppc_emulate_treclaim(vcpu, ra_val);
 			break;
 		}
+		case OP_31_XOP_TRCHKPT:
+		{
+			ulong guest_msr = kvmppc_get_msr(vcpu);
+			unsigned long texasr;
+
+			/* generate interrupt based on priorities */
+			if (guest_msr & MSR_PR) {
+				/* Privileged Instruction type Program Intr */
+				kvmppc_core_queue_program(vcpu, SRR1_PROGPRIV);
+				emulated = EMULATE_AGAIN;
+				break;
+			}
+
+			tm_enable();
+			texasr = mfspr(SPRN_TEXASR);
+			tm_disable();
+
+			if (MSR_TM_ACTIVE(guest_msr) ||
+				!(texasr & (TEXASR_FS))) {
+				/* TM bad thing interrupt */
+				kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
+				emulated = EMULATE_AGAIN;
+				break;
+			}
+
+			kvmppc_emulate_trchkpt(vcpu);
+			break;
+		}
 #endif
 		default:
 			emulated = EMULATE_FAIL;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 23/26] KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently a privileged guest runs with TM disabled.

Although a privileged guest cannot initiate a new transaction, it can
use tabort. to terminate its problem state's transaction, so it is
still necessary to emulate tabort. for a privileged guest.

This patch adds tabort. emulation for the privileged guest case.

Tested with:
https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |  1 +
 arch/powerpc/kvm/book3s_emulate.c     | 31 +++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_pr.c          |  2 +-
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 524cd82..8bd454c 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -258,6 +258,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
 void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
 void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
+void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu);
 #endif
 
 extern int kvm_irq_bypass;
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 52a2e46..65eb236 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -50,6 +50,7 @@
 #define OP_31_XOP_SLBMFEE	915
 
 #define OP_31_XOP_TBEGIN	654
+#define OP_31_XOP_TABORT	910
 
 #define OP_31_XOP_TRECLAIM	942
 #define OP_31_XOP_TRCHKPT	1006
@@ -193,6 +194,19 @@ static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
 	guest_msr |= MSR_TS_S;
 	kvmppc_set_msr(vcpu, guest_msr);
 }
+
+/* emulate tabort. at guest privilege state */
+static void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
+{
+	/* Currently we only emulate tabort., with no emulation of the
+	 * other tabort variants, since there is no kernel usage of
+	 * them at present.
+	 */
+	tm_enable();
+	tm_abort(ra_val);
+	tm_disable();
+}
+
 #endif
 
 int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
@@ -459,6 +473,23 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				emulated = EMULATE_FAIL;
 			break;
 		}
+		case OP_31_XOP_TABORT:
+		{
+			ulong guest_msr = kvmppc_get_msr(vcpu);
+			unsigned long ra_val = 0;
+
+			/* Only emulate for a privileged guest, since a problem
+			 * state guest can run with TM enabled and we don't
+			 * expect to trap here in that case.
+			 */
+			WARN_ON(guest_msr & MSR_PR);
+
+			if (ra)
+				ra_val = kvmppc_get_gpr(vcpu, ra);
+
+			kvmppc_emulate_tabort(vcpu, ra_val);
+			break;
+		}
 		case OP_31_XOP_TRECLAIM:
 		{
 			ulong guest_msr = kvmppc_get_msr(vcpu);
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 1d105fa..f65415b 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -246,7 +246,7 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 }
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
+inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
 {
 	tm_enable();
 	vcpu->arch.tfhar = mfspr(SPRN_TFHAR);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 24/26] KVM: PPC: Book3S PR: add guard code to prevent returning to guest with PR=0 and Transactional state
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently PR KVM doesn't support transactional memory in guest
privileged state.

This patch adds a check when setting the guest MSR, so that we can
never return to the guest with PR=0 and TS=0b10 (Transactional). A
tabort. is emulated to indicate this and fail the transaction
immediately.
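
The guard condition can be expressed as a small predicate: a tabort. is only forced when the target MSR is both privileged (PR=0) and Transactional (TS=0b10). A sketch with the MSR bit positions assumed from the kernel headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed MSR bit positions */
#define MSR_PR		(1ULL << 14)	/* Problem state */
#define MSR_TS_S	(1ULL << 33)	/* Suspended */
#define MSR_TS_T	(1ULL << 34)	/* Transactional */
#define MSR_TS_MASK	(MSR_TS_S | MSR_TS_T)
#define MSR_TM_TRANSACTIONAL(x)	(((x) & MSR_TS_MASK) == MSR_TS_T)

/* True when kvmppc_set_msr_pr() must emulate a tabort. before
 * switching to the new MSR value. */
static int needs_guard_tabort(uint64_t msr)
{
	return !(msr & MSR_PR) && MSR_TM_TRANSACTIONAL(msr);
}
```

Note that Suspended state (TS=0b01) with PR=0 is still allowed; only the privileged Transactional combination is rejected.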

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/uapi/asm/tm.h |  2 +-
 arch/powerpc/kvm/book3s.h          |  1 +
 arch/powerpc/kvm/book3s_emulate.c  |  2 +-
 arch/powerpc/kvm/book3s_pr.c       | 13 ++++++++++++-
 4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/tm.h b/arch/powerpc/include/uapi/asm/tm.h
index e1bf0e2..e2947c9 100644
--- a/arch/powerpc/include/uapi/asm/tm.h
+++ b/arch/powerpc/include/uapi/asm/tm.h
@@ -13,7 +13,7 @@
 #define TM_CAUSE_TLBI		0xdc
 #define TM_CAUSE_FAC_UNAV	0xda
 #define TM_CAUSE_SYSCALL	0xd8
-#define TM_CAUSE_MISC		0xd6  /* future use */
+#define TM_CAUSE_PRIV_T		0xd6
 #define TM_CAUSE_SIGNAL		0xd4
 #define TM_CAUSE_ALIGNMENT	0xd2
 #define TM_CAUSE_EMULATE	0xd0
diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
index d2b3ec0..9beb57b 100644
--- a/arch/powerpc/kvm/book3s.h
+++ b/arch/powerpc/kvm/book3s.h
@@ -32,4 +32,5 @@ extern int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu,
 extern int kvmppc_book3s_init_pr(void);
 extern void kvmppc_book3s_exit_pr(void);
 
+extern void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val);
 #endif
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 65eb236..11d76be 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -196,7 +196,7 @@ static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
 }
 
 /* emulate tabort. at guest privilege state */
-static void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
+void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
 {
 	/* currently we only emulate tabort. but no emulation of other
 	 * tabort variants since there is no kernel usage of them at
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index f65415b..cc568bc 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -461,12 +461,23 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 
 static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 {
-	ulong old_msr = kvmppc_get_msr(vcpu);
+	ulong old_msr;
 
 #ifdef EXIT_DEBUG
 	printk(KERN_INFO "KVM: Set MSR to 0x%llx\n", msr);
 #endif
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* We should never set the guest MSR to TS=0b10 with PR=0,
+	 * since we always fail transactions in guest privileged
+	 * state.
+	 */
+	if (!(msr & MSR_PR) && MSR_TM_TRANSACTIONAL(msr))
+		kvmppc_emulate_tabort(vcpu,
+			TM_CAUSE_PRIV_T | TM_CAUSE_PERSISTENT);
+#endif
+
+	old_msr = kvmppc_get_msr(vcpu);
 	msr &= to_book3s(vcpu)->msr_mask;
 	kvmppc_set_msr_fast(vcpu, msr);
 	kvmppc_recalc_shadow_msr(vcpu);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 24/26] KVM: PPC: Book3S PR: add guard code to prevent returning to guest with PR=0 and Transa
@ 2018-01-11 10:11   ` wei.guo.simon
  0 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently PR KVM doesn't support transactional memory in guest privileged
state.

This patch adds a check when setting the guest MSR, so that we never return
to the guest with PR=0 and TS=10 (transactional). In that case a tabort. is
emulated to fail the transaction immediately.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/uapi/asm/tm.h |  2 +-
 arch/powerpc/kvm/book3s.h          |  1 +
 arch/powerpc/kvm/book3s_emulate.c  |  2 +-
 arch/powerpc/kvm/book3s_pr.c       | 13 ++++++++++++-
 4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/tm.h b/arch/powerpc/include/uapi/asm/tm.h
index e1bf0e2..e2947c9 100644
--- a/arch/powerpc/include/uapi/asm/tm.h
+++ b/arch/powerpc/include/uapi/asm/tm.h
@@ -13,7 +13,7 @@
 #define TM_CAUSE_TLBI		0xdc
 #define TM_CAUSE_FAC_UNAV	0xda
 #define TM_CAUSE_SYSCALL	0xd8
-#define TM_CAUSE_MISC		0xd6  /* future use */
+#define TM_CAUSE_PRIV_T		0xd6
 #define TM_CAUSE_SIGNAL		0xd4
 #define TM_CAUSE_ALIGNMENT	0xd2
 #define TM_CAUSE_EMULATE	0xd0
diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
index d2b3ec0..9beb57b 100644
--- a/arch/powerpc/kvm/book3s.h
+++ b/arch/powerpc/kvm/book3s.h
@@ -32,4 +32,5 @@ extern int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu,
 extern int kvmppc_book3s_init_pr(void);
 extern void kvmppc_book3s_exit_pr(void);
 
+extern void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val);
 #endif
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 65eb236..11d76be 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -196,7 +196,7 @@ static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
 }
 
 /* emulate tabort. at guest privilege state */
-static void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
+void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
 {
 	/* currently we only emulate tabort. but no emulation of other
 	 * tabort variants since there is no kernel usage of them at
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index f65415b..cc568bc 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -461,12 +461,23 @@ static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 
 static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 {
-	ulong old_msr = kvmppc_get_msr(vcpu);
+	ulong old_msr;
 
 #ifdef EXIT_DEBUG
 	printk(KERN_INFO "KVM: Set MSR to 0x%llx\n", msr);
 #endif
 
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* We should never target guest MSR to TS=10 && PR=0,
+	 * since we always fail transaction for guest privilege
+	 * state.
+	 */
+	if (!(msr & MSR_PR) && MSR_TM_TRANSACTIONAL(msr))
+		kvmppc_emulate_tabort(vcpu,
+			TM_CAUSE_PRIV_T | TM_CAUSE_PERSISTENT);
+#endif
+
+	old_msr = kvmppc_get_msr(vcpu);
 	msr &= to_book3s(vcpu)->msr_mask;
 	kvmppc_set_msr_fast(vcpu, msr);
 	kvmppc_recalc_shadow_msr(vcpu);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

Currently the guest kernel doesn't handle the TAR facility-unavailable
exception and always runs with the TAR bit on. PR KVM enables TAR lazily.
TAR is not a frequently used register and is not included in the SVCPU
struct.

To make TAR work with transactional memory in PR KVM:
1) Flush/giveup TAR at kvmppc_save_tm_pr().
2) If we receive a TAR facility-unavailable exception inside a transaction,
the checkpointed TAR might be a TAR value from another process. So we need
to treclaim the transaction, load the desired TAR value into the register,
and then perform trecheckpoint.
3) Load the TAR facility at kvmppc_restore_tm_pr() when TM is active.
The reason we always load TAR when restoring TM is that otherwise, when a
TAR facility-unavailable exception occurs while TM is active, we would have
to distinguish two cases:
case 1: it is the 1st TAR fac unavail exception after tbegin.
vcpu->arch.tar should be reloaded as the checkpointed TAR value.
case 2: it is the 2nd or later TAR fac unavail exception after tbegin.
vcpu->arch.tar_tm should be reloaded as the checkpointed TAR value.
Handling these two cases would add unnecessary complexity.

At the end of emulating treclaim., the correct TAR value needs to be loaded
into the register if the FSCR_TAR bit is on.
At the beginning of emulating trechkpt., TAR needs to be flushed so that
the right TAR value can be copied into tar_tm.

Tested with:
tools/testing/selftests/powerpc/tm/tm-tar
tools/testing/selftests/powerpc/ptrace/ptrace-tm-tar (with the DSCR/PPR
related testing removed).

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h |  1 +
 arch/powerpc/kvm/book3s_emulate.c     |  4 ++++
 arch/powerpc/kvm/book3s_pr.c          | 31 +++++++++++++++++++++++++++++--
 arch/powerpc/kvm/tm.S                 | 16 ++++++++++++++--
 4 files changed, 48 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 8bd454c..6635506 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -259,6 +259,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
 void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
 void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu);
+void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 #endif
 
 extern int kvm_irq_bypass;
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 11d76be..52ae307 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -167,6 +167,9 @@ static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
 	mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
 	tm_disable();
 	preempt_enable();
+
+	if (vcpu->arch.shadow_fscr & FSCR_TAR)
+		mtspr(SPRN_TAR, vcpu->arch.tar);
 }
 
 static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
@@ -183,6 +186,7 @@ static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
 	 * copy.
 	 */
 	kvmppc_giveup_ext(vcpu, MSR_VSX);
+	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
 	kvmppc_copyto_vcpu_tm(vcpu);
 	kvmppc_restore_tm_pr(vcpu);
 	preempt_enable();
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index cc568bc..9085524 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -56,7 +56,6 @@
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 			     ulong msr);
 static int kvmppc_load_ext(struct kvm_vcpu *vcpu, ulong msr);
-static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 
 /* Some compatibility defines */
 #ifdef CONFIG_PPC_BOOK3S_32
@@ -306,6 +305,7 @@ void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu)
 	vcpu->arch.save_msr_tm |= (vcpu->arch.guest_owned_ext &
 			(MSR_FP | MSR_VEC | MSR_VSX));
 
+	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
 	kvmppc_giveup_ext(vcpu, MSR_VSX);
 
 	preempt_disable();
@@ -320,8 +320,20 @@ void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu)
 		return;
 	}
 
+
 	preempt_disable();
 	_kvmppc_restore_tm_pr(vcpu, vcpu->arch.save_msr_tm);
+
+	if (!(vcpu->arch.shadow_fscr & FSCR_TAR)) {
+		/* always restore TAR in TM active state, since we don't
+		 * want to be confused at fac unavailable while TM active:
+		 * load vcpu->arch.tar or vcpu->arch.tar_tm as chkpt value?
+		 */
+		current->thread.tar = mfspr(SPRN_TAR);
+		mtspr(SPRN_TAR, vcpu->arch.tar);
+		vcpu->arch.shadow_fscr |= FSCR_TAR;
+	}
+
 	preempt_enable();
 
 	if (vcpu->arch.save_msr_tm & MSR_VSX)
@@ -333,6 +345,7 @@ void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu)
 		if (vcpu->arch.save_msr_tm & MSR_FP)
 			kvmppc_load_ext(vcpu, MSR_FP);
 	}
+
 }
 #endif
 
@@ -828,7 +841,7 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 }
 
 /* Give up facility (TAR / EBB / DSCR) */
-static void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac)
+void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac)
 {
 #ifdef CONFIG_PPC_BOOK3S_64
 	if (!(vcpu->arch.shadow_fscr & (1ULL << fac))) {
@@ -1031,6 +1044,20 @@ static int kvmppc_handle_fac(struct kvm_vcpu *vcpu, ulong fac)
 
 	switch (fac) {
 	case FSCR_TAR_LG:
+		if (MSR_TM_ACTIVE(mfmsr())) {
+			/* When tbegin. was executed, the TAR in checkpoint
+			 * state might be invalid. We need treclaim., then
+			 * load correct TAR value, and perform trechkpt.,
+			 * so that valid TAR val can be checkpointed.
+			 */
+			preempt_disable();
+			kvmppc_save_tm_pr(vcpu);
+
+			vcpu->arch.tar_tm = vcpu->arch.tar;
+
+			kvmppc_restore_tm_pr(vcpu);
+			preempt_enable();
+		}
 		/* TAR switching isn't lazy in Linux yet */
 		current->thread.tar = mfspr(SPRN_TAR);
 		mtspr(SPRN_TAR, vcpu->arch.tar);
diff --git a/arch/powerpc/kvm/tm.S b/arch/powerpc/kvm/tm.S
index 5752bae..8b73af4 100644
--- a/arch/powerpc/kvm/tm.S
+++ b/arch/powerpc/kvm/tm.S
@@ -164,13 +164,16 @@ _GLOBAL(_kvmppc_save_tm_pr)
 	mfmsr	r5
 	SAVE_GPR(5, r1)
 
-	/* also save DSCR/CR so that it can be recovered later */
+	/* also save DSCR/CR/TAR so that it can be recovered later */
 	mfspr   r6, SPRN_DSCR
 	SAVE_GPR(6, r1)
 
 	mfcr    r7
 	stw     r7, _CCR(r1)
 
+	mfspr   r8, SPRN_TAR
+	SAVE_GPR(8, r1)
+
 	/* allocate stack frame for __kvmppc_save_tm since
 	 * it will save LR into its stackframe and we don't
 	 * want to corrupt _kvmppc_save_tm_pr's.
@@ -179,6 +182,9 @@ _GLOBAL(_kvmppc_save_tm_pr)
 	bl	__kvmppc_save_tm
 	addi    r1, r1, PPC_MIN_STKFRM
 
+	REST_GPR(8, r1)
+	mtspr   SPRN_TAR, r8
+
 	ld      r7, _CCR(r1)
 	mtcr	r7
 
@@ -341,13 +347,16 @@ _GLOBAL(_kvmppc_restore_tm_pr)
 	mfmsr	r5
 	SAVE_GPR(5, r1)
 
-	/* also save DSCR/CR so that it can be recovered later */
+	/* also save DSCR/CR/TAR so that it can be recovered later */
 	mfspr   r6, SPRN_DSCR
 	SAVE_GPR(6, r1)
 
 	mfcr    r7
 	stw     r7, _CCR(r1)
 
+	mfspr   r8, SPRN_TAR
+	SAVE_GPR(8, r1)
+
 	/* allocate stack frame for __kvmppc_restore_tm since
 	 * it will save LR into its own stackframe.
 	 */
@@ -356,6 +365,9 @@ _GLOBAL(_kvmppc_restore_tm_pr)
 	bl	__kvmppc_restore_tm
 	addi    r1, r1, PPC_MIN_STKFRM
 
+	REST_GPR(8, r1)
+	mtspr   SPRN_TAR, r8
+
 	ld      r7, _CCR(r1)
 	mtcr	r7
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* [PATCH 26/26] KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION ioctl
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 10:11   ` wei.guo.simon
  -1 siblings, 0 replies; 116+ messages in thread
From: wei.guo.simon @ 2018-01-11 10:11 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras, kvm, kvm-ppc, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

With the current patch set, PR KVM now supports HTM, so this patch turns
on the KVM_CAP_PPC_HTM capability for PR KVM as well.

Tested with:
https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/powerpc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 1915e86..0b431aa 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -643,8 +643,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 #endif
 	case KVM_CAP_PPC_HTM:
-		r = hv_enabled &&
-		    (cur_cpu_spec->cpu_user_features2 & PPC_FEATURE2_HTM_COMP);
+		r = (cur_cpu_spec->cpu_user_features2 & PPC_FEATURE2_HTM_COMP);
 		break;
 	default:
 		r = 0;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread


* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-11 13:56   ` Gustavo Romero
  -1 siblings, 0 replies; 116+ messages in thread
From: Gustavo Romero @ 2018-01-11 13:56 UTC (permalink / raw)
  To: wei.guo.simon, linuxppc-dev; +Cc: kvm-ppc, kvm

Hi Simon,

On 01/11/2018 08:11 AM, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> In current days, many OS distributions have utilized transaction
> memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> does not.
> 
> The drive for the transaction memory support of PR KVM is the
> openstack Continuous Integration testing - They runs a HV(hypervisor)
> KVM(as level 1) and then run PR KVM(as level 2) on top of that.
> 
> This patch set add transaction memory support on PR KVM.

Is it correct to assume that this emulation mode will just kick in on P9
with the kernel TM workarounds, and that HV KVM will continue to be used
on POWER8, since HV KVM is supported on POWER8 hosts?


Regards,
Gustavo

> Test cases performed:
> linux/tools/testing/selftests/powerpc/tm/tm-syscall
> linux/tools/testing/selftests/powerpc/tm/tm-fork
> linux/tools/testing/selftests/powerpc/tm/tm-vmx-unavail
> linux/tools/testing/selftests/powerpc/tm/tm-tmspr
> linux/tools/testing/selftests/powerpc/tm/tm-signal-msr-resv
> linux/tools/testing/selftests/powerpc/math/vsx_preempt
> linux/tools/testing/selftests/powerpc/math/fpu_signal
> linux/tools/testing/selftests/powerpc/math/vmx_preempt
> linux/tools/testing/selftests/powerpc/math/fpu_syscall
> linux/tools/testing/selftests/powerpc/math/vmx_syscall
> linux/tools/testing/selftests/powerpc/math/fpu_preempt
> linux/tools/testing/selftests/powerpc/math/vmx_signal
> linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-gpr
> linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr
> linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-vsx
> linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spr
> linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-vsx
> https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c
> 
> Simon Guo (25):
>   KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate
>     file
>   KVM: PPC: Book3S PR: add new parameter (guest MSR) for
>     kvmppc_save_tm()/kvmppc_restore_tm()
>   KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm()
>   KVM: PPC: Book3S PR: add C function wrapper for
>     _kvmppc_save/restore_tm()
>   KVM: PPC: Book3S PR: In PR KVM suspends Transactional state when
>     inject an interrupt.
>   KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr.
>   KVM: PPC: Book3S PR: add TEXASR related macros
>   KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state
>     guest
>   KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change
>     from S0 to N0
>   KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
>   KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr()
>   powerpc: export symbol msr_check_and_set().
>   KVM: PPC: Book3S PR: adds new
>     kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
>   KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs
>   KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs
>   KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for
>     PR KVM
>   KVM: PPC: Book3S PR: add math support for PR KVM HTM
>   KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on
>     active TM SPRs
>   KVM: PPC: Book3S PR: always fail transaction in guest privilege state
>   KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest
>     privilege state
>   KVM: PPC: Book3S PR: adds emulation for treclaim.
>   KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
>   KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
>   KVM: PPC: Book3S PR: add guard code to prevent returning to guest with
>     PR=0 and Transactional state
>   KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION
>     ioctl
> 
>  arch/powerpc/include/asm/asm-prototypes.h   |  10 +
>  arch/powerpc/include/asm/kvm_book3s.h       |   8 +
>  arch/powerpc/include/asm/kvm_host.h         |   3 +
>  arch/powerpc/include/asm/reg.h              |  25 +-
>  arch/powerpc/include/asm/tm.h               |   2 -
>  arch/powerpc/include/uapi/asm/tm.h          |   2 +-
>  arch/powerpc/kernel/process.c               |   1 +
>  arch/powerpc/kernel/tm.S                    |  12 +
>  arch/powerpc/kvm/Makefile                   |   3 +
>  arch/powerpc/kvm/book3s.h                   |   1 +
>  arch/powerpc/kvm/book3s_64_mmu.c            |  11 +-
>  arch/powerpc/kvm/book3s_emulate.c           | 279 +++++++++++++++++++-
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S     | 259 ++-----------------
>  arch/powerpc/kvm/book3s_pr.c                | 256 +++++++++++++++++--
>  arch/powerpc/kvm/book3s_segment.S           |  13 +
>  arch/powerpc/kvm/powerpc.c                  |   3 +-
>  arch/powerpc/kvm/tm.S                       | 379 ++++++++++++++++++++++++++++
>  arch/powerpc/mm/hash_utils_64.c             |   1 +
>  arch/powerpc/platforms/powernv/copy-paste.h |   3 +-
>  19 files changed, 982 insertions(+), 289 deletions(-)
>  create mode 100644 arch/powerpc/kvm/tm.S
> 

^ permalink raw reply	[flat|nested] 116+ messages in thread


* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-11 13:56   ` Gustavo Romero
@ 2018-01-11 22:04     ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 116+ messages in thread
From: Benjamin Herrenschmidt @ 2018-01-11 22:04 UTC (permalink / raw)
  To: Gustavo Romero, wei.guo.simon, linuxppc-dev; +Cc: kvm, kvm-ppc

On Thu, 2018-01-11 at 11:56 -0200, Gustavo Romero wrote:
> Hi Simon,
> 
> On 01/11/2018 08:11 AM, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > In current days, many OS distributions have utilized transaction
> > memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> > does not.
> > 
> > The drive for the transaction memory support of PR KVM is the
> > openstack Continuous Integration testing - They runs a HV(hypervisor)
> > KVM(as level 1) and then run PR KVM(as level 2) on top of that.
> > 
> > This patch set add transaction memory support on PR KVM.
> 
> Is this correct to assume that this emulation mode will just kick in on P9
> with kernel TM workarounds and HV KVM will continue to be used on POWER8
> since HV KVM is supported on POWER8 hosts?

HV KVM is supported on POWER9. In fact it's PR KVM that isn't (at least
not yet, and in Radix mode it never will be).

Cheers,
Ben.

> 
> 
> Regards,
> Gustavo
> 
> > Test cases performed:
> > linux/tools/testing/selftests/powerpc/tm/tm-syscall
> > linux/tools/testing/selftests/powerpc/tm/tm-fork
> > linux/tools/testing/selftests/powerpc/tm/tm-vmx-unavail
> > linux/tools/testing/selftests/powerpc/tm/tm-tmspr
> > linux/tools/testing/selftests/powerpc/tm/tm-signal-msr-resv
> > linux/tools/testing/selftests/powerpc/math/vsx_preempt
> > linux/tools/testing/selftests/powerpc/math/fpu_signal
> > linux/tools/testing/selftests/powerpc/math/vmx_preempt
> > linux/tools/testing/selftests/powerpc/math/fpu_syscall
> > linux/tools/testing/selftests/powerpc/math/vmx_syscall
> > linux/tools/testing/selftests/powerpc/math/fpu_preempt
> > linux/tools/testing/selftests/powerpc/math/vmx_signal
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-gpr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-vsx
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-vsx
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> > https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c
> > 
> > Simon Guo (25):
> >   KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate
> >     file
> >   KVM: PPC: Book3S PR: add new parameter (guest MSR) for
> >     kvmppc_save_tm()/kvmppc_restore_tm()
> >   KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm()
> >   KVM: PPC: Book3S PR: add C function wrapper for
> >     _kvmppc_save/restore_tm()
> >   KVM: PPC: Book3S PR: In PR KVM suspends Transactional state when
> >     inject an interrupt.
> >   KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr.
> >   KVM: PPC: Book3S PR: add TEXASR related macros
> >   KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state
> >     guest
> >   KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change
> >     from S0 to N0
> >   KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
> >   KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr()
> >   powerpc: export symbol msr_check_and_set().
> >   KVM: PPC: Book3S PR: adds new
> >     kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
> >   KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs
> >   KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs
> >   KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for
> >     PR KVM
> >   KVM: PPC: Book3S PR: add math support for PR KVM HTM
> >   KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on
> >     active TM SPRs
> >   KVM: PPC: Book3S PR: always fail transaction in guest privilege state
> >   KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest
> >     privilege state
> >   KVM: PPC: Book3S PR: adds emulation for treclaim.
> >   KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
> >   KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
> >   KVM: PPC: Book3S PR: add guard code to prevent returning to guest with
> >     PR=0 and Transactional state
> >   KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION
> >     ioctl
> > 
> >  arch/powerpc/include/asm/asm-prototypes.h   |  10 +
> >  arch/powerpc/include/asm/kvm_book3s.h       |   8 +
> >  arch/powerpc/include/asm/kvm_host.h         |   3 +
> >  arch/powerpc/include/asm/reg.h              |  25 +-
> >  arch/powerpc/include/asm/tm.h               |   2 -
> >  arch/powerpc/include/uapi/asm/tm.h          |   2 +-
> >  arch/powerpc/kernel/process.c               |   1 +
> >  arch/powerpc/kernel/tm.S                    |  12 +
> >  arch/powerpc/kvm/Makefile                   |   3 +
> >  arch/powerpc/kvm/book3s.h                   |   1 +
> >  arch/powerpc/kvm/book3s_64_mmu.c            |  11 +-
> >  arch/powerpc/kvm/book3s_emulate.c           | 279 +++++++++++++++++++-
> >  arch/powerpc/kvm/book3s_hv_rmhandlers.S     | 259 ++-----------------
> >  arch/powerpc/kvm/book3s_pr.c                | 256 +++++++++++++++++--
> >  arch/powerpc/kvm/book3s_segment.S           |  13 +
> >  arch/powerpc/kvm/powerpc.c                  |   3 +-
> >  arch/powerpc/kvm/tm.S                       | 379 ++++++++++++++++++++++++++++
> >  arch/powerpc/mm/hash_utils_64.c             |   1 +
> >  arch/powerpc/platforms/powernv/copy-paste.h |   3 +-
> >  19 files changed, 982 insertions(+), 289 deletions(-)
> >  create mode 100644 arch/powerpc/kvm/tm.S
> > 

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-11 13:56   ` Gustavo Romero
@ 2018-01-12  2:41     ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-12  2:41 UTC (permalink / raw)
  To: Gustavo Romero; +Cc: linuxppc-dev, kvm-ppc, kvm

Hi Gustavo,
On Thu, Jan 11, 2018 at 11:56:59AM -0200, Gustavo Romero wrote:
> Hi Simon,
> 
> On 01/11/2018 08:11 AM, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > In current days, many OS distributions have utilized transaction
> > memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> > does not.
> > 
> > The drive for the transaction memory support of PR KVM is the
> > openstack Continuous Integration testing - They runs a HV(hypervisor)
> > KVM(as level 1) and then run PR KVM(as level 2) on top of that.
> > 
> > This patch set add transaction memory support on PR KVM.
> 
> Is this correct to assume that this emulation mode will just kick in on P9
> with kernel TM workarounds and HV KVM will continue to be used on POWER8
> since HV KVM is supported on POWER8 hosts?

As Ben mentioned, this patch set aims to enhance PR KVM on POWER8
to support transactional memory.

Thanks,
- Simon

> 
> 
> Regards,
> Gustavo
> 
> > Test cases performed:
> > linux/tools/testing/selftests/powerpc/tm/tm-syscall
> > linux/tools/testing/selftests/powerpc/tm/tm-fork
> > linux/tools/testing/selftests/powerpc/tm/tm-vmx-unavail
> > linux/tools/testing/selftests/powerpc/tm/tm-tmspr
> > linux/tools/testing/selftests/powerpc/tm/tm-signal-msr-resv
> > linux/tools/testing/selftests/powerpc/math/vsx_preempt
> > linux/tools/testing/selftests/powerpc/math/fpu_signal
> > linux/tools/testing/selftests/powerpc/math/vmx_preempt
> > linux/tools/testing/selftests/powerpc/math/fpu_syscall
> > linux/tools/testing/selftests/powerpc/math/vmx_syscall
> > linux/tools/testing/selftests/powerpc/math/fpu_preempt
> > linux/tools/testing/selftests/powerpc/math/vmx_signal
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-gpr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-vsx
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spr
> > linux/tools/testing/selftests/powerpc/ptrace/ptrace-tm-vsx
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> > https://github.com/justdoitqd/publicFiles/blob/master/test_kvm_htm_cap.c
> > 
> > Simon Guo (25):
> >   KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate
> >     file
> >   KVM: PPC: Book3S PR: add new parameter (guest MSR) for
> >     kvmppc_save_tm()/kvmppc_restore_tm()
> >   KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm()
> >   KVM: PPC: Book3S PR: add C function wrapper for
> >     _kvmppc_save/restore_tm()
> >   KVM: PPC: Book3S PR: In PR KVM suspends Transactional state when
> >     inject an interrupt.
> >   KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr.
> >   KVM: PPC: Book3S PR: add TEXASR related macros
> >   KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state
> >     guest
> >   KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change
> >     from S0 to N0
> >   KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
> >   KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr()
> >   powerpc: export symbol msr_check_and_set().
> >   KVM: PPC: Book3S PR: adds new
> >     kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
> >   KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs
> >   KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs
> >   KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for
> >     PR KVM
> >   KVM: PPC: Book3S PR: add math support for PR KVM HTM
> >   KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on
> >     active TM SPRs
> >   KVM: PPC: Book3S PR: always fail transaction in guest privilege state
> >   KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest
> >     privilege state
> >   KVM: PPC: Book3S PR: adds emulation for treclaim.
> >   KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
> >   KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
> >   KVM: PPC: Book3S PR: add guard code to prevent returning to guest with
> >     PR=0 and Transactional state
> >   KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION
> >     ioctl
> > 
> >  arch/powerpc/include/asm/asm-prototypes.h   |  10 +
> >  arch/powerpc/include/asm/kvm_book3s.h       |   8 +
> >  arch/powerpc/include/asm/kvm_host.h         |   3 +
> >  arch/powerpc/include/asm/reg.h              |  25 +-
> >  arch/powerpc/include/asm/tm.h               |   2 -
> >  arch/powerpc/include/uapi/asm/tm.h          |   2 +-
> >  arch/powerpc/kernel/process.c               |   1 +
> >  arch/powerpc/kernel/tm.S                    |  12 +
> >  arch/powerpc/kvm/Makefile                   |   3 +
> >  arch/powerpc/kvm/book3s.h                   |   1 +
> >  arch/powerpc/kvm/book3s_64_mmu.c            |  11 +-
> >  arch/powerpc/kvm/book3s_emulate.c           | 279 +++++++++++++++++++-
> >  arch/powerpc/kvm/book3s_hv_rmhandlers.S     | 259 ++-----------------
> >  arch/powerpc/kvm/book3s_pr.c                | 256 +++++++++++++++++--
> >  arch/powerpc/kvm/book3s_segment.S           |  13 +
> >  arch/powerpc/kvm/powerpc.c                  |   3 +-
> >  arch/powerpc/kvm/tm.S                       | 379 ++++++++++++++++++++++++++++
> >  arch/powerpc/mm/hash_utils_64.c             |   1 +
> >  arch/powerpc/platforms/powernv/copy-paste.h |   3 +-
> >  19 files changed, 982 insertions(+), 289 deletions(-)
> >  create mode 100644 arch/powerpc/kvm/tm.S
> > 
> 

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-11 10:11 ` wei.guo.simon
@ 2018-01-23  5:38   ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:38 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:13PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> In current days, many OS distributions have utilized transaction
> memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> does not.
> 
> The drive for the transaction memory support of PR KVM is the
> openstack Continuous Integration testing - They runs a HV(hypervisor)
> KVM(as level 1) and then run PR KVM(as level 2) on top of that.
> 
> This patch set add transaction memory support on PR KVM.

Thanks for the patch set.  It mostly looks good, though I have some
comments on the individual patches.

I don't see where you are implementing support for userspace accessing
the TM checkpointed register values using the GET_ONE_REG/SET_ONE_REG
API.  This would mean that you couldn't migrate a guest that was in
the middle of a transaction.  We will need to have the one_reg API
access to the TM checkpoint implemented, though there will be a
difficulty in that kvmppc_get_one_reg() and kvmppc_set_one_reg() are
called with the vcpu context loaded.  With your scheme of having the
TM checkpoint stored in the CPU while the vcpu context is loaded, the
values you want to access in kvmppc_get/set_one_reg are inaccessible
since they're stored in the CPU.  You would have to arrange for
kvmppc_get/set_one_reg to be called without the vcpu context loaded
(recent patches in the kvm next branch probably make that easier) or
else explicitly unload and reload the vcpu context in those functions.
(This is easier in HV KVM since the checkpoint is not in the CPU at
the point of doing kvmppc_get/set_one_reg.)

There is also complexity added because it's possible for the guest to
have TM, FP, VEC and VSX all enabled from its point of view but to
have FP/VEC/VSX not actually enabled in the hardware when the guest is
running.  As you note in your patch descriptions, this means that the
guest can do tbegin and create a checkpoint with bogus values for the
FP/VEC/VSX registers.  Rather than trying to detect and fix up this
situation after the fact, I would suggest that if the guest has TM
enabled then we make sure that the real FP/VEC/VSX bits in the MSR
match what the guest thinks it has.  That way we would avoid the bogus
checkpoint problem.  (There is still the possibility of getting bogus
checkpointed FP/VEC/VSX registers if the guest does tbegin with the
FP/VEC/VSX bits clear in the MSR, but that is the guest's problem to
deal with.)

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm()
  2018-01-11 10:11   ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore wei.guo.simon
@ 2018-01-23  5:42     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:42 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:15PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> HV KVM and PR KVM need different MSR source to indicate whether
> treclaim. or trecheckpoint. is necessary.
> 
> This patch add new parameter (guest MSR) for these kvmppc_save_tm/
> kvmppc_restore_tm() APIs:
> - For HV KVM, it is VCPU_MSR
> - For PR KVM, it is current host MSR or VCPU_SHADOW_SRR1
> 
> This enhancement enables these 2 APIs to be reused by PR KVM later.
> And the patch keeps HV KVM logic unchanged.
> 
> This patch also reworks kvmppc_save_tm()/kvmppc_restore_tm() to
> have a clean ABI: r3 for vcpu and r4 for guest_msr.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Question: why do you switch from using HSTATE_HOST_R1 to HSTATE_SCRATCH2

> @@ -42,11 +45,11 @@ _GLOBAL(kvmppc_save_tm)
>  	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
>  	mtmsrd	r8
>  
> -	ld	r5, VCPU_MSR(r9)
> -	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> +	rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
>  	beq	1f	/* TM not active in guest. */
>  
> -	std	r1, HSTATE_HOST_R1(r13)
> +	std	r1, HSTATE_SCRATCH2(r13)

... here?

> @@ -166,17 +173,17 @@ _GLOBAL(kvmppc_restore_tm)
>  	 * The user may change these outside of a transaction, so they must
>  	 * always be context switched.
>  	 */
> -	ld	r5, VCPU_TFHAR(r4)
> -	ld	r6, VCPU_TFIAR(r4)
> -	ld	r7, VCPU_TEXASR(r4)
> +	ld	r5, VCPU_TFHAR(r3)
> +	ld	r6, VCPU_TFIAR(r3)
> +	ld	r7, VCPU_TEXASR(r3)
>  	mtspr	SPRN_TFHAR, r5
>  	mtspr	SPRN_TFIAR, r6
>  	mtspr	SPRN_TEXASR, r7
>  
> -	ld	r5, VCPU_MSR(r4)
> +	mr	r5, r4
>  	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
>  	beqlr		/* TM not active in guest */
> -	std	r1, HSTATE_HOST_R1(r13)
> +	std	r1, HSTATE_SCRATCH2(r13)

and here?

Please add a paragraph to the patch description explaining why you are
making that change.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_res
@ 2018-01-23  5:42     ` Paul Mackerras
  0 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:42 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:15PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> HV KVM and PR KVM need different MSR sources to indicate whether
> treclaim. or trecheckpoint. is necessary.
> 
> This patch adds a new parameter (guest MSR) to these kvmppc_save_tm()/
> kvmppc_restore_tm() APIs:
> - For HV KVM, it is VCPU_MSR
> - For PR KVM, it is current host MSR or VCPU_SHADOW_SRR1
> 
> This enhancement enables these 2 APIs to be reused by PR KVM later.
> And the patch keeps HV KVM logic unchanged.
> 
> This patch also reworks kvmppc_save_tm()/kvmppc_restore_tm() to
> have a clean ABI: r3 for vcpu and r4 for guest_msr.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Question: why do you switch from using HSTATE_HOST_R1 to HSTATE_SCRATCH2

> @@ -42,11 +45,11 @@ _GLOBAL(kvmppc_save_tm)
>  	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
>  	mtmsrd	r8
>  
> -	ld	r5, VCPU_MSR(r9)
> -	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> +	rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
>  	beq	1f	/* TM not active in guest. */
>  
> -	std	r1, HSTATE_HOST_R1(r13)
> +	std	r1, HSTATE_SCRATCH2(r13)

... here?

> @@ -166,17 +173,17 @@ _GLOBAL(kvmppc_restore_tm)
>  	 * The user may change these outside of a transaction, so they must
>  	 * always be context switched.
>  	 */
> -	ld	r5, VCPU_TFHAR(r4)
> -	ld	r6, VCPU_TFIAR(r4)
> -	ld	r7, VCPU_TEXASR(r4)
> +	ld	r5, VCPU_TFHAR(r3)
> +	ld	r6, VCPU_TFIAR(r3)
> +	ld	r7, VCPU_TEXASR(r3)
>  	mtspr	SPRN_TFHAR, r5
>  	mtspr	SPRN_TFIAR, r6
>  	mtspr	SPRN_TEXASR, r7
>  
> -	ld	r5, VCPU_MSR(r4)
> +	mr	r5, r4
>  	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
>  	beqlr		/* TM not active in guest */
> -	std	r1, HSTATE_HOST_R1(r13)
> +	std	r1, HSTATE_SCRATCH2(r13)

and here?

Please add a paragraph to the patch description explaining why you are
making that change.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 04/26] KVM: PPC: Book3S PR: add C function wrapper for _kvmppc_save/restore_tm()
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  5:49     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:49 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:17PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently the _kvmppc_save/restore_tm() APIs can only be invoked from
> assembly code. This patch adds C function wrappers for them so
> that they can be safely called from C code.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

[snip]

> --- a/arch/powerpc/include/asm/asm-prototypes.h
> +++ b/arch/powerpc/include/asm/asm-prototypes.h
> @@ -126,4 +126,11 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
>  void _mcount(void);
>  unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);
>  
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +/* Transaction memory related */
> +struct kvm_vcpu;
> +void _kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> +void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> +#endif

It's not generally necessary to have ifdefs around function
declarations.  If the function is never defined because the feature
is not configured in, that is fine.

> @@ -149,6 +149,58 @@ _GLOBAL(kvmppc_save_tm)
>  	blr
>  
>  /*
> + * _kvmppc_save_tm() is a wrapper around __kvmppc_save_tm(), so that it can
> + * be invoked from C function by PR KVM only.
> + */
> +_GLOBAL(_kvmppc_save_tm_pr)
> +	mflr	r5
> +	std	r5, PPC_LR_STKOFF(r1)
> +	stdu    r1, -SWITCH_FRAME_SIZE(r1)
> +	SAVE_NVGPRS(r1)
> +
> +	/* save MSR since TM/math bits might be impacted
> +	 * by __kvmppc_save_tm().
> +	 */
> +	mfmsr	r5
> +	SAVE_GPR(5, r1)
> +
> +	/* also save DSCR/CR so that it can be recovered later */
> +	mfspr   r6, SPRN_DSCR
> +	SAVE_GPR(6, r1)
> +
> +	mfcr    r7
> +	stw     r7, _CCR(r1)
> +
> +	/* allocate stack frame for __kvmppc_save_tm since
> +	 * it will save LR into its stackframe and we don't
> +	 * want to corrupt _kvmppc_save_tm_pr's.
> +	 */
> +	stdu    r1, -PPC_MIN_STKFRM(r1)

You don't need to do this.  In the PowerPC ELF ABI, functions always
save their LR (i.e. their return address) in their *caller's* stack
frame, not their own.  You have established a stack frame for
_kvmppc_save_tm_pr above, and that is sufficient.  Same comment
applies for _kvmppc_restore_tm_pr.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 07/26] KVM: PPC: Book3S PR: add TEXASR related macros
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  5:50     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:50 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:20PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch adds some macros for CR0/TEXASR bits so that the PR KVM TM
> logic (tbegin./treclaim./tabort.) can make use of them later.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>

This and some of your other patches will need to go via Michael
Ellerman's tree.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 10/26] KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others.
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  5:51     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:51 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:23PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Apple G5 machines (PPC970/FX/GX/MP) have supervisor mode disabled, and
> the MSR HV bit is forced to 1. We should follow this in the PR KVM guest.
> 
> This patch sets MSR HV=1 for G5 machines and HV=0 for others in the PR
> KVM guest.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> Suggested-by: Paul Mackerras <paulus@ozlabs.org>

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
  2018-01-11 10:11   ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR wei.guo.simon
@ 2018-01-23  5:52     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  5:52 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:26PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch adds 2 new APIs: kvmppc_copyto_vcpu_tm() and
> kvmppc_copyfrom_vcpu_tm().  These 2 APIs will be used to copy from/to TM
> data between VCPU_TM/VCPU area.
> 
> PR KVM will use these APIs for treclaim. or trchkpt. emulation.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> Reviewed-by: Paul Mackerras <paulus@ozlabs.org>

Actually, I take that back.  You have missed XER. :)

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  6:04     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  6:04 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:29PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> The transaction memory checkpoint area save/restore behavior is
> triggered when the VCPU's qemu process is switched out of/onto the
> CPU, i.e. at kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().
> 
> MSR TM active state is determined by TS bits:
>     active: 10(transactional) or 01 (suspended)
>     inactive: 00 (non-transactional)
> We don't "fake" TM functionality for guest. We "sync" guest virtual
> MSR TM active state(10 or 01) with shadow MSR. That is to say,
> we don't emulate a transactional guest with a TM inactive MSR.
> 
> TM SPR support(TFIAR/TFAR/TEXASR) has already been supported by
> commit 9916d57e64a4 ("KVM: PPC: Book3S PR: Expose TM registers").
> Math register support (FPR/VMX/VSX) will be done at subsequent
> patch.
> 
> - TM save:
> When kvmppc_save_tm_pr() is invoked, whether the TM context needs to
> be saved can be determined from the current host MSR state:
> 	* TM active - save TM context
> 	* TM inactive - no need to do so and only save TM SPRs.
> 
> - TM restore:
> However when kvmppc_restore_tm_pr() is invoked, there is an
> issue to determine whether TM restore should be performed.
> The TM active host MSR val saved in kernel stack is not loaded yet.

I don't follow this exactly.  What is the value saved on the kernel
stack?

I get that we may not have done the sync from the shadow MSR back to
the guest MSR, since that is done in kvmppc_handle_exit_pr() with
interrupts enabled and we might be unloading because we got
preempted.  In that case we would have svcpu->in_use = 1, and we
should in fact do the sync of the TS bits from shadow_msr to the vcpu
MSR value in kvmppc_copy_from_svcpu().  If you did that then both the
load and put functions could just rely on the vcpu's MSR value.

> We don't know whether there is a transaction to be restored from
> current host MSR TM status at kvmppc_restore_tm_pr(). To solve this
> issue, we save current MSR into vcpu->arch.save_msr_tm at
> kvmppc_save_tm_pr(), and kvmppc_restore_tm_pr() check TS bits of
> vcpu->arch.save_msr_tm to decide whether to do TM restore.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> ---
>  arch/powerpc/include/asm/kvm_book3s.h |  6 +++++
>  arch/powerpc/include/asm/kvm_host.h   |  1 +
>  arch/powerpc/kvm/book3s_pr.c          | 41 +++++++++++++++++++++++++++++++++++
>  3 files changed, 48 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 9a66700..d8dbfa5 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -253,6 +253,12 @@ extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
>  				 struct kvm_vcpu *vcpu);
>  extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
>  				   struct kvmppc_book3s_shadow_vcpu *svcpu);
> +
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> +#endif

It would be cleaner at the point where you use these if you added a
#else clause to define a null version for the case when transactional
memory support is not configured, like this:

+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
+void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
+#else
+static inline void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu) {}
+static inline void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu) {}
+#endif

That way you don't need the #ifdef at the call site.

> @@ -131,6 +135,10 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
>  	if (kvmppc_is_split_real(vcpu))
>  		kvmppc_unfixup_split_real(vcpu);
>  
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +	kvmppc_save_tm_pr(vcpu);
> +#endif
> +
>  	kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
>  	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);

I think you should do these giveup_ext/giveup_fac calls before calling
kvmppc_save_tm_pr, because the treclaim in kvmppc_save_tm_pr will
modify all the FP/VEC/VSX registers and the TAR.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-23  5:38   ` Paul Mackerras
@ 2018-01-23  7:16     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  7:16 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Tue, Jan 23, 2018 at 04:38:32PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:13PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Nowadays, many OS distributions make use of transactional memory
> > functionality. On PowerPC, HV KVM supports TM, but PR KVM does not.
> > 
> > The driver for transactional memory support in PR KVM is the
> > openstack Continuous Integration testing - they run an HV (hypervisor)
> > KVM (as level 1) and then run PR KVM (as level 2) on top of that.
> > 
> > This patch set adds transactional memory support to PR KVM.
> 
> Thanks for the patch set.  It mostly looks good, though I have some
> comments on the individual patches.
> 
> I don't see where you are implementing support for userspace accessing
> the TM checkpointed register values using the GET_ONE_REG/SET_ONE_REG
> API.  This would mean that you couldn't migrate a guest that was in
> the middle of a transaction.  We will need to have the one_reg API
> access to the TM checkpoint implemented, though there will be a
> difficulty in that kvmppc_get_one_reg() and kvmppc_set_one_reg() are
> called with the vcpu context loaded.  With your scheme of having the
> TM checkpoint stored in the CPU while the vcpu context is loaded, the
> values you want to access in kvmppc_get/set_one_reg are inaccessible
> since they're stored in the CPU.  You would have to arrange for
> kvmppc_get/set_one_reg to be called without the vcpu context loaded
> (recent patches in the kvm next branch probably make that easier) or
> else explicitly unload and reload the vcpu context in those functions.
> (This is easier in HV KVM since the checkpoint is not in the CPU at
> the point of doing kvmppc_get/set_one_reg.)

Another complexity that hasn't been dealt with as far as I can see is
that if userspace does a KVM_SET_REGS that changes the TS field in the
guest MSR, we don't do anything to make the state of the CPU match.
As with GET/SET_ONE_REG, KVM_SET_REGS is called with the vcpu loaded,
so it needs to make the physical CPU state match what it would have
been had the new state been present at load time, perhaps by unloading
the CPU before changing the state, then reloading it.  But if you do
that and you are using the vcpu->arch.save_msr_tm field that you add,
then you need to modify that when you do kvmppc_set_msr().  (I would
rather that your save/restore TM functions work off kvmppc_get_msr()
rather than having the save_msr_tm field.)

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 17/26] KVM: PPC: Book3S PR: add math support for PR KVM HTM
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  7:29     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  7:29 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:30PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> The math registers will be saved into vcpu->arch.fp/vr and corresponding
> vcpu->arch.fp_tm/vr_tm area.
> 
> We flush or giveup the math regs into vcpu->arch.fp/vr before saving
> transaction. After transaction is restored, the math regs will be loaded
> back into regs.

It looks to me that you are loading up the math regs on every vcpu
load, not just those with an active transaction.  That seems like
overkill.

> If there is a FP/VEC/VSX unavailable exception during transaction active
> state, the math checkpoint content might be incorrect and we need to do
> treclaim./load the correct checkpoint val/trechkpt. sequence to retry the
> transaction.

I would prefer a simpler approach where just before entering the
guest, we check if the guest MSR TM bit is set, and if so we make sure
that whichever math regs are enabled in the guest MSR are actually
loaded on the CPU, that is, that guest_owned_ext has the same bits set
as the guest MSR.  Then we never have to handle a FP/VEC/VSX
unavailable interrupt with a transaction active (other than by simply
passing it on to the guest).

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 18/26] KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on active TM SPRs
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  8:17     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  8:17 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:31PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> The mfspr/mtspr on TM SPRs (TEXASR/TFIAR/TFHAR) are non-privileged
> instructions and can be executed in the PR KVM guest in problem state
> without trapping into the host. We only emulate mtspr/mfspr on
> texasr/tfiar/tfhar at guest PR=0 state.
> 
> When we are emulating mtspr on TM SPRs at guest PR=0 state, the emulation
> result needs to be visible to guest PR=1 state. That is, the actual TM
> SPR value should be loaded into the actual registers.
> 
> We already flush TM SPRs into vcpu when switching out of CPU, and load
> TM SPRs when switching back.
> 
> This patch corrects mfspr()/mtspr() emulation for TM SPRs to make the
> actual source/dest based on actual TM SPRs.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> ---
>  arch/powerpc/kvm/book3s_emulate.c | 35 +++++++++++++++++++++++++++--------
>  1 file changed, 27 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> index e096d01..c2836330 100644
> --- a/arch/powerpc/kvm/book3s_emulate.c
> +++ b/arch/powerpc/kvm/book3s_emulate.c
> @@ -521,13 +521,26 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
>  		break;
>  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
>  	case SPRN_TFHAR:
> -		vcpu->arch.tfhar = spr_val;
> -		break;
>  	case SPRN_TEXASR:
> -		vcpu->arch.texasr = spr_val;
> -		break;
>  	case SPRN_TFIAR:
> -		vcpu->arch.tfiar = spr_val;
> +		if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu))) {
> +			/* mtspr() on TM SPRs is only legal in
> +			 * non-transactional state.
> +			 */
> +			kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
> +			emulated = EMULATE_AGAIN;
> +			break;
> +		}

We also need to check that the guest has TM enabled in the guest MSR,
and give them a facility unavailable interrupt if not.

> +
> +		tm_enable();
> +		if (sprn == SPRN_TFHAR)
> +			mtspr(SPRN_TFHAR, spr_val);
> +		else if (sprn == SPRN_TEXASR)
> +			mtspr(SPRN_TEXASR, spr_val);
> +		else
> +			mtspr(SPRN_TFIAR, spr_val);
> +		tm_disable();

I haven't seen any checks that we are on a CPU that has TM.  What
happens if a guest does a mtmsrd with TM=1 and then a mtspr to TEXASR
when running on a POWER7 (assuming the host kernel was compiled with
CONFIG_PPC_TRANSACTIONAL_MEM=y)?

Ideally, if the host CPU does not have TM functionality, these mtsprs
would be treated as no-ops and attempts to set the TM or TS fields in
the guest MSR would be ignored.

> +
>  		break;
>  #endif
>  #endif
> @@ -674,13 +687,19 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
>  		break;
>  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
>  	case SPRN_TFHAR:
> -		*spr_val = vcpu->arch.tfhar;
> +		tm_enable();
> +		*spr_val = mfspr(SPRN_TFHAR);
> +		tm_disable();
>  		break;
>  	case SPRN_TEXASR:
> -		*spr_val = vcpu->arch.texasr;
> +		tm_enable();
> +		*spr_val = mfspr(SPRN_TEXASR);
> +		tm_disable();
>  		break;
>  	case SPRN_TFIAR:
> -		*spr_val = vcpu->arch.tfiar;
> +		tm_enable();
> +		*spr_val = mfspr(SPRN_TFIAR);
> +		tm_disable();
>  		break;

These need to check MSR_TM in the guest MSR, and become no-ops on
machines without TM capability.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 19/26] KVM: PPC: Book3S PR: always fail transaction in guest privilege state
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  8:30     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  8:30 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:32PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently the kernel doesn't use transactional memory.
> There is an issue for a privileged guest: the TM instructions
> tbegin/tsuspend/tresume/tabort can change the MSR TM bits without
> trapping into the PR host. So the following code will observe a wrong
> mfmsr result:
> 	tbegin	<- MSR bits update to Transaction active.
> 	beq 	<- failover handler branch
> 	mfmsr	<- still read MSR bits from magic page with
> 		transaction inactive.
> 
> This is not an issue for a non-privileged guest since its mfmsr is not
> patched via the magic page and will always trap into the PR host.
> 
> This patch makes every tbegin attempt by a privileged guest fail, so
> that the above issue is prevented. This is benign since currently the
> (guest) kernel doesn't initiate a transaction.
> 
> Test case:
> https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

You need to handle the case where MSR_TM is not set in the guest MSR,
and give the guest a facility unavailable interrupt.

[snip]

> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -255,7 +255,7 @@ static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
>  	tm_disable();
>  }
>  
> -static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
> +inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)

You should probably remove the 'inline' here too.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 20/26] KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest privilege state
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  9:08     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  9:08 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:33PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently kvmppc_handle_fac() does not update NV GPRs and thus it can
> return with GUEST_RESUME.
> 
> However a PR KVM guest always runs with the MSR_TM bit disabled in
> privileged state. If a privileged PR guest tries to read the TM SPRs,
> it will trigger a TM facility unavailable exception and fall into
> kvmppc_handle_fac(). The emulation is then done by
> kvmppc_core_emulate_mfspr_pr(). The mfspr instruction can name an NV
> reg as RT, so it is necessary to restore the NV GPRs in this case to
> reflect the update to the NV RT.
> 
> This patch makes kvmppc_handle_fac() return GUEST_RESUME_NV for a TM
> facility unavailable exception taken in guest privileged state.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 21/26] KVM: PPC: Book3S PR: adds emulation for treclaim.
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  9:23     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  9:23 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:34PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch adds support for "treclaim." emulation when a PR KVM guest
> executes treclaim. and traps into the host.
> 
> We first perform treclaim. to save the TM checkpoint. Then it is
> necessary to update the vcpu's current register content with the
> checkpointed values. When we rfid back into the guest, that current
> register content (now the checkpointed values) will be loaded into
> the registers.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> ---
>  arch/powerpc/include/asm/reg.h    |  4 +++
>  arch/powerpc/kvm/book3s_emulate.c | 66 ++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 69 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> index 6c293bc..b3bcf6b 100644
> --- a/arch/powerpc/include/asm/reg.h
> +++ b/arch/powerpc/include/asm/reg.h
> @@ -244,12 +244,16 @@
>  #define SPRN_TEXASR	0x82	/* Transaction EXception & Summary */
>  #define SPRN_TEXASRU	0x83	/* ''	   ''	   ''	 Upper 32  */
>  #define TEXASR_FC_LG	(63 - 7)	/* Failure Code */
> +#define TEXASR_AB_LG	(63 - 31)	/* Abort */
> +#define TEXASR_SU_LG	(63 - 32)	/* Suspend */
>  #define TEXASR_HV_LG	(63 - 34)	/* Hypervisor state*/
>  #define TEXASR_PR_LG	(63 - 35)	/* Privilege level */
>  #define TEXASR_FS_LG	(63 - 36)	/* failure summary */
>  #define TEXASR_EX_LG	(63 - 37)	/* TFIAR exact bit */
>  #define TEXASR_ROT_LG	(63 - 38)	/* ROT bit */
>  #define TEXASR_FC	(ASM_CONST(0xFF) << TEXASR_FC_LG)
> +#define TEXASR_AB	__MASK(TEXASR_AB_LG)
> +#define TEXASR_SU	__MASK(TEXASR_SU_LG)
>  #define TEXASR_HV	__MASK(TEXASR_HV_LG)
>  #define TEXASR_PR	__MASK(TEXASR_PR_LG)
>  #define TEXASR_FS	__MASK(TEXASR_FS_LG)

It would be good to collect up all the modifications you need to make
to reg.h into a single patch at the beginning of the patch series --
that will make it easier to merge it all.

> diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> index 1eb1900..51c0e20 100644
> --- a/arch/powerpc/kvm/book3s_emulate.c
> +++ b/arch/powerpc/kvm/book3s_emulate.c

[snip]

> @@ -127,6 +130,42 @@ void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
>  	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
>  }
>  
> +static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
> +{
> +	unsigned long guest_msr = kvmppc_get_msr(vcpu);
> +	int fc_val = ra_val ? ra_val : 1;
> +
> +	kvmppc_save_tm_pr(vcpu);
> +
> +	preempt_disable();
> +	kvmppc_copyfrom_vcpu_tm(vcpu);
> +	preempt_enable();
> +
> +	/*
> +	 * treclaim need quit to non-transactional state.
> +	 */
> +	guest_msr &= ~(MSR_TS_MASK);
> +	kvmppc_set_msr(vcpu, guest_msr);
> +
> +	preempt_disable();
> +	tm_enable();
> +	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
> +	vcpu->arch.texasr &= ~TEXASR_FC;
> +	vcpu->arch.texasr |= ((u64)fc_val << TEXASR_FC_LG);

You're doing failure recording here unconditionally, but the
architecture says that treclaim. only does failure recording if
TEXASR_FS is not already set.

> +	vcpu->arch.texasr &= ~(TEXASR_PR | TEXASR_HV);
> +	if (kvmppc_get_msr(vcpu) & MSR_PR)
> +		vcpu->arch.texasr |= TEXASR_PR;
> +
> +	if (kvmppc_get_msr(vcpu) & MSR_HV)
> +		vcpu->arch.texasr |= TEXASR_HV;
> +
> +	vcpu->arch.tfiar = kvmppc_get_pc(vcpu);
> +	mtspr(SPRN_TEXASR, vcpu->arch.texasr);
> +	mtspr(SPRN_TFIAR, vcpu->arch.tfiar);
> +	tm_disable();
> +	preempt_enable();
> +}
>  #endif

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 22/26] KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  9:36     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  9:36 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:35PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch adds host emulation for when a PR KVM guest executes
> "trechkpt.", which is a privileged instruction and will trap into the
> host.
> 
> We first copy the vcpu's current register content into the vcpu's TM
> checkpoint content, then call kvmppc_restore_tm_pr() to perform
> trechkpt. with the updated checkpoint values.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

[snip]

> +static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long guest_msr = kvmppc_get_msr(vcpu);
> +
> +	preempt_disable();
> +	vcpu->arch.save_msr_tm = MSR_TS_S;
> +	vcpu->arch.save_msr_tm &= ~(MSR_FP | MSR_VEC | MSR_VSX);

This looks odd, since you are clearing bits when you have just set
save_msr_tm to a constant value that doesn't have these bits set.
This could be taken as a sign that the previous line has a bug and you
meant "|=" or something similar instead of "=".  I think you probably
did mean "=", in which case you should remove the line clearing
FP/VEC/VSX.

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 23/26] KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-23  9:44     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-23  9:44 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:36PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently a privileged guest runs with TM disabled.
> 
> Although the privileged guest cannot initiate a new transaction, it
> can use tabort to terminate its problem state's transaction. So it is
> still necessary to emulate tabort. for a privileged guest.
> 
> This patch adds emulation of tabort. for a privileged guest.
> 
> Tested with:
> https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> ---
>  arch/powerpc/include/asm/kvm_book3s.h |  1 +
>  arch/powerpc/kvm/book3s_emulate.c     | 31 +++++++++++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_pr.c          |  2 +-
>  3 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 524cd82..8bd454c 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -258,6 +258,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
>  void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
>  void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
>  void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
> +void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu);

Why do you add this declaration, and change it from "static inline" to
"inline" below, when this patch doesn't use it?  Also, making it
"inline" is pointless if it has a caller outside the source file where
it's defined (if gcc wants to inline uses of it inside the same source
file, it will do so anyway even without the "inline" keyword.)

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.
  2018-01-11 10:11   ` wei.guo.simon
@ 2018-01-24  4:02     ` Paul Mackerras
  -1 siblings, 0 replies; 116+ messages in thread
From: Paul Mackerras @ 2018-01-24  4:02 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, Jan 11, 2018 at 06:11:38PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently the guest kernel doesn't handle TAR facility unavailable
> exceptions and always runs with the TAR bit on. PR KVM enables TAR
> lazily. TAR is not a frequently used reg and is not included in the
> SVCPU struct.
> 
> To make this work with transactional memory in PR KVM:
> 1) Flush/give up TAR at kvmppc_save_tm_pr().
> 2) If we receive a TAR facility unavailable exception inside a
> transaction, the checkpointed TAR might be a TAR value from another
> process. So we need to treclaim the transaction, load the desired TAR
> value into the reg, and then perform trecheckpoint.
> 3) Load the TAR facility at kvmppc_restore_tm_pr() when TM is active.
> The reason we always load TAR when restoring TM is that otherwise a
> TAR facility unavailable exception during TM active splits into two
> cases:
> case 1: it is the 1st TAR fac unavail exception after tbegin.
> vcpu->arch.tar should be reloaded as the checkpointed TAR value.
> case 2: it is the 2nd or later TAR fac unavail exception after tbegin.
> vcpu->arch.tar_tm should be reloaded as the checkpointed TAR value.
> Handling those two cases would be unnecessarily difficult.
> 
> At the end of emulating treclaim., the correct TAR value needs to be
> loaded into the reg if the FSCR_TAR bit is on.
> At the beginning of emulating trechkpt., TAR needs to be flushed so
> that the right TAR value can be copied into tar_tm.

Would it be simpler always to load up TAR when guest_MSR[TM] is 1?

Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM
  2018-01-23  5:38   ` Paul Mackerras
@ 2018-01-27 13:10     ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-27 13:10 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 04:38:32PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:13PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > In current days, many OS distributions have utilized transaction
> > memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> > does not.
> > 
> > The drive for the transaction memory support of PR KVM is the
> > openstack Continuous Integration testing - They runs a HV(hypervisor)
> > KVM(as level 1) and then run PR KVM(as level 2) on top of that.
> > 
> > This patch set add transaction memory support on PR KVM.
> 
> Thanks for the patch set.  It mostly looks good, though I have some
> comments on the individual patches.
> 
> I don't see where you are implementing support for userspace accessing
> the TM checkpointed register values using the GET_ONE_REG/SET_ONE_REG
> API.  This would mean that you couldn't migrate a guest that was in
> the middle of a transaction.  We will need to have the one_reg API
> access to the TM checkpoint implemented, though there will be a
> difficulty in that kvmppc_get_one_reg() and kvmppc_set_one_reg() are
> called with the vcpu context loaded.  With your scheme of having the
> TM checkpoint stored in the CPU while the vcpu context is loaded, the
> values you want to access in kvmppc_get/set_one_reg are inaccessible
> since they're stored in the CPU.  You would have to arrange for
> kvmppc_get/set_one_reg to be called without the vcpu context loaded
> (recent patches in the kvm next branch probably make that easier) or
> else explicitly unload and reload the vcpu context in those functions.
> (This is easier in HV KVM since the checkpoint is not in the CPU at
> the point of doing kvmppc_get/set_one_reg.)
Thanks for pointing it out. I hadn't thought about it before and will
investigate.

I plan to work out the PR KVM HTM kvmppc_get/set_one_reg() support
(and the KVM_SET_REGS handling you mentioned in another mail) as a
separate patch/patch set, so that the reworked v2 of the current
patches can be sent out in parallel. If that is not appropriate,
please let me know.

> 
> There is also complexity added because it's possible for the guest to
> have TM, FP, VEC and VSX all enabled from its point of view but to
> have FP/VEC/VSX not actually enabled in the hardware when the guest is
> running.  As you note in your patch descriptions, this means that the
> guest can do tbegin and create a checkpoint with bogus values for the
> FP/VEC/VSX registers.  Rather than trying to detect and fix up this
> situation after the fact, I would suggest that if the guest has TM
> enabled then we make sure that the real FP/VEC/VSX bits in the MSR
> match what the guest thinks it has.  That way we would avoid the bogus
> checkpoint problem.  (There is still the possibility of getting bogus
> checkpointed FP/VEC/VSX registers if the guest does tbegin with the
> FP/VEC/VSX bits clear in the MSR, but that is the guest's problem to
> deal with.)
Good idea. I will look into kvmppc_set_msr_pr() / kvmppc_giveup_ext()
to simplify the solution.

Thanks for your review and time.

BR,
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM.
  2018-01-23  5:52     ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API fo Paul Mackerras
@ 2018-01-30  2:15       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  2:15 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 04:52:19PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:26PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch adds 2 new APIs: kvmppc_copyto_vcpu_tm() and
> > kvmppc_copyfrom_vcpu_tm().  These 2 APIs will be used to copy from/to TM
> > data between VCPU_TM/VCPU area.
> > 
> > PR KVM will use these APIs for treclaim. or trchkpt. emulation.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
> 
> Actually, I take that back.  You have missed XER. :)
Thanks for the catch. I will fix that.

> 
> Paul.

BR,
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm()
  2018-01-23  5:42     ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_res Paul Mackerras
@ 2018-01-30  2:33       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  2:33 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 04:42:09PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:15PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > HV KVM and PR KVM need different MSR source to indicate whether
> > treclaim. or trecheckpoint. is necessary.
> > 
> > This patch add new parameter (guest MSR) for these kvmppc_save_tm/
> > kvmppc_restore_tm() APIs:
> > - For HV KVM, it is VCPU_MSR
> > - For PR KVM, it is current host MSR or VCPU_SHADOW_SRR1
> > 
> > This enhancement enables these 2 APIs to be reused by PR KVM later.
> > And the patch keeps HV KVM logic unchanged.
> > 
> > This patch also reworks kvmppc_save_tm()/kvmppc_restore_tm() to
> > have a clean ABI: r3 for vcpu and r4 for guest_msr.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> Question: why do you switch from using HSTATE_HOST_R1 to HSTATE_SCRATCH2
> 
> > @@ -42,11 +45,11 @@ _GLOBAL(kvmppc_save_tm)
> >  	rldimi	r8, r0, MSR_TM_LG, 63-MSR_TM_LG
> >  	mtmsrd	r8
> >  
> > -	ld	r5, VCPU_MSR(r9)
> > -	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> > +	rldicl. r4, r4, 64 - MSR_TS_S_LG, 62
> >  	beq	1f	/* TM not active in guest. */
> >  
> > -	std	r1, HSTATE_HOST_R1(r13)
> > +	std	r1, HSTATE_SCRATCH2(r13)
> 
> ... here?
> 
> > @@ -166,17 +173,17 @@ _GLOBAL(kvmppc_restore_tm)
> >  	 * The user may change these outside of a transaction, so they must
> >  	 * always be context switched.
> >  	 */
> > -	ld	r5, VCPU_TFHAR(r4)
> > -	ld	r6, VCPU_TFIAR(r4)
> > -	ld	r7, VCPU_TEXASR(r4)
> > +	ld	r5, VCPU_TFHAR(r3)
> > +	ld	r6, VCPU_TFIAR(r3)
> > +	ld	r7, VCPU_TEXASR(r3)
> >  	mtspr	SPRN_TFHAR, r5
> >  	mtspr	SPRN_TFIAR, r6
> >  	mtspr	SPRN_TEXASR, r7
> >  
> > -	ld	r5, VCPU_MSR(r4)
> > +	mr	r5, r4
> >  	rldicl. r5, r5, 64 - MSR_TS_S_LG, 62
> >  	beqlr		/* TM not active in guest */
> > -	std	r1, HSTATE_HOST_R1(r13)
> > +	std	r1, HSTATE_SCRATCH2(r13)
> 
> and here?
> 
> Please add a paragraph to the patch description explaining why you are
> making that change.
In subsequent patches, kvmppc_save_tm()/kvmppc_restore_tm() will be
invoked by wrapper functions that set up an additional stack frame and
update r1 (and would then have to update HSTATE_HOST_R1 with an
additional offset). Although HSTATE_HOST_R1 is currently used safely
(always PPC_STL before entering the guest and PPC_LL in
kvmppc_interrupt_pr()), I worried that a future use might make
assumptions about the HSTATE_HOST_R1 value and cause trouble.

As a result, I chose HSTATE_SCRATCH2 to save/restore r1 in the
kvmppc_save_tm()/kvmppc_restore_tm() case. I will update the commit
message.


Thanks,
- Simon




> 
> Paul.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 04/26] KVM: PPC: Book3S PR: add C function wrapper for _kvmppc_save/restore_tm()
  2018-01-23  5:49     ` Paul Mackerras
@ 2018-01-30  2:38       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  2:38 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 04:49:16PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:17PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently _kvmppc_save/restore_tm() APIs can only be invoked from
> > assembly function. This patch adds C function wrappers for them so
> > that they can be safely called from C function.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> [snip]
> 
> > --- a/arch/powerpc/include/asm/asm-prototypes.h
> > +++ b/arch/powerpc/include/asm/asm-prototypes.h
> > @@ -126,4 +126,11 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
> >  void _mcount(void);
> >  unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip);
> >  
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +/* Transaction memory related */
> > +struct kvm_vcpu;
> > +void _kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> > +void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
> > +#endif
> 
> It's not generally necessary to have ifdefs around function
> declarations.  If the function is never defined because the feature
> is not configured in, that is fine.
> 
Got it. Thanks.

> > @@ -149,6 +149,58 @@ _GLOBAL(kvmppc_save_tm)
> >  	blr
> >  
> >  /*
> > + * _kvmppc_save_tm() is a wrapper around __kvmppc_save_tm(), so that it can
> > + * be invoked from C function by PR KVM only.
> > + */
> > +_GLOBAL(_kvmppc_save_tm_pr)
> > +	mflr	r5
> > +	std	r5, PPC_LR_STKOFF(r1)
> > +	stdu    r1, -SWITCH_FRAME_SIZE(r1)
> > +	SAVE_NVGPRS(r1)
> > +
> > +	/* save MSR since TM/math bits might be impacted
> > +	 * by __kvmppc_save_tm().
> > +	 */
> > +	mfmsr	r5
> > +	SAVE_GPR(5, r1)
> > +
> > +	/* also save DSCR/CR so that it can be recovered later */
> > +	mfspr   r6, SPRN_DSCR
> > +	SAVE_GPR(6, r1)
> > +
> > +	mfcr    r7
> > +	stw     r7, _CCR(r1)
> > +
> > +	/* allocate stack frame for __kvmppc_save_tm since
> > +	 * it will save LR into its stackframe and we don't
> > +	 * want to corrupt _kvmppc_save_tm_pr's.
> > +	 */
> > +	stdu    r1, -PPC_MIN_STKFRM(r1)
> 
> You don't need to do this.  In the PowerPC ELF ABI, functions always
> save their LR (i.e. their return address) in their *caller's* stack
> frame, not their own.  You have established a stack frame for
> _kvmppc_save_tm_pr above, and that is sufficient.  Same comment
> applies for _kvmppc_restore_tm_pr.
Ah, yes. I need to remove that.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM
  2018-01-23  6:04     ` Paul Mackerras
@ 2018-01-30  2:57       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  2:57 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 05:04:09PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:29PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > The transaction memory checkpoint area save/restore behavior is
> > triggered when VCPU qemu process is switching out/into CPU. ie.
> > at kvmppc_core_vcpu_put_pr() and kvmppc_core_vcpu_load_pr().
> > 
> > MSR TM active state is determined by TS bits:
> >     active: 10(transactional) or 01 (suspended)
> >     inactive: 00 (non-transactional)
> > We don't "fake" TM functionality for guest. We "sync" guest virtual
> > MSR TM active state(10 or 01) with shadow MSR. That is to say,
> > we don't emulate a transactional guest with a TM inactive MSR.
> > 
> > TM SPR support(TFIAR/TFAR/TEXASR) has already been supported by
> > commit 9916d57e64a4 ("KVM: PPC: Book3S PR: Expose TM registers").
> > Math register support (FPR/VMX/VSX) will be done at subsequent
> > patch.
> > 
> > - TM save:
> > When kvmppc_save_tm_pr() is invoked, whether TM context need to
> > be saved can be determined by current host MSR state:
> > 	* TM active - save TM context
> > 	* TM inactive - no need to do so and only save TM SPRs.
> > 
> > - TM restore:
> > However when kvmppc_restore_tm_pr() is invoked, there is an
> > issue to determine whether TM restore should be performed.
> > The TM active host MSR val saved in kernel stack is not loaded yet.
> 
> I don't follow this exactly.  What is the value saved on the kernel
> stack?
> 
> I get that we may not have done the sync from the shadow MSR back to
> the guest MSR, since that is done in kvmppc_handle_exit_pr() with
> interrupts enabled and we might be unloading because we got
> preempted.  In that case we would have svcpu->in_use = 1, and we
> should in fact do the sync of the TS bits from shadow_msr to the vcpu
> MSR value in kvmppc_copy_from_svcpu().  If you did that then both the
> load and put functions could just rely on the vcpu's MSR value.
> 
Yes, that looks cleaner and simpler!

> > We don't know whether there is a transaction to be restored from
> > current host MSR TM status at kvmppc_restore_tm_pr(). To solve this
> > issue, we save current MSR into vcpu->arch.save_msr_tm at
> > kvmppc_save_tm_pr(), and kvmppc_restore_tm_pr() check TS bits of
> > vcpu->arch.save_msr_tm to decide whether to do TM restore.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > ---
> >  arch/powerpc/include/asm/kvm_book3s.h |  6 +++++
> >  arch/powerpc/include/asm/kvm_host.h   |  1 +
> >  arch/powerpc/kvm/book3s_pr.c          | 41 +++++++++++++++++++++++++++++++++++
> >  3 files changed, 48 insertions(+)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> > index 9a66700..d8dbfa5 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -253,6 +253,12 @@ extern void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
> >  				 struct kvm_vcpu *vcpu);
> >  extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
> >  				   struct kvmppc_book3s_shadow_vcpu *svcpu);
> > +
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> > +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> > +#endif
> 
> It would be cleaner at the point where you use these if you added a
> #else clause to define a null version for the case when transactional
> memory support is not configured, like this:
> 
> +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> +void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> +void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> +#else
> +static inline void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu) {}
> +static inline void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu) {}
> +#endif
> 
> That way you don't need the #ifdef at the call site.
> 
Thanks for the tip.

> > @@ -131,6 +135,10 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
> >  	if (kvmppc_is_split_real(vcpu))
> >  		kvmppc_unfixup_split_real(vcpu);
> >  
> > +#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> > +	kvmppc_save_tm_pr(vcpu);
> > +#endif
> > +
> >  	kvmppc_giveup_ext(vcpu, MSR_FP | MSR_VEC | MSR_VSX);
> >  	kvmppc_giveup_fac(vcpu, FSCR_TAR_LG);
> 
> I think you should do these giveup_ext/giveup_fac calls before calling
> kvmppc_save_tm_pr, because the treclaim in kvmppc_save_tm_pr will
> modify all the FP/VEC/VSX registers and the TAR.
I handled giveup_ext()/giveup_fac() within kvmppc_save_tm_pr() so that
other places (such as kvmppc_emulate_treclaim()) can invoke
kvmppc_save_tm_pr() easily. But I agree that moving the call sequence
as you suggested will be more readable.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 17/26] KVM: PPC: Book3S PR: add math support for PR KVM HTM
  2018-01-23  7:29     ` Paul Mackerras
@ 2018-01-30  3:00       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:00 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 06:29:27PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:30PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > The math registers will be saved into vcpu->arch.fp/vr and corresponding
> > vcpu->arch.fp_tm/vr_tm area.
> > 
> > We flush or giveup the math regs into vcpu->arch.fp/vr before saving
> > transaction. After transaction is restored, the math regs will be loaded
> > back into regs.
> 
> It looks to me that you are loading up the math regs on every vcpu
> load, not just those with an active transaction.  That seems like
> overkill.
> 
> > If there is a FP/VEC/VSX unavailable exception during transaction active
> > state, the math checkpoint content might be incorrect and we need to do
> > treclaim./load the correct checkpoint val/trechkpt. sequence to retry the
> > transaction.
> 
> I would prefer a simpler approach where just before entering the
> guest, we check if the guest MSR TM bit is set, and if so we make sure
> that whichever math regs are enabled in the guest MSR are actually
> loaded on the CPU, that is, that guest_owned_ext has the same bits set
> as the guest MSR.  Then we never have to handle a FP/VEC/VSX
> unavailable interrupt with a transaction active (other than by simply
> passing it on to the guest).

Good idea. I will rework it this way.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 18/26] KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on active TM SPRs
  2018-01-23  8:17     ` Paul Mackerras
@ 2018-01-30  3:02       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:02 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 07:17:45PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:31PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > The mfspr/mtspr instructions on the TM SPRs (TEXASR/TFIAR/TFHAR) are
> > non-privileged and can be executed by a PR KVM guest in problem state
> > without trapping into the host. We only emulate mtspr/mfspr on
> > texasr/tfiar/tfhar in guest PR=0 state.
> > 
> > When we emulate mtspr on TM SPRs in guest PR=0 state, the emulation
> > result needs to be visible to guest PR=1 state. That is, the actual TM
> > SPR value should be loaded into the actual registers.
> > 
> > We already flush TM SPRs into vcpu when switching out of CPU, and load
> > TM SPRs when switching back.
> > 
> > This patch corrects mfspr()/mtspr() emulation for TM SPRs to make the
> > actual source/dest based on actual TM SPRs.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/kvm/book3s_emulate.c | 35 +++++++++++++++++++++++++++--------
> >  1 file changed, 27 insertions(+), 8 deletions(-)
> > 
> > diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> > index e096d01..c2836330 100644
> > --- a/arch/powerpc/kvm/book3s_emulate.c
> > +++ b/arch/powerpc/kvm/book3s_emulate.c
> > @@ -521,13 +521,26 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
> >  		break;
> >  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> >  	case SPRN_TFHAR:
> > -		vcpu->arch.tfhar = spr_val;
> > -		break;
> >  	case SPRN_TEXASR:
> > -		vcpu->arch.texasr = spr_val;
> > -		break;
> >  	case SPRN_TFIAR:
> > -		vcpu->arch.tfiar = spr_val;
> > +		if (MSR_TM_ACTIVE(kvmppc_get_msr(vcpu))) {
> > +			/* it is illegal to mtspr() TM regs in
> > +			 * other than non-transactional state.
> > +			 */
> > +			kvmppc_core_queue_program(vcpu, SRR1_PROGTM);
> > +			emulated = EMULATE_AGAIN;
> > +			break;
> > +		}
> 
> We also need to check that the guest has TM enabled in the guest MSR,
> and give them a facility unavailable interrupt if not.
> 
> > +
> > +		tm_enable();
> > +		if (sprn == SPRN_TFHAR)
> > +			mtspr(SPRN_TFHAR, spr_val);
> > +		else if (sprn == SPRN_TEXASR)
> > +			mtspr(SPRN_TEXASR, spr_val);
> > +		else
> > +			mtspr(SPRN_TFIAR, spr_val);
> > +		tm_disable();
> 
> I haven't seen any checks that we are on a CPU that has TM.  What
> happens if a guest does a mtmsrd with TM=1 and then a mtspr to TEXASR
> when running on a POWER7 (assuming the host kernel was compiled with
> CONFIG_PPC_TRANSACTIONAL_MEM=y)?
> 
> Ideally, if the host CPU does not have TM functionality, these mtsprs
> would be treated as no-ops and attempts to set the TM or TS fields in
> the guest MSR would be ignored.
> 
> > +
> >  		break;
> >  #endif
> >  #endif
> > @@ -674,13 +687,19 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
> >  		break;
> >  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> >  	case SPRN_TFHAR:
> > -		*spr_val = vcpu->arch.tfhar;
> > +		tm_enable();
> > +		*spr_val = mfspr(SPRN_TFHAR);
> > +		tm_disable();
> >  		break;
> >  	case SPRN_TEXASR:
> > -		*spr_val = vcpu->arch.texasr;
> > +		tm_enable();
> > +		*spr_val = mfspr(SPRN_TEXASR);
> > +		tm_disable();
> >  		break;
> >  	case SPRN_TFIAR:
> > -		*spr_val = vcpu->arch.tfiar;
> > +		tm_enable();
> > +		*spr_val = mfspr(SPRN_TFIAR);
> > +		tm_disable();
> >  		break;
> 
> These need to check MSR_TM in the guest MSR, and become no-ops on
> machines without TM capability.

Thanks for the above catches. I will rework later.
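The reworked check order can be modeled in a small userspace sketch (the enum names and the exact ordering are my assumptions, not the final kernel code; the MSR bit positions are the real ones):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TM       (1ULL << 32)   /* TM available */
#define MSR_TS_MASK  (3ULL << 33)   /* transaction state (S/T bits) */
#define MSR_TM_ACTIVE(msr) (((msr) & MSR_TS_MASK) != 0)

enum tm_spr_action {
	TM_SPR_NOP,          /* CPU has no TM: treat the mtspr as a no-op */
	TM_SPR_FAC_UNAVAIL,  /* guest MSR[TM] = 0: facility unavailable   */
	TM_SPR_TM_BAD,       /* transaction active: program interrupt     */
	TM_SPR_EMULATE       /* OK: forward to the real TM SPR            */
};

/* Decide how to handle a guest mtspr to TEXASR/TFIAR/TFHAR. */
static enum tm_spr_action mtspr_tm_action(int cpu_has_tm, uint64_t guest_msr)
{
	if (!cpu_has_tm)
		return TM_SPR_NOP;
	if (!(guest_msr & MSR_TM))
		return TM_SPR_FAC_UNAVAIL;
	if (MSR_TM_ACTIVE(guest_msr))
		return TM_SPR_TM_BAD;
	return TM_SPR_EMULATE;
}
```

The no-TM-capability case comes first so that a POWER7 host never reaches the real mtspr, and the facility check comes before the transactional-state check so the guest sees the same interrupt priority it would on bare metal.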

BR,
- Simon


* Re: [PATCH 19/26] KVM: PPC: Book3S PR: always fail transaction in guest privilege state
  2018-01-23  8:30     ` Paul Mackerras
@ 2018-01-30  3:11       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:11 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 07:30:33PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:32PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently the kernel doesn't use transactional memory, and there is an
> > issue for a privileged guest: the tbegin/tsuspend/tresume/tabort TM
> > instructions can change the MSR TM bits without trapping into the PR
> > host. So the following code will observe a stale mfmsr result:
> > 	tbegin	<- MSR bits update to Transaction active.
> > 	beq 	<- failover handler branch
> > 	mfmsr	<- still read MSR bits from magic page with
> > 		transaction inactive.
> > 
> > It is not an issue for a non-privileged guest, since its mfmsr is not
> > patched with the magic page and will always trap into the PR host.
> > 
> > This patch always fails a tbegin attempt from a privileged guest, so
> > that the above issue is prevented. This is benign, since the (guest)
> > kernel currently doesn't initiate transactions.
> > 
> > Test case:
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tbegin_pr.c
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> You need to handle the case where MSR_TM is not set in the guest MSR,
> and give the guest a facility unavailable interrupt.
Thanks for the catch.
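What "always fail tbegin" records can be sketched as pure helpers (the failure-cause value and the CR0 encoding below are assumptions based on the TM failure conventions, not the final patch; the TEXASR bit positions are the real ones):

```c
#include <assert.h>
#include <stdint.h>

#define TEXASR_FS    (1ULL << (63 - 36))  /* failure summary */
#define TEXASR_EX    (1ULL << (63 - 37))  /* TFIAR exact */
#define TEXASR_FC_LG (63 - 7)             /* failure code */

#define CR0_SHIFT 28
#define CR0_MASK  0xFu

/* CR0 = 0b0010 after tbegin. tells the guest the transaction failed. */
static uint32_t cr_after_failed_tbegin(uint32_t cr)
{
	return (cr & ~(CR0_MASK << CR0_SHIFT)) | (0x2u << CR0_SHIFT);
}

/* Record a failure cause in TEXASR for the deliberately failed tbegin. */
static uint64_t texasr_after_failed_tbegin(uint8_t cause)
{
	return TEXASR_FS | TEXASR_EX | ((uint64_t)cause << TEXASR_FC_LG);
}
```

The guest's `beq` failure-handler branch then fires immediately after the emulated tbegin., which sidesteps the stale magic-page mfmsr problem described in the commit message.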

> 
> [snip]
> 
> > --- a/arch/powerpc/kvm/book3s_pr.c
> > +++ b/arch/powerpc/kvm/book3s_pr.c
> > @@ -255,7 +255,7 @@ static inline void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu)
> >  	tm_disable();
> >  }
> >  
> > -static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
> > +inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu)
> 
> You should probably remove the 'inline' here too.
OK.

BR,
- Simon


* Re: [PATCH 22/26] KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM.
  2018-01-23  9:36     ` Paul Mackerras
@ 2018-01-30  3:13       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:13 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 08:36:44PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:35PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch adds host emulation when guest PR KVM executes "trechkpt.",
> > which is a privileged instruction and will trap into host.
> > 
> > We first copy the vcpu's current register content into the vcpu TM
> > checkpoint area, then perform kvmppc_restore_tm_pr() to do trechkpt.
> > with the updated vcpu TM checkpoint values.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> [snip]
> 
> > +static void kvmppc_emulate_trchkpt(struct kvm_vcpu *vcpu)
> > +{
> > +	unsigned long guest_msr = kvmppc_get_msr(vcpu);
> > +
> > +	preempt_disable();
> > +	vcpu->arch.save_msr_tm = MSR_TS_S;
> > +	vcpu->arch.save_msr_tm &= ~(MSR_FP | MSR_VEC | MSR_VSX);
> 
> This looks odd, since you are clearing bits when you have just set
> save_msr_tm to a constant value that doesn't have these bits set.
> This could be taken as a sign that the previous line has a bug and you
> meant "|=" or something similar instead of "=".  I think you probably
> did mean "=", in which case you should remove the line clearing
> FP/VEC/VSX.

I will rework and remove "save_msr_tm" from the code.
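The pattern Paul flags is a dead store, which a tiny sketch makes visible (the helper name is made up; the MSR bit positions are the real ones):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_TS_S (1ULL << 33)   /* transaction suspended */
#define MSR_VEC  (1ULL << 25)
#define MSR_VSX  (1ULL << 23)
#define MSR_FP   (1ULL << 13)

/* The two-statement form from the patch: the second statement clears
 * bits that a value freshly assigned MSR_TS_S can never have set. */
static uint64_t save_msr_tm_as_posted(void)
{
	uint64_t v = MSR_TS_S;
	v &= ~(MSR_FP | MSR_VEC | MSR_VSX);   /* no-op: dead store */
	return v;
}
```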

Thanks,
- Simon


* Re: [PATCH 21/26] KVM: PPC: Book3S PR: adds emulation for treclaim.
  2018-01-23  9:23     ` Paul Mackerras
@ 2018-01-30  3:18       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:18 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 08:23:23PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:34PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch adds support for "treclaim." emulation when PR KVM guest
> > executes treclaim. and traps to host.
> > 
> > We first do treclaim. to save the TM checkpoint. Then it is necessary
> > to update the vcpu's current register content with the checkpointed
> > values. When we rfid into the guest again, that vcpu register content
> > (now the checkpoint values) will be loaded into the registers.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/include/asm/reg.h    |  4 +++
> >  arch/powerpc/kvm/book3s_emulate.c | 66 ++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 69 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> > index 6c293bc..b3bcf6b 100644
> > --- a/arch/powerpc/include/asm/reg.h
> > +++ b/arch/powerpc/include/asm/reg.h
> > @@ -244,12 +244,16 @@
> >  #define SPRN_TEXASR	0x82	/* Transaction EXception & Summary */
> >  #define SPRN_TEXASRU	0x83	/* ''	   ''	   ''	 Upper 32  */
> >  #define TEXASR_FC_LG	(63 - 7)	/* Failure Code */
> > +#define TEXASR_AB_LG	(63 - 31)	/* Abort */
> > +#define TEXASR_SU_LG	(63 - 32)	/* Suspend */
> >  #define TEXASR_HV_LG	(63 - 34)	/* Hypervisor state*/
> >  #define TEXASR_PR_LG	(63 - 35)	/* Privilege level */
> >  #define TEXASR_FS_LG	(63 - 36)	/* failure summary */
> >  #define TEXASR_EX_LG	(63 - 37)	/* TFIAR exact bit */
> >  #define TEXASR_ROT_LG	(63 - 38)	/* ROT bit */
> >  #define TEXASR_FC	(ASM_CONST(0xFF) << TEXASR_FC_LG)
> > +#define TEXASR_AB	__MASK(TEXASR_AB_LG)
> > +#define TEXASR_SU	__MASK(TEXASR_SU_LG)
> >  #define TEXASR_HV	__MASK(TEXASR_HV_LG)
> >  #define TEXASR_PR	__MASK(TEXASR_PR_LG)
> >  #define TEXASR_FS	__MASK(TEXASR_FS_LG)
> 
> It would be good to collect up all the modifications you need to make
> to reg.h into a single patch at the beginning of the patch series --
> that will make it easier to merge it all.
> 
OK.

> > diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> > index 1eb1900..51c0e20 100644
> > --- a/arch/powerpc/kvm/book3s_emulate.c
> > +++ b/arch/powerpc/kvm/book3s_emulate.c
> 
> [snip]
> 
> > @@ -127,6 +130,42 @@ void kvmppc_copyfrom_vcpu_tm(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
> >  }
> >  
> > +static void kvmppc_emulate_treclaim(struct kvm_vcpu *vcpu, int ra_val)
> > +{
> > +	unsigned long guest_msr = kvmppc_get_msr(vcpu);
> > +	int fc_val = ra_val ? ra_val : 1;
> > +
> > +	kvmppc_save_tm_pr(vcpu);
> > +
> > +	preempt_disable();
> > +	kvmppc_copyfrom_vcpu_tm(vcpu);
> > +	preempt_enable();
> > +
> > +	/*
> > +	 * treclaim need quit to non-transactional state.
> > +	 */
> > +	guest_msr &= ~(MSR_TS_MASK);
> > +	kvmppc_set_msr(vcpu, guest_msr);
> > +
> > +	preempt_disable();
> > +	tm_enable();
> > +	vcpu->arch.texasr = mfspr(SPRN_TEXASR);
> > +	vcpu->arch.texasr &= ~TEXASR_FC;
> > +	vcpu->arch.texasr |= ((u64)fc_val << TEXASR_FC_LG);
> 
> You're doing failure recording here unconditionally, but the
> architecture says that treclaim. only does failure recording if
> TEXASR_FS is not already set.
> 
I need to add that. The CR0 setting is also missing.
Thanks for the catch.
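The architected behavior Paul points out can be sketched as a pure helper (the function name is mine; the TEXASR layout matches the reg.h hunk quoted above):

```c
#include <assert.h>
#include <stdint.h>

#define TEXASR_FS_LG (63 - 36)  /* failure summary */
#define TEXASR_FC_LG (63 - 7)   /* failure code */
#define TEXASR_FS    (1ULL << TEXASR_FS_LG)
#define TEXASR_FC    (0xFFULL << TEXASR_FC_LG)

/*
 * treclaim. failure recording: only write the failure code (and set
 * FS) when FS is not already set; otherwise TEXASR is left untouched.
 */
static uint64_t treclaim_record_failure(uint64_t texasr, uint8_t fc_val)
{
	if (texasr & TEXASR_FS)
		return texasr;              /* already recorded */
	texasr &= ~TEXASR_FC;
	texasr |= ((uint64_t)fc_val << TEXASR_FC_LG) | TEXASR_FS;
	return texasr;
}
```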

[snip]

BR,
- Simon


* Re: [PATCH 23/26] KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest
  2018-01-23  9:44     ` Paul Mackerras
@ 2018-01-30  3:24       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:24 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Tue, Jan 23, 2018 at 08:44:16PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:36PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently a privileged guest runs with TM disabled.
> > 
> > Although the privileged guest cannot initiate a new transaction, it
> > can use tabort to terminate its problem state's transaction. So it is
> > still necessary to emulate tabort. for the privileged guest.
> > 
> > This patch adds emulation of tabort. for the privileged guest.
> > 
> > Tested with:
> > https://github.com/justdoitqd/publicFiles/blob/master/test_tabort.c
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/include/asm/kvm_book3s.h |  1 +
> >  arch/powerpc/kvm/book3s_emulate.c     | 31 +++++++++++++++++++++++++++++++
> >  arch/powerpc/kvm/book3s_pr.c          |  2 +-
> >  3 files changed, 33 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> > index 524cd82..8bd454c 100644
> > --- a/arch/powerpc/include/asm/kvm_book3s.h
> > +++ b/arch/powerpc/include/asm/kvm_book3s.h
> > @@ -258,6 +258,7 @@ extern void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
> >  void kvmppc_save_tm_pr(struct kvm_vcpu *vcpu);
> >  void kvmppc_restore_tm_pr(struct kvm_vcpu *vcpu);
> >  void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu);
> > +void kvmppc_save_tm_sprs(struct kvm_vcpu *vcpu);
> 
> Why do you add this declaration, and change it from "static inline" to
> "inline" below, when this patch doesn't use it?  Also, making it
> "inline" is pointless if it has a caller outside the source file where
> it's defined (if gcc wants to inline uses of it inside the same source
> file, it will do so anyway even without the "inline" keyword.)
> 
> Paul.
It is a leftover from my previous rework. Sorry, I will remove
them.

Thanks,
- Simon


* Re: [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.
  2018-01-24  4:02     ` Paul Mackerras
@ 2018-01-30  3:26       ` Simon Guo
  -1 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:26 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Wed, Jan 24, 2018 at 03:02:58PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:38PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently the guest kernel doesn't handle TAR facility unavailable
> > exceptions and always runs with the TAR bit on. PR KVM enables TAR
> > lazily. TAR is not a frequently used register and is not included in
> > the SVCPU struct.
> > 
> > To make it work for transactional memory at PR KVM:
> > 1) Flush/give up TAR at kvmppc_save_tm_pr().
> > 2) If we receive a TAR fac unavail exception inside a transaction, the
> > checkpointed TAR might be a TAR value from another process. So we need
> > to treclaim the transaction, load the desired TAR value into the
> > register, and perform trecheckpoint.
> > 3) Load the TAR facility at kvmppc_restore_tm_pr() when TM is active.
> > The reason we always load TAR when restoring TM is that, if we don't,
> > then when there is a TAR fac unavailable exception while TM is active:
> > case 1: if it is the 1st TAR fac unavail exception after tbegin,
> > vcpu->arch.tar should be reloaded as the checkpointed TAR value.
> > case 2: if it is the 2nd or later TAR fac unavail exception after
> > tbegin, vcpu->arch.tar_tm should be reloaded as the checkpointed TAR
> > value.
> > Handling the above two cases would add unnecessary difficulty.
> > 
> > At the end of emulating treclaim., the correct TAR value needs to be
> > loaded into the register if the FSCR_TAR bit is on.
> > At the beginning of emulating trechkpt., TAR needs to be flushed so
> > that the right TAR value can be copied into tar_tm.
> 
> Would it be simpler always to load up TAR when guest_MSR[TM] is 1?
> 
> Paul.
Sure, it will use a similar solution to the math regs.
Thanks for the suggestion,

BR
- Simon


* Re: [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM.
@ 2018-01-30  3:26       ` Simon Guo
  0 siblings, 0 replies; 116+ messages in thread
From: Simon Guo @ 2018-01-30  3:26 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

Hi Paul,
On Wed, Jan 24, 2018 at 03:02:58PM +1100, Paul Mackerras wrote:
> On Thu, Jan 11, 2018 at 06:11:38PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently the guest kernel doesn't handle the TAR facility-unavailable
> > exception and always runs with the TAR bit on. PR KVM enables TAR
> > lazily, since TAR is not a frequently used register and is not
> > included in the SVCPU struct.
> > 
> > To make it work with transactional memory in PR KVM:
> > 1) Flush/give up TAR in kvmppc_save_tm_pr().
> > 2) If we receive a TAR facility-unavailable exception inside a
> > transaction, the checkpointed TAR might be a TAR value from another
> > process. So we need to treclaim the transaction, load the desired TAR
> > value into the register, and then trecheckpoint.
> > 3) Load the TAR facility in kvmppc_restore_tm_pr() when TM is active.
> > The reason we always load TAR when restoring TM is that otherwise,
> > when a TAR facility-unavailable exception occurs while TM is active:
> > case 1: it is the 1st TAR fac-unavail exception after tbegin;
> > vcpu->arch.tar should be reloaded as the checkpointed TAR value.
> > case 2: it is the 2nd or a later TAR fac-unavail exception after
> > tbegin; vcpu->arch.tar_tm should be reloaded as the checkpointed TAR
> > value.
> > Distinguishing these two cases adds unnecessary difficulty.
> > 
> > At the end of emulating treclaim., the correct TAR value needs to be
> > loaded into the register if the FSCR_TAR bit is on.
> > At the beginning of emulating trechkpt., TAR needs to be flushed so
> > that the right TAR value can be copied into tar_tm.
> 
> Would it be simpler always to load up TAR when guest_MSR[TM] is 1?
> 
> Paul.
Sure, it can follow a solution similar to the one for the math regs.
Thanks for the suggestion,

BR
- Simon

^ permalink raw reply	[flat|nested] 116+ messages in thread

end of thread, other threads:[~2018-01-30  3:26 UTC | newest]

Thread overview: 116+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-11 10:11 [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM wei.guo.simon
2018-01-11 10:11 ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 01/26] KVM: PPC: Book3S PR: Move kvmppc_save_tm/kvmppc_restore_tm to separate file wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm() wei.guo.simon
2018-01-11 10:11   ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore wei.guo.simon
2018-01-23  5:42   ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm() Paul Mackerras
2018-01-23  5:42     ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_res Paul Mackerras
2018-01-30  2:33     ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_restore_tm() Simon Guo
2018-01-30  2:33       ` [PATCH 02/26] KVM: PPC: Book3S PR: add new parameter (guest MSR) for kvmppc_save_tm()/kvmppc_res Simon Guo
2018-01-11 10:11 ` [PATCH 03/26] KVM: PPC: Book3S PR: turn on FP/VSX/VMX MSR bits in kvmppc_save_tm() wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 04/26] KVM: PPC: Book3S PR: add C function wrapper for _kvmppc_save/restore_tm() wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  5:49   ` Paul Mackerras
2018-01-23  5:49     ` Paul Mackerras
2018-01-30  2:38     ` Simon Guo
2018-01-30  2:38       ` Simon Guo
2018-01-11 10:11 ` [PATCH 05/26] KVM: PPC: Book3S PR: In PR KVM suspends Transactional state when inject an interrupt wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 06/26] KVM: PPC: Book3S PR: PR KVM pass through MSR TM/TS bits to shadow_msr wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 07/26] KVM: PPC: Book3S PR: add TEXASR related macros wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  5:50   ` Paul Mackerras
2018-01-23  5:50     ` Paul Mackerras
2018-01-11 10:11 ` [PATCH 08/26] KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state guest wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 09/26] KVM: PPC: Book3S PR: implement RFID TM behavior to suppress change from S0 to N0 wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 10/26] KVM: PPC: Book3S PR: set MSR HV bit accordingly for PPC970 and others wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  5:51   ` Paul Mackerras
2018-01-23  5:51     ` Paul Mackerras
2018-01-11 10:11 ` [PATCH 11/26] KVM: PPC: Book3S PR: prevent TS bits change in kvmppc_interrupt_pr() wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 12/26] powerpc: export symbol msr_check_and_set() wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM wei.guo.simon
2018-01-11 10:11   ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR wei.guo.simon
2018-01-23  5:52   ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM Paul Mackerras
2018-01-23  5:52     ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API fo Paul Mackerras
2018-01-30  2:15     ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API for PR KVM Simon Guo
2018-01-30  2:15       ` [PATCH 13/26] KVM: PPC: Book3S PR: adds new kvmppc_copyto_vcpu_tm/kvmppc_copyfrom_vcpu_tm API fo Simon Guo
2018-01-11 10:11 ` [PATCH 14/26] KVM: PPC: Book3S PR: export tm_enable()/tm_disable/tm_abort() APIs wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 15/26] KVM: PPC: Book3S PR: add kvmppc_save/restore_tm_sprs() APIs wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 10:11 ` [PATCH 16/26] KVM: PPC: Book3S PR: add transaction memory save/restore skeleton for PR KVM wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  6:04   ` Paul Mackerras
2018-01-23  6:04     ` Paul Mackerras
2018-01-30  2:57     ` Simon Guo
2018-01-30  2:57       ` Simon Guo
2018-01-11 10:11 ` [PATCH 17/26] KVM: PPC: Book3S PR: add math support for PR KVM HTM wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  7:29   ` Paul Mackerras
2018-01-23  7:29     ` Paul Mackerras
2018-01-30  3:00     ` Simon Guo
2018-01-30  3:00       ` Simon Guo
2018-01-11 10:11 ` [PATCH 18/26] KVM: PPC: Book3S PR: make mtspr/mfspr emulation behavior based on active TM SPRs wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  8:17   ` Paul Mackerras
2018-01-23  8:17     ` Paul Mackerras
2018-01-30  3:02     ` Simon Guo
2018-01-30  3:02       ` Simon Guo
2018-01-11 10:11 ` [PATCH 19/26] KVM: PPC: Book3S PR: always fail transaction in guest privilege state wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  8:30   ` Paul Mackerras
2018-01-23  8:30     ` Paul Mackerras
2018-01-30  3:11     ` Simon Guo
2018-01-30  3:11       ` Simon Guo
2018-01-11 10:11 ` [PATCH 20/26] KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at " wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  9:08   ` Paul Mackerras
2018-01-23  9:08     ` [PATCH 20/26] KVM: PPC: Book3S PR: enable NV reg restore for reading TM SPR at guest privilege s Paul Mackerras
2018-01-11 10:11 ` [PATCH 21/26] KVM: PPC: Book3S PR: adds emulation for treclaim wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  9:23   ` Paul Mackerras
2018-01-23  9:23     ` Paul Mackerras
2018-01-30  3:18     ` Simon Guo
2018-01-30  3:18       ` Simon Guo
2018-01-11 10:11 ` [PATCH 22/26] KVM: PPC: Book3S PR: add emulation for trechkpt in PR KVM wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  9:36   ` Paul Mackerras
2018-01-23  9:36     ` Paul Mackerras
2018-01-30  3:13     ` Simon Guo
2018-01-30  3:13       ` Simon Guo
2018-01-11 10:11 ` [PATCH 23/26] KVM: PPC: Book3S PR: add emulation for tabort. for privilege guest wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-23  9:44   ` Paul Mackerras
2018-01-23  9:44     ` Paul Mackerras
2018-01-30  3:24     ` Simon Guo
2018-01-30  3:24       ` Simon Guo
2018-01-11 10:11 ` [PATCH 24/26] KVM: PPC: Book3S PR: add guard code to prevent returning to guest with PR=0 and Transactional state wei.guo.simon
2018-01-11 10:11   ` [PATCH 24/26] KVM: PPC: Book3S PR: add guard code to prevent returning to guest with PR=0 and Transa wei.guo.simon
2018-01-11 10:11 ` [PATCH 25/26] KVM: PPC: Book3S PR: Support TAR handling for PR KVM HTM wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-24  4:02   ` Paul Mackerras
2018-01-24  4:02     ` Paul Mackerras
2018-01-30  3:26     ` Simon Guo
2018-01-30  3:26       ` Simon Guo
2018-01-11 10:11 ` [PATCH 26/26] KVM: PPC: Book3S PR: enable HTM for PR KVM for KVM_CHECK_EXTENSION ioctl wei.guo.simon
2018-01-11 10:11   ` wei.guo.simon
2018-01-11 13:56 ` [PATCH 00/26] KVM: PPC: Book3S PR: Transaction memory support on PR KVM Gustavo Romero
2018-01-11 13:56   ` Gustavo Romero
2018-01-11 22:04   ` Benjamin Herrenschmidt
2018-01-11 22:04     ` Benjamin Herrenschmidt
2018-01-12  2:41   ` Simon Guo
2018-01-12  2:41     ` Simon Guo
2018-01-23  5:38 ` Paul Mackerras
2018-01-23  5:38   ` Paul Mackerras
2018-01-23  7:16   ` Paul Mackerras
2018-01-23  7:16     ` Paul Mackerras
2018-01-27 13:10   ` Simon Guo
2018-01-27 13:10     ` Simon Guo
