All of lore.kernel.org
* [PATCH 00/11] KVM: PPC: reconstruct mmio emulation with analyse_instr()
@ 2018-04-25 11:54 ` wei.guo.simon
  0 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

We already have analyse_instr(), which analyzes an instruction for its
type, size, additional flags, etc. kvmppc_emulate_loadstore() largely
duplicates that work, so it is good to utilize analyse_instr() to
reconstruct the code. The advantage is that the decoding logic is
shared and the code becomes cleaner to maintain.

This patch series reconstructs kvmppc_emulate_loadstore() for various load/store
instructions. 

The test case is located at:
https://github.com/justdoitqd/publicFiles/blob/master/test_mmio.c

- Tested on both PR and HV KVM.
- Also tested with a little endian host and a big endian guest.

Tested instruction list: 
	lbz lbzu lbzx ld ldbrx
	ldu ldx lfd lfdu lfdx
	lfiwax lfiwzx lfs lfsu lfsx
	lha lhau lhax lhbrx lhz
	lhzu lhzx lvx lwax lwbrx
	lwz lwzu lwzx lxsdx lxsiwax
	lxsiwzx lxsspx lxvd2x lxvdsx lxvw4x
	stb stbu stbx std stdbrx
	stdu stdx stfd stfdu stfdx
	stfiwx stfs stfsx sth sthbrx
	sthu sthx stvx stw stwbrx
	stwu stwx stxsdx stxsiwx stxsspx
	stxvd2x stxvw4x

Simon Guo (11):
  KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[]
    into it
  KVM: PPC: move nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
  KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX
    store
  KVM: PPC: fix incorrect element_size for stxsiwx in analyse_instr
  KVM: PPC: add GPR RA update skeleton for MMIO emulation
  KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio
    emulation
  KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation
    with analyse_instr() input
  KVM: PPC: add giveup_ext() hook for PPC KVM ops
  KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with
    analyse_instr() input
  KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation
    with analyse_instr() input
  KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation
    with analyse_instr() input

 arch/powerpc/include/asm/kvm_book3s.h    |  20 +-
 arch/powerpc/include/asm/kvm_book3s_64.h |  20 +-
 arch/powerpc/include/asm/kvm_booke.h     |  20 +-
 arch/powerpc/include/asm/kvm_host.h      |   9 +-
 arch/powerpc/include/asm/kvm_ppc.h       |   2 +
 arch/powerpc/include/asm/sstep.h         |   2 +-
 arch/powerpc/kernel/asm-offsets.c        |  22 +-
 arch/powerpc/kvm/book3s_32_mmu.c         |   2 +-
 arch/powerpc/kvm/book3s_64_vio_hv.c      |   2 +-
 arch/powerpc/kvm/book3s_hv.c             |  11 +-
 arch/powerpc/kvm/book3s_hv_builtin.c     |   6 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |  15 +-
 arch/powerpc/kvm/book3s_hv_rm_xics.c     |   2 +-
 arch/powerpc/kvm/book3s_hv_tm.c          |  10 +-
 arch/powerpc/kvm/book3s_hv_tm_builtin.c  |  10 +-
 arch/powerpc/kvm/book3s_pr.c             |  73 +--
 arch/powerpc/kvm/book3s_xive_template.c  |   4 +-
 arch/powerpc/kvm/booke.c                 |  41 +-
 arch/powerpc/kvm/booke_emulate.c         |   6 +-
 arch/powerpc/kvm/e500_emulate.c          |   6 +-
 arch/powerpc/kvm/e500_mmu.c              |   2 +-
 arch/powerpc/kvm/emulate_loadstore.c     | 734 +++++++++----------------------
 arch/powerpc/kvm/powerpc.c               |  53 ++-
 arch/powerpc/lib/sstep.c                 |   2 +-
 24 files changed, 407 insertions(+), 667 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 111+ messages in thread

* [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

Currently the vCPU registers are scattered across the kvm_vcpu_arch
structure, and it is neater to organize them into a pt_regs structure.

This also enables reconstructing the MMIO emulation code with
analyse_instr() later.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h   |  4 +--
 arch/powerpc/include/asm/kvm_host.h     |  2 +-
 arch/powerpc/kernel/asm-offsets.c       |  4 +--
 arch/powerpc/kvm/book3s_64_vio_hv.c     |  2 +-
 arch/powerpc/kvm/book3s_hv_builtin.c    |  6 ++--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c     | 15 ++++-----
 arch/powerpc/kvm/book3s_hv_rm_xics.c    |  2 +-
 arch/powerpc/kvm/book3s_pr.c            | 56 ++++++++++++++++-----------------
 arch/powerpc/kvm/book3s_xive_template.c |  4 +--
 arch/powerpc/kvm/e500_emulate.c         |  4 +--
 10 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4c02a73..9de4127 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -273,12 +273,12 @@ static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-	vcpu->arch.gpr[num] = val;
+	vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	return vcpu->arch.gpr[num];
+	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 17498e9..1c93d82 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -486,7 +486,7 @@ struct kvm_vcpu_arch {
 	struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
 #endif
 
-	ulong gpr[32];
+	struct pt_regs regs;
 
 	struct thread_fp_state fp;
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6bee65f..e8a78a5 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -425,7 +425,7 @@ int main(void)
 	OFFSET(VCPU_HOST_STACK, kvm_vcpu, arch.host_stack);
 	OFFSET(VCPU_HOST_PID, kvm_vcpu, arch.host_pid);
 	OFFSET(VCPU_GUEST_PID, kvm_vcpu, arch.pid);
-	OFFSET(VCPU_GPRS, kvm_vcpu, arch.gpr);
+	OFFSET(VCPU_GPRS, kvm_vcpu, arch.regs.gpr);
 	OFFSET(VCPU_VRSAVE, kvm_vcpu, arch.vrsave);
 	OFFSET(VCPU_FPRS, kvm_vcpu, arch.fp.fpr);
 #ifdef CONFIG_ALTIVEC
@@ -438,7 +438,7 @@ int main(void)
 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
 	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
 	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6651f73..bdd872a 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -571,7 +571,7 @@ long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 	page = stt->pages[idx / TCES_PER_PAGE];
 	tbl = (u64 *)page_address(page);
 
-	vcpu->arch.gpr[4] = tbl[idx % TCES_PER_PAGE];
+	vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE];
 
 	return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index de18299..2b12758 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -211,9 +211,9 @@ long kvmppc_h_random(struct kvm_vcpu *vcpu)
 
 	/* Only need to do the expensive mfmsr() on radix */
 	if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR))
-		r = powernv_get_random_long(&vcpu->arch.gpr[4]);
+		r = powernv_get_random_long(&vcpu->arch.regs.gpr[4]);
 	else
-		r = powernv_get_random_real_mode(&vcpu->arch.gpr[4]);
+		r = powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]);
 	if (r)
 		return H_SUCCESS;
 
@@ -562,7 +562,7 @@ unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu)
 {
 	if (!kvmppc_xics_enabled(vcpu))
 		return H_TOO_HARD;
-	vcpu->arch.gpr[5] = get_tb();
+	vcpu->arch.regs.gpr[5] = get_tb();
 	if (xive_enabled()) {
 		if (is_rm())
 			return xive_rm_h_xirr(vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index e1c083f..3d3ce7a 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -418,7 +418,8 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 		    long pte_index, unsigned long pteh, unsigned long ptel)
 {
 	return kvmppc_do_h_enter(vcpu->kvm, flags, pte_index, pteh, ptel,
-				 vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]);
+				 vcpu->arch.pgdir, true,
+				 &vcpu->arch.regs.gpr[4]);
 }
 
 #ifdef __BIG_ENDIAN__
@@ -565,13 +566,13 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 		     unsigned long pte_index, unsigned long avpn)
 {
 	return kvmppc_do_h_remove(vcpu->kvm, flags, pte_index, avpn,
-				  &vcpu->arch.gpr[4]);
+				  &vcpu->arch.regs.gpr[4]);
 }
 
 long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	unsigned long *args = &vcpu->arch.gpr[4];
+	unsigned long *args = &vcpu->arch.regs.gpr[4];
 	__be64 *hp, *hptes[4];
 	unsigned long tlbrb[4];
 	long int i, j, k, n, found, indexes[4];
@@ -791,8 +792,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 			r = rev[i].guest_rpte | (r & (HPTE_R_R | HPTE_R_C));
 			r &= ~HPTE_GR_RESERVED;
 		}
-		vcpu->arch.gpr[4 + i * 2] = v;
-		vcpu->arch.gpr[5 + i * 2] = r;
+		vcpu->arch.regs.gpr[4 + i * 2] = v;
+		vcpu->arch.regs.gpr[5 + i * 2] = r;
 	}
 	return H_SUCCESS;
 }
@@ -838,7 +839,7 @@ long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
 			}
 		}
 	}
-	vcpu->arch.gpr[4] = gr;
+	vcpu->arch.regs.gpr[4] = gr;
 	ret = H_SUCCESS;
  out:
 	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
@@ -885,7 +886,7 @@ long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags,
 			kvmppc_set_dirty_from_hpte(kvm, v, gr);
 		}
 	}
-	vcpu->arch.gpr[4] = gr;
+	vcpu->arch.regs.gpr[4] = gr;
 	ret = H_SUCCESS;
  out:
 	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
index 2a86261..758d1d2 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
@@ -517,7 +517,7 @@ unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu)
 	} while (!icp_rm_try_update(icp, old_state, new_state));
 
 	/* Return the result in GPR4 */
-	vcpu->arch.gpr[4] = xirr;
+	vcpu->arch.regs.gpr[4] = xirr;
 
 	return check_too_hard(xics, icp);
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d3f304d..899bc9a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -147,20 +147,20 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
 
-	svcpu->gpr[0] = vcpu->arch.gpr[0];
-	svcpu->gpr[1] = vcpu->arch.gpr[1];
-	svcpu->gpr[2] = vcpu->arch.gpr[2];
-	svcpu->gpr[3] = vcpu->arch.gpr[3];
-	svcpu->gpr[4] = vcpu->arch.gpr[4];
-	svcpu->gpr[5] = vcpu->arch.gpr[5];
-	svcpu->gpr[6] = vcpu->arch.gpr[6];
-	svcpu->gpr[7] = vcpu->arch.gpr[7];
-	svcpu->gpr[8] = vcpu->arch.gpr[8];
-	svcpu->gpr[9] = vcpu->arch.gpr[9];
-	svcpu->gpr[10] = vcpu->arch.gpr[10];
-	svcpu->gpr[11] = vcpu->arch.gpr[11];
-	svcpu->gpr[12] = vcpu->arch.gpr[12];
-	svcpu->gpr[13] = vcpu->arch.gpr[13];
+	svcpu->gpr[0] = vcpu->arch.regs.gpr[0];
+	svcpu->gpr[1] = vcpu->arch.regs.gpr[1];
+	svcpu->gpr[2] = vcpu->arch.regs.gpr[2];
+	svcpu->gpr[3] = vcpu->arch.regs.gpr[3];
+	svcpu->gpr[4] = vcpu->arch.regs.gpr[4];
+	svcpu->gpr[5] = vcpu->arch.regs.gpr[5];
+	svcpu->gpr[6] = vcpu->arch.regs.gpr[6];
+	svcpu->gpr[7] = vcpu->arch.regs.gpr[7];
+	svcpu->gpr[8] = vcpu->arch.regs.gpr[8];
+	svcpu->gpr[9] = vcpu->arch.regs.gpr[9];
+	svcpu->gpr[10] = vcpu->arch.regs.gpr[10];
+	svcpu->gpr[11] = vcpu->arch.regs.gpr[11];
+	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
+	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
 	svcpu->cr  = vcpu->arch.cr;
 	svcpu->xer = vcpu->arch.xer;
 	svcpu->ctr = vcpu->arch.ctr;
@@ -194,20 +194,20 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
 	if (!svcpu->in_use)
 		goto out;
 
-	vcpu->arch.gpr[0] = svcpu->gpr[0];
-	vcpu->arch.gpr[1] = svcpu->gpr[1];
-	vcpu->arch.gpr[2] = svcpu->gpr[2];
-	vcpu->arch.gpr[3] = svcpu->gpr[3];
-	vcpu->arch.gpr[4] = svcpu->gpr[4];
-	vcpu->arch.gpr[5] = svcpu->gpr[5];
-	vcpu->arch.gpr[6] = svcpu->gpr[6];
-	vcpu->arch.gpr[7] = svcpu->gpr[7];
-	vcpu->arch.gpr[8] = svcpu->gpr[8];
-	vcpu->arch.gpr[9] = svcpu->gpr[9];
-	vcpu->arch.gpr[10] = svcpu->gpr[10];
-	vcpu->arch.gpr[11] = svcpu->gpr[11];
-	vcpu->arch.gpr[12] = svcpu->gpr[12];
-	vcpu->arch.gpr[13] = svcpu->gpr[13];
+	vcpu->arch.regs.gpr[0] = svcpu->gpr[0];
+	vcpu->arch.regs.gpr[1] = svcpu->gpr[1];
+	vcpu->arch.regs.gpr[2] = svcpu->gpr[2];
+	vcpu->arch.regs.gpr[3] = svcpu->gpr[3];
+	vcpu->arch.regs.gpr[4] = svcpu->gpr[4];
+	vcpu->arch.regs.gpr[5] = svcpu->gpr[5];
+	vcpu->arch.regs.gpr[6] = svcpu->gpr[6];
+	vcpu->arch.regs.gpr[7] = svcpu->gpr[7];
+	vcpu->arch.regs.gpr[8] = svcpu->gpr[8];
+	vcpu->arch.regs.gpr[9] = svcpu->gpr[9];
+	vcpu->arch.regs.gpr[10] = svcpu->gpr[10];
+	vcpu->arch.regs.gpr[11] = svcpu->gpr[11];
+	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
+	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
 	vcpu->arch.cr  = svcpu->cr;
 	vcpu->arch.xer = svcpu->xer;
 	vcpu->arch.ctr = svcpu->ctr;
diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
index c7a5dea..b5940fc 100644
--- a/arch/powerpc/kvm/book3s_xive_template.c
+++ b/arch/powerpc/kvm/book3s_xive_template.c
@@ -327,7 +327,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_xirr)(struct kvm_vcpu *vcpu)
 	 */
 
 	/* Return interrupt and old CPPR in GPR4 */
-	vcpu->arch.gpr[4] = hirq | (old_cppr << 24);
+	vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24);
 
 	return H_SUCCESS;
 }
@@ -362,7 +362,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_ipoll)(struct kvm_vcpu *vcpu, unsigned long
 	hirq = GLUE(X_PFX,scan_interrupts)(xc, pending, scan_poll);
 
 	/* Return interrupt and old CPPR in GPR4 */
-	vcpu->arch.gpr[4] = hirq | (xc->cppr << 24);
+	vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24);
 
 	return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 990db69..8f871fb 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -53,7 +53,7 @@ static int dbell2prio(ulong param)
 
 static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 {
-	ulong param = vcpu->arch.gpr[rb];
+	ulong param = vcpu->arch.regs.gpr[rb];
 	int prio = dbell2prio(param);
 
 	if (prio < 0)
@@ -65,7 +65,7 @@ static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 
 static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb)
 {
-	ulong param = vcpu->arch.gpr[rb];
+	ulong param = vcpu->arch.regs.gpr[rb];
 	int prio = dbell2prio(rb);
 	int pir = param & PPC_DBELL_PIR_MASK;
 	int i;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread

  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

The registers are currently scattered across the kvm_vcpu_arch
structure; it is neater to organize them into a pt_regs structure.

This will also make it possible to reconstruct the MMIO emulation
code with analyse_instr() later.
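
The layout change can be illustrated with a small, self-contained C
sketch. The structures below are deliberately simplified stand-ins
(the real pt_regs and kvm_vcpu_arch carry many more fields); the point
is that the gpr[] array moves inside an embedded pt_regs while the
kvmppc_{get,set}_gpr() accessor signatures stay the same, so callers
going through the accessors are unaffected:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct pt_regs {
	unsigned long gpr[32];
};

struct kvm_vcpu_arch {
	struct pt_regs regs;	/* replaces the old "ulong gpr[32]" field */
};

struct kvm_vcpu {
	struct kvm_vcpu_arch arch;
};

/* Accessor signatures are unchanged; only the backing storage moved. */
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num,
				  unsigned long val)
{
	vcpu->arch.regs.gpr[num] = val;
}

static inline unsigned long kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
	return vcpu->arch.regs.gpr[num];
}
```

Only direct vcpu->arch.gpr[i] references (as in book3s_pr.c or the
asm-offsets entries) have to be rewritten to vcpu->arch.regs.gpr[i].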

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h   |  4 +--
 arch/powerpc/include/asm/kvm_host.h     |  2 +-
 arch/powerpc/kernel/asm-offsets.c       |  4 +--
 arch/powerpc/kvm/book3s_64_vio_hv.c     |  2 +-
 arch/powerpc/kvm/book3s_hv_builtin.c    |  6 ++--
 arch/powerpc/kvm/book3s_hv_rm_mmu.c     | 15 ++++-----
 arch/powerpc/kvm/book3s_hv_rm_xics.c    |  2 +-
 arch/powerpc/kvm/book3s_pr.c            | 56 ++++++++++++++++-----------------
 arch/powerpc/kvm/book3s_xive_template.c |  4 +--
 arch/powerpc/kvm/e500_emulate.c         |  4 +--
 10 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 4c02a73..9de4127 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -273,12 +273,12 @@ static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-	vcpu->arch.gpr[num] = val;
+	vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	return vcpu->arch.gpr[num];
+	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 17498e9..1c93d82 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -486,7 +486,7 @@ struct kvm_vcpu_arch {
 	struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
 #endif
 
-	ulong gpr[32];
+	struct pt_regs regs;
 
 	struct thread_fp_state fp;
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6bee65f..e8a78a5 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -425,7 +425,7 @@ int main(void)
 	OFFSET(VCPU_HOST_STACK, kvm_vcpu, arch.host_stack);
 	OFFSET(VCPU_HOST_PID, kvm_vcpu, arch.host_pid);
 	OFFSET(VCPU_GUEST_PID, kvm_vcpu, arch.pid);
-	OFFSET(VCPU_GPRS, kvm_vcpu, arch.gpr);
+	OFFSET(VCPU_GPRS, kvm_vcpu, arch.regs.gpr);
 	OFFSET(VCPU_VRSAVE, kvm_vcpu, arch.vrsave);
 	OFFSET(VCPU_FPRS, kvm_vcpu, arch.fp.fpr);
 #ifdef CONFIG_ALTIVEC
@@ -438,7 +438,7 @@ int main(void)
 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
 	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
 	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index 6651f73..bdd872a 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -571,7 +571,7 @@ long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 	page = stt->pages[idx / TCES_PER_PAGE];
 	tbl = (u64 *)page_address(page);
 
-	vcpu->arch.gpr[4] = tbl[idx % TCES_PER_PAGE];
+	vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE];
 
 	return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index de18299..2b12758 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -211,9 +211,9 @@ long kvmppc_h_random(struct kvm_vcpu *vcpu)
 
 	/* Only need to do the expensive mfmsr() on radix */
 	if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR))
-		r = powernv_get_random_long(&vcpu->arch.gpr[4]);
+		r = powernv_get_random_long(&vcpu->arch.regs.gpr[4]);
 	else
-		r = powernv_get_random_real_mode(&vcpu->arch.gpr[4]);
+		r = powernv_get_random_real_mode(&vcpu->arch.regs.gpr[4]);
 	if (r)
 		return H_SUCCESS;
 
@@ -562,7 +562,7 @@ unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu)
 {
 	if (!kvmppc_xics_enabled(vcpu))
 		return H_TOO_HARD;
-	vcpu->arch.gpr[5] = get_tb();
+	vcpu->arch.regs.gpr[5] = get_tb();
 	if (xive_enabled()) {
 		if (is_rm())
 			return xive_rm_h_xirr(vcpu);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index e1c083f..3d3ce7a 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -418,7 +418,8 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 		    long pte_index, unsigned long pteh, unsigned long ptel)
 {
 	return kvmppc_do_h_enter(vcpu->kvm, flags, pte_index, pteh, ptel,
-				 vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]);
+				 vcpu->arch.pgdir, true,
+				 &vcpu->arch.regs.gpr[4]);
 }
 
 #ifdef __BIG_ENDIAN__
@@ -565,13 +566,13 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 		     unsigned long pte_index, unsigned long avpn)
 {
 	return kvmppc_do_h_remove(vcpu->kvm, flags, pte_index, avpn,
-				  &vcpu->arch.gpr[4]);
+				  &vcpu->arch.regs.gpr[4]);
 }
 
 long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	unsigned long *args = &vcpu->arch.gpr[4];
+	unsigned long *args = &vcpu->arch.regs.gpr[4];
 	__be64 *hp, *hptes[4];
 	unsigned long tlbrb[4];
 	long int i, j, k, n, found, indexes[4];
@@ -791,8 +792,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 			r = rev[i].guest_rpte | (r & (HPTE_R_R | HPTE_R_C));
 			r &= ~HPTE_GR_RESERVED;
 		}
-		vcpu->arch.gpr[4 + i * 2] = v;
-		vcpu->arch.gpr[5 + i * 2] = r;
+		vcpu->arch.regs.gpr[4 + i * 2] = v;
+		vcpu->arch.regs.gpr[5 + i * 2] = r;
 	}
 	return H_SUCCESS;
 }
@@ -838,7 +839,7 @@ long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
 			}
 		}
 	}
-	vcpu->arch.gpr[4] = gr;
+	vcpu->arch.regs.gpr[4] = gr;
 	ret = H_SUCCESS;
  out:
 	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
@@ -885,7 +886,7 @@ long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags,
 			kvmppc_set_dirty_from_hpte(kvm, v, gr);
 		}
 	}
-	vcpu->arch.gpr[4] = gr;
+	vcpu->arch.regs.gpr[4] = gr;
 	ret = H_SUCCESS;
  out:
 	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
index 2a86261..758d1d2 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
@@ -517,7 +517,7 @@ unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu)
 	} while (!icp_rm_try_update(icp, old_state, new_state));
 
 	/* Return the result in GPR4 */
-	vcpu->arch.gpr[4] = xirr;
+	vcpu->arch.regs.gpr[4] = xirr;
 
 	return check_too_hard(xics, icp);
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index d3f304d..899bc9a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -147,20 +147,20 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
 
-	svcpu->gpr[0] = vcpu->arch.gpr[0];
-	svcpu->gpr[1] = vcpu->arch.gpr[1];
-	svcpu->gpr[2] = vcpu->arch.gpr[2];
-	svcpu->gpr[3] = vcpu->arch.gpr[3];
-	svcpu->gpr[4] = vcpu->arch.gpr[4];
-	svcpu->gpr[5] = vcpu->arch.gpr[5];
-	svcpu->gpr[6] = vcpu->arch.gpr[6];
-	svcpu->gpr[7] = vcpu->arch.gpr[7];
-	svcpu->gpr[8] = vcpu->arch.gpr[8];
-	svcpu->gpr[9] = vcpu->arch.gpr[9];
-	svcpu->gpr[10] = vcpu->arch.gpr[10];
-	svcpu->gpr[11] = vcpu->arch.gpr[11];
-	svcpu->gpr[12] = vcpu->arch.gpr[12];
-	svcpu->gpr[13] = vcpu->arch.gpr[13];
+	svcpu->gpr[0] = vcpu->arch.regs.gpr[0];
+	svcpu->gpr[1] = vcpu->arch.regs.gpr[1];
+	svcpu->gpr[2] = vcpu->arch.regs.gpr[2];
+	svcpu->gpr[3] = vcpu->arch.regs.gpr[3];
+	svcpu->gpr[4] = vcpu->arch.regs.gpr[4];
+	svcpu->gpr[5] = vcpu->arch.regs.gpr[5];
+	svcpu->gpr[6] = vcpu->arch.regs.gpr[6];
+	svcpu->gpr[7] = vcpu->arch.regs.gpr[7];
+	svcpu->gpr[8] = vcpu->arch.regs.gpr[8];
+	svcpu->gpr[9] = vcpu->arch.regs.gpr[9];
+	svcpu->gpr[10] = vcpu->arch.regs.gpr[10];
+	svcpu->gpr[11] = vcpu->arch.regs.gpr[11];
+	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
+	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
 	svcpu->cr  = vcpu->arch.cr;
 	svcpu->xer = vcpu->arch.xer;
 	svcpu->ctr = vcpu->arch.ctr;
@@ -194,20 +194,20 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
 	if (!svcpu->in_use)
 		goto out;
 
-	vcpu->arch.gpr[0] = svcpu->gpr[0];
-	vcpu->arch.gpr[1] = svcpu->gpr[1];
-	vcpu->arch.gpr[2] = svcpu->gpr[2];
-	vcpu->arch.gpr[3] = svcpu->gpr[3];
-	vcpu->arch.gpr[4] = svcpu->gpr[4];
-	vcpu->arch.gpr[5] = svcpu->gpr[5];
-	vcpu->arch.gpr[6] = svcpu->gpr[6];
-	vcpu->arch.gpr[7] = svcpu->gpr[7];
-	vcpu->arch.gpr[8] = svcpu->gpr[8];
-	vcpu->arch.gpr[9] = svcpu->gpr[9];
-	vcpu->arch.gpr[10] = svcpu->gpr[10];
-	vcpu->arch.gpr[11] = svcpu->gpr[11];
-	vcpu->arch.gpr[12] = svcpu->gpr[12];
-	vcpu->arch.gpr[13] = svcpu->gpr[13];
+	vcpu->arch.regs.gpr[0] = svcpu->gpr[0];
+	vcpu->arch.regs.gpr[1] = svcpu->gpr[1];
+	vcpu->arch.regs.gpr[2] = svcpu->gpr[2];
+	vcpu->arch.regs.gpr[3] = svcpu->gpr[3];
+	vcpu->arch.regs.gpr[4] = svcpu->gpr[4];
+	vcpu->arch.regs.gpr[5] = svcpu->gpr[5];
+	vcpu->arch.regs.gpr[6] = svcpu->gpr[6];
+	vcpu->arch.regs.gpr[7] = svcpu->gpr[7];
+	vcpu->arch.regs.gpr[8] = svcpu->gpr[8];
+	vcpu->arch.regs.gpr[9] = svcpu->gpr[9];
+	vcpu->arch.regs.gpr[10] = svcpu->gpr[10];
+	vcpu->arch.regs.gpr[11] = svcpu->gpr[11];
+	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
+	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
 	vcpu->arch.cr  = svcpu->cr;
 	vcpu->arch.xer = svcpu->xer;
 	vcpu->arch.ctr = svcpu->ctr;
diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
index c7a5dea..b5940fc 100644
--- a/arch/powerpc/kvm/book3s_xive_template.c
+++ b/arch/powerpc/kvm/book3s_xive_template.c
@@ -327,7 +327,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_xirr)(struct kvm_vcpu *vcpu)
 	 */
 
 	/* Return interrupt and old CPPR in GPR4 */
-	vcpu->arch.gpr[4] = hirq | (old_cppr << 24);
+	vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24);
 
 	return H_SUCCESS;
 }
@@ -362,7 +362,7 @@ X_STATIC unsigned long GLUE(X_PFX,h_ipoll)(struct kvm_vcpu *vcpu, unsigned long
 	hirq = GLUE(X_PFX,scan_interrupts)(xc, pending, scan_poll);
 
 	/* Return interrupt and old CPPR in GPR4 */
-	vcpu->arch.gpr[4] = hirq | (xc->cppr << 24);
+	vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24);
 
 	return H_SUCCESS;
 }
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 990db69..8f871fb 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -53,7 +53,7 @@ static int dbell2prio(ulong param)
 
 static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 {
-	ulong param = vcpu->arch.gpr[rb];
+	ulong param = vcpu->arch.regs.gpr[rb];
 	int prio = dbell2prio(param);
 
 	if (prio < 0)
@@ -65,7 +65,7 @@ static int kvmppc_e500_emul_msgclr(struct kvm_vcpu *vcpu, int rb)
 
 static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb)
 {
-	ulong param = vcpu->arch.gpr[rb];
+	ulong param = vcpu->arch.regs.gpr[rb];
 	int prio = dbell2prio(rb);
 	int pir = param & PPC_DBELL_PIR_MASK;
 	int i;
-- 
1.8.3.1



* [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch moves the nip/ctr/lr/xer registers from their scattered
locations in kvm_vcpu_arch into the pt_regs structure.

The cr register is an "unsigned long" in pt_regs but a u32 in
vcpu->arch, so it needs more consideration and may be moved in a
later patch.
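
A similar sketch for this step (again with simplified, illustrative
struct definitions): pt_regs names the program counter "nip" and the
link register "link", so the old vcpu->arch.pc and vcpu->arch.lr
accesses become vcpu->arch.regs.nip and vcpu->arch.regs.link, while
the kvmppc_{get,set}_pc/lr accessors keep their signatures:

```c
#include <assert.h>

/* Simplified stand-ins (illustration only); note the pt_regs field names. */
struct pt_regs {
	unsigned long nip;	/* was vcpu->arch.pc  */
	unsigned long ctr;	/* was vcpu->arch.ctr */
	unsigned long link;	/* was vcpu->arch.lr  */
	unsigned long xer;	/* was vcpu->arch.xer */
};

struct kvm_vcpu_arch {
	struct pt_regs regs;
};

struct kvm_vcpu {
	struct kvm_vcpu_arch arch;
};

static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, unsigned long val)
{
	vcpu->arch.regs.nip = val;
}

static inline unsigned long kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.regs.nip;
}

static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, unsigned long val)
{
	vcpu->arch.regs.link = val;
}

static inline unsigned long kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.regs.link;
}
```

Direct field accesses (e.g. in book3s_hv_tm.c or booke.c above) are
rewritten to the new names; accessor-based callers need no change.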

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h    | 16 ++++++-------
 arch/powerpc/include/asm/kvm_book3s_64.h | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_booke.h     | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_host.h      |  4 ----
 arch/powerpc/kernel/asm-offsets.c        | 20 ++++++++--------
 arch/powerpc/kvm/book3s_32_mmu.c         |  2 +-
 arch/powerpc/kvm/book3s_hv.c             |  6 ++---
 arch/powerpc/kvm/book3s_hv_tm.c          | 10 ++++----
 arch/powerpc/kvm/book3s_hv_tm_builtin.c  | 10 ++++----
 arch/powerpc/kvm/book3s_pr.c             | 16 ++++++-------
 arch/powerpc/kvm/booke.c                 | 41 +++++++++++++++++---------------
 arch/powerpc/kvm/booke_emulate.c         |  6 ++---
 arch/powerpc/kvm/e500_emulate.c          |  2 +-
 arch/powerpc/kvm/e500_mmu.c              |  2 +-
 14 files changed, 87 insertions(+), 88 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9de4127..d39d608 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -293,42 +293,42 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline u64 kvmppc_get_msr(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index c424e44..dc435a5 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -483,15 +483,15 @@ static inline u64 sanitize_msr(u64 msr)
 static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr  = vcpu->arch.cr_tm;
-	vcpu->arch.xer = vcpu->arch.xer_tm;
-	vcpu->arch.lr  = vcpu->arch.lr_tm;
-	vcpu->arch.ctr = vcpu->arch.ctr_tm;
+	vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+	vcpu->arch.regs.link  = vcpu->arch.lr_tm;
+	vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
 	vcpu->arch.amr = vcpu->arch.amr_tm;
 	vcpu->arch.ppr = vcpu->arch.ppr_tm;
 	vcpu->arch.dscr = vcpu->arch.dscr_tm;
 	vcpu->arch.tar = vcpu->arch.tar_tm;
-	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.regs.gpr, vcpu->arch.gpr_tm,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp  = vcpu->arch.fp_tm;
 	vcpu->arch.vr  = vcpu->arch.vr_tm;
 	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
@@ -500,15 +500,15 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr_tm  = vcpu->arch.cr;
-	vcpu->arch.xer_tm = vcpu->arch.xer;
-	vcpu->arch.lr_tm  = vcpu->arch.lr;
-	vcpu->arch.ctr_tm = vcpu->arch.ctr;
+	vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+	vcpu->arch.lr_tm  = vcpu->arch.regs.link;
+	vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
 	vcpu->arch.amr_tm = vcpu->arch.amr;
 	vcpu->arch.ppr_tm = vcpu->arch.ppr;
 	vcpu->arch.dscr_tm = vcpu->arch.dscr;
 	vcpu->arch.tar_tm = vcpu->arch.tar;
-	memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.gpr_tm, vcpu->arch.regs.gpr,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp_tm  = vcpu->arch.fp;
 	vcpu->arch.vr_tm  = vcpu->arch.vr;
 	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index bc6e29e..d513e3e 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -36,12 +36,12 @@
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-	vcpu->arch.gpr[num] = val;
+	vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	return vcpu->arch.gpr[num];
+	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
@@ -56,12 +56,12 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
@@ -72,32 +72,32 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c93d82..2d87768 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -521,14 +521,10 @@ struct kvm_vcpu_arch {
 	u32 qpr[32];
 #endif
 
-	ulong pc;
-	ulong ctr;
-	ulong lr;
 #ifdef CONFIG_PPC_BOOK3S
 	ulong tar;
 #endif
 
-	ulong xer;
 	u32 cr;
 
 #ifdef CONFIG_PPC_BOOK3S
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e8a78a5..731f7d4 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -431,14 +431,14 @@ int main(void)
 #ifdef CONFIG_ALTIVEC
 	OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
 #endif
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
 #ifdef CONFIG_PPC_BOOK3S
 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
 	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
@@ -693,11 +693,11 @@ int main(void)
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 #else /* CONFIG_PPC_BOOK3S */
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 	OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
 	OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
 	OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1992676..45c8ea4 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -52,7 +52,7 @@
 static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
 #ifdef DEBUG_MMU_PTE_IP
-	return vcpu->arch.pc == DEBUG_MMU_PTE_IP;
+	return vcpu->arch.regs.nip == DEBUG_MMU_PTE_IP;
 #else
 	return true;
 #endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca..5b875ba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -371,13 +371,13 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 
 	pr_err("vcpu %p (%d):\n", vcpu, vcpu->vcpu_id);
 	pr_err("pc  = %.16lx  msr = %.16llx  trap = %x\n",
-	       vcpu->arch.pc, vcpu->arch.shregs.msr, vcpu->arch.trap);
+	       vcpu->arch.regs.nip, vcpu->arch.shregs.msr, vcpu->arch.trap);
 	for (r = 0; r < 16; ++r)
 		pr_err("r%2d = %.16lx  r%d = %.16lx\n",
 		       r, kvmppc_get_gpr(vcpu, r),
 		       r+16, kvmppc_get_gpr(vcpu, r+16));
 	pr_err("ctr = %.16lx  lr  = %.16lx\n",
-	       vcpu->arch.ctr, vcpu->arch.lr);
+	       vcpu->arch.regs.ctr, vcpu->arch.regs.link);
 	pr_err("srr0 = %.16llx srr1 = %.16llx\n",
 	       vcpu->arch.shregs.srr0, vcpu->arch.shregs.srr1);
 	pr_err("sprg0 = %.16llx sprg1 = %.16llx\n",
@@ -385,7 +385,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 	pr_err("sprg2 = %.16llx sprg3 = %.16llx\n",
 	       vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3);
 	pr_err("cr = %.8x  xer = %.16lx  dsisr = %.8x\n",
-	       vcpu->arch.cr, vcpu->arch.xer, vcpu->arch.shregs.dsisr);
+	       vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
 	pr_err("dar = %.16llx\n", vcpu->arch.shregs.dar);
 	pr_err("fault dar = %.16lx dsisr = %.8x\n",
 	       vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index bf710ad..0082850 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -19,7 +19,7 @@ static void emulate_tx_failure(struct kvm_vcpu *vcpu, u64 failure_cause)
 	u64 texasr, tfiar;
 	u64 msr = vcpu->arch.shregs.msr;
 
-	tfiar = vcpu->arch.pc & ~0x3ull;
+	tfiar = vcpu->arch.regs.nip & ~0x3ull;
 	texasr = (failure_cause << 56) | TEXASR_ABORT | TEXASR_FS | TEXASR_EXACT;
 	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr))
 		texasr |= TEXASR_SUSP;
@@ -57,8 +57,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 			       (newmsr & MSR_TM)));
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return RESUME_GUEST;
 
 	case PPC_INST_RFEBB:
@@ -90,8 +90,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.bescr = bescr;
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.ebbrr;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.ebbrr;
 		return RESUME_GUEST;
 
 	case PPC_INST_MTMSRD:
diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
index d98ccfd..b2c7c6f 100644
--- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
@@ -35,8 +35,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 			return 0;
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return 1;
 
 	case PPC_INST_RFEBB:
@@ -58,8 +58,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 		mtspr(SPRN_BESCR, bescr);
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = mfspr(SPRN_EBBRR);
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = mfspr(SPRN_EBBRR);
 		return 1;
 
 	case PPC_INST_MTMSRD:
@@ -103,7 +103,7 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.shregs.msr &= ~MSR_TS_MASK;	/* go to N state */
-	vcpu->arch.pc = vcpu->arch.tfhar;
+	vcpu->arch.regs.nip = vcpu->arch.tfhar;
 	copy_from_checkpoint(vcpu);
 	vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000;
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 899bc9a..67061d3 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -162,10 +162,10 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
 	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
 	svcpu->cr  = vcpu->arch.cr;
-	svcpu->xer = vcpu->arch.xer;
-	svcpu->ctr = vcpu->arch.ctr;
-	svcpu->lr  = vcpu->arch.lr;
-	svcpu->pc  = vcpu->arch.pc;
+	svcpu->xer = vcpu->arch.regs.xer;
+	svcpu->ctr = vcpu->arch.regs.ctr;
+	svcpu->lr  = vcpu->arch.regs.link;
+	svcpu->pc  = vcpu->arch.regs.nip;
 #ifdef CONFIG_PPC_BOOK3S_64
 	svcpu->shadow_fscr = vcpu->arch.shadow_fscr;
 #endif
@@ -209,10 +209,10 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
 	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
 	vcpu->arch.cr  = svcpu->cr;
-	vcpu->arch.xer = svcpu->xer;
-	vcpu->arch.ctr = svcpu->ctr;
-	vcpu->arch.lr  = svcpu->lr;
-	vcpu->arch.pc  = svcpu->pc;
+	vcpu->arch.regs.xer = svcpu->xer;
+	vcpu->arch.regs.ctr = svcpu->ctr;
+	vcpu->arch.regs.link  = svcpu->lr;
+	vcpu->arch.regs.nip  = svcpu->pc;
 	vcpu->arch.shadow_srr1 = svcpu->shadow_srr1;
 	vcpu->arch.fault_dar   = svcpu->fault_dar;
 	vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 6038e2e..05999c2 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -77,8 +77,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
 {
 	int i;
 
-	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.pc, vcpu->arch.shared->msr);
-	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.lr, vcpu->arch.ctr);
+	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.regs.nip,
+			vcpu->arch.shared->msr);
+	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.regs.link,
+			vcpu->arch.regs.ctr);
 	printk("srr0: %08llx srr1: %08llx\n", vcpu->arch.shared->srr0,
 					    vcpu->arch.shared->srr1);
 
@@ -484,24 +486,25 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 	if (allowed) {
 		switch (int_class) {
 		case INT_CLASS_NONCRIT:
-			set_guest_srr(vcpu, vcpu->arch.pc,
+			set_guest_srr(vcpu, vcpu->arch.regs.nip,
 				      vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_CRIT:
-			set_guest_csrr(vcpu, vcpu->arch.pc,
+			set_guest_csrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_DBG:
-			set_guest_dsrr(vcpu, vcpu->arch.pc,
+			set_guest_dsrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_MC:
-			set_guest_mcsrr(vcpu, vcpu->arch.pc,
+			set_guest_mcsrr(vcpu, vcpu->arch.regs.nip,
 					vcpu->arch.shared->msr);
 			break;
 		}
 
-		vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
+		vcpu->arch.regs.nip = vcpu->arch.ivpr |
+					vcpu->arch.ivor[priority];
 		if (update_esr == true)
 			kvmppc_set_esr(vcpu, vcpu->arch.queued_esr);
 		if (update_dear == true)
@@ -819,7 +822,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	case EMULATE_FAIL:
 		printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
-		       __func__, vcpu->arch.pc, vcpu->arch.last_inst);
+		       __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -868,7 +871,7 @@ static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	 */
 	vcpu->arch.dbsr = 0;
 	run->debug.arch.status = 0;
-	run->debug.arch.address = vcpu->arch.pc;
+	run->debug.arch.address = vcpu->arch.regs.nip;
 
 	if (dbsr & (DBSR_IAC1 | DBSR_IAC2 | DBSR_IAC3 | DBSR_IAC4)) {
 		run->debug.arch.status |= KVMPPC_DEBUG_BREAKPOINT;
@@ -964,7 +967,7 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	case EMULATE_FAIL:
 		pr_debug("%s: load instruction from guest address %lx failed\n",
-		       __func__, vcpu->arch.pc);
+		       __func__, vcpu->arch.regs.nip);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -1162,7 +1165,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	case BOOKE_INTERRUPT_SPE_FP_DATA:
 	case BOOKE_INTERRUPT_SPE_FP_ROUND:
 		printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
-		       __func__, exit_nr, vcpu->arch.pc);
+		       __func__, exit_nr, vcpu->arch.regs.nip);
 		run->hw.hardware_exit_reason = exit_nr;
 		r = RESUME_HOST;
 		break;
@@ -1292,7 +1295,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	case BOOKE_INTERRUPT_ITLB_MISS: {
-		unsigned long eaddr = vcpu->arch.pc;
+		unsigned long eaddr = vcpu->arch.regs.nip;
 		gpa_t gpaddr;
 		gfn_t gfn;
 		int gtlb_index;
@@ -1384,7 +1387,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	int i;
 	int r;
 
-	vcpu->arch.pc = 0;
+	vcpu->arch.regs.nip = 0;
 	vcpu->arch.shared->pir = vcpu->vcpu_id;
 	kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
 	kvmppc_set_msr(vcpu, 0);
@@ -1433,10 +1436,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	regs->pc = vcpu->arch.pc;
+	regs->pc = vcpu->arch.regs.nip;
 	regs->cr = kvmppc_get_cr(vcpu);
-	regs->ctr = vcpu->arch.ctr;
-	regs->lr = vcpu->arch.lr;
+	regs->ctr = vcpu->arch.regs.ctr;
+	regs->lr = vcpu->arch.regs.link;
 	regs->xer = kvmppc_get_xer(vcpu);
 	regs->msr = vcpu->arch.shared->msr;
 	regs->srr0 = kvmppc_get_srr0(vcpu);
@@ -1464,10 +1467,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	vcpu->arch.pc = regs->pc;
+	vcpu->arch.regs.nip = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
-	vcpu->arch.ctr = regs->ctr;
-	vcpu->arch.lr = regs->lr;
+	vcpu->arch.regs.ctr = regs->ctr;
+	vcpu->arch.regs.link = regs->lr;
 	kvmppc_set_xer(vcpu, regs->xer);
 	kvmppc_set_msr(vcpu, regs->msr);
 	kvmppc_set_srr0(vcpu, regs->srr0);
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index a82f645..d23e582 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -34,19 +34,19 @@
 
 static void kvmppc_emul_rfi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.shared->srr0;
+	vcpu->arch.regs.nip = vcpu->arch.shared->srr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.shared->srr1);
 }
 
 static void kvmppc_emul_rfdi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.dsrr0;
+	vcpu->arch.regs.nip = vcpu->arch.dsrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.dsrr1);
 }
 
 static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.csrr0;
+	vcpu->arch.regs.nip = vcpu->arch.csrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 8f871fb..3f8189e 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -94,7 +94,7 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	switch (get_oc(inst)) {
 	case EHPRIV_OC_DEBUG:
 		run->exit_reason = KVM_EXIT_DEBUG;
-		run->debug.arch.address = vcpu->arch.pc;
+		run->debug.arch.address = vcpu->arch.regs.nip;
 		run->debug.arch.status = 0;
 		kvmppc_account_exit(vcpu, DEBUG_EXITS);
 		emulated = EMULATE_EXIT_USER;
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index ddbf8f0..24296f4 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -513,7 +513,7 @@ void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu)
 {
 	unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS);
 
-	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.pc, as);
+	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.regs.nip, as);
 }
 
 void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread

* [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
@ 2018-04-25 11:54   ` wei.guo.simon
  0 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch moves the nip/ctr/lr/xer registers from scattered places in
kvm_vcpu_arch into the pt_regs structure.

The cr register is "unsigned long" in pt_regs but u32 in vcpu->arch;
it needs more consideration and may be moved in a later patch.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h    | 16 ++++++-------
 arch/powerpc/include/asm/kvm_book3s_64.h | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_booke.h     | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_host.h      |  4 ----
 arch/powerpc/kernel/asm-offsets.c        | 20 ++++++++--------
 arch/powerpc/kvm/book3s_32_mmu.c         |  2 +-
 arch/powerpc/kvm/book3s_hv.c             |  6 ++---
 arch/powerpc/kvm/book3s_hv_tm.c          | 10 ++++----
 arch/powerpc/kvm/book3s_hv_tm_builtin.c  | 10 ++++----
 arch/powerpc/kvm/book3s_pr.c             | 16 ++++++-------
 arch/powerpc/kvm/booke.c                 | 41 +++++++++++++++++---------------
 arch/powerpc/kvm/booke_emulate.c         |  6 ++---
 arch/powerpc/kvm/e500_emulate.c          |  2 +-
 arch/powerpc/kvm/e500_mmu.c              |  2 +-
 14 files changed, 87 insertions(+), 88 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9de4127..d39d608 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -293,42 +293,42 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline u64 kvmppc_get_msr(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index c424e44..dc435a5 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -483,15 +483,15 @@ static inline u64 sanitize_msr(u64 msr)
 static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr  = vcpu->arch.cr_tm;
-	vcpu->arch.xer = vcpu->arch.xer_tm;
-	vcpu->arch.lr  = vcpu->arch.lr_tm;
-	vcpu->arch.ctr = vcpu->arch.ctr_tm;
+	vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+	vcpu->arch.regs.link  = vcpu->arch.lr_tm;
+	vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
 	vcpu->arch.amr = vcpu->arch.amr_tm;
 	vcpu->arch.ppr = vcpu->arch.ppr_tm;
 	vcpu->arch.dscr = vcpu->arch.dscr_tm;
 	vcpu->arch.tar = vcpu->arch.tar_tm;
-	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.regs.gpr, vcpu->arch.gpr_tm,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp  = vcpu->arch.fp_tm;
 	vcpu->arch.vr  = vcpu->arch.vr_tm;
 	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
@@ -500,15 +500,15 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr_tm  = vcpu->arch.cr;
-	vcpu->arch.xer_tm = vcpu->arch.xer;
-	vcpu->arch.lr_tm  = vcpu->arch.lr;
-	vcpu->arch.ctr_tm = vcpu->arch.ctr;
+	vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+	vcpu->arch.lr_tm  = vcpu->arch.regs.link;
+	vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
 	vcpu->arch.amr_tm = vcpu->arch.amr;
 	vcpu->arch.ppr_tm = vcpu->arch.ppr;
 	vcpu->arch.dscr_tm = vcpu->arch.dscr;
 	vcpu->arch.tar_tm = vcpu->arch.tar;
-	memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.gpr_tm, vcpu->arch.regs.gpr,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp_tm  = vcpu->arch.fp;
 	vcpu->arch.vr_tm  = vcpu->arch.vr;
 	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index bc6e29e..d513e3e 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -36,12 +36,12 @@
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-	vcpu->arch.gpr[num] = val;
+	vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	return vcpu->arch.gpr[num];
+	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
@@ -56,12 +56,12 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
@@ -72,32 +72,32 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c93d82..2d87768 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -521,14 +521,10 @@ struct kvm_vcpu_arch {
 	u32 qpr[32];
 #endif
 
-	ulong pc;
-	ulong ctr;
-	ulong lr;
 #ifdef CONFIG_PPC_BOOK3S
 	ulong tar;
 #endif
 
-	ulong xer;
 	u32 cr;
 
 #ifdef CONFIG_PPC_BOOK3S
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e8a78a5..731f7d4 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -431,14 +431,14 @@ int main(void)
 #ifdef CONFIG_ALTIVEC
 	OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
 #endif
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
 #ifdef CONFIG_PPC_BOOK3S
 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
 	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
@@ -693,11 +693,11 @@ int main(void)
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 #else /* CONFIG_PPC_BOOK3S */
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 	OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
 	OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
 	OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1992676..45c8ea4 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -52,7 +52,7 @@
 static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
 #ifdef DEBUG_MMU_PTE_IP
-	return vcpu->arch.pc == DEBUG_MMU_PTE_IP;
+	return vcpu->arch.regs.nip == DEBUG_MMU_PTE_IP;
 #else
 	return true;
 #endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca..5b875ba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -371,13 +371,13 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 
 	pr_err("vcpu %p (%d):\n", vcpu, vcpu->vcpu_id);
 	pr_err("pc  = %.16lx  msr = %.16llx  trap = %x\n",
-	       vcpu->arch.pc, vcpu->arch.shregs.msr, vcpu->arch.trap);
+	       vcpu->arch.regs.nip, vcpu->arch.shregs.msr, vcpu->arch.trap);
 	for (r = 0; r < 16; ++r)
 		pr_err("r%2d = %.16lx  r%d = %.16lx\n",
 		       r, kvmppc_get_gpr(vcpu, r),
 		       r+16, kvmppc_get_gpr(vcpu, r+16));
 	pr_err("ctr = %.16lx  lr  = %.16lx\n",
-	       vcpu->arch.ctr, vcpu->arch.lr);
+	       vcpu->arch.regs.ctr, vcpu->arch.regs.link);
 	pr_err("srr0 = %.16llx srr1 = %.16llx\n",
 	       vcpu->arch.shregs.srr0, vcpu->arch.shregs.srr1);
 	pr_err("sprg0 = %.16llx sprg1 = %.16llx\n",
@@ -385,7 +385,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 	pr_err("sprg2 = %.16llx sprg3 = %.16llx\n",
 	       vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3);
 	pr_err("cr = %.8x  xer = %.16lx  dsisr = %.8x\n",
-	       vcpu->arch.cr, vcpu->arch.xer, vcpu->arch.shregs.dsisr);
+	       vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
 	pr_err("dar = %.16llx\n", vcpu->arch.shregs.dar);
 	pr_err("fault dar = %.16lx dsisr = %.8x\n",
 	       vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index bf710ad..0082850 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -19,7 +19,7 @@ static void emulate_tx_failure(struct kvm_vcpu *vcpu, u64 failure_cause)
 	u64 texasr, tfiar;
 	u64 msr = vcpu->arch.shregs.msr;
 
-	tfiar = vcpu->arch.pc & ~0x3ull;
+	tfiar = vcpu->arch.regs.nip & ~0x3ull;
 	texasr = (failure_cause << 56) | TEXASR_ABORT | TEXASR_FS | TEXASR_EXACT;
 	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr))
 		texasr |= TEXASR_SUSP;
@@ -57,8 +57,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 			       (newmsr & MSR_TM)));
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return RESUME_GUEST;
 
 	case PPC_INST_RFEBB:
@@ -90,8 +90,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.bescr = bescr;
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.ebbrr;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.ebbrr;
 		return RESUME_GUEST;
 
 	case PPC_INST_MTMSRD:
diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
index d98ccfd..b2c7c6f 100644
--- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
@@ -35,8 +35,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 			return 0;
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return 1;
 
 	case PPC_INST_RFEBB:
@@ -58,8 +58,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 		mtspr(SPRN_BESCR, bescr);
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = mfspr(SPRN_EBBRR);
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = mfspr(SPRN_EBBRR);
 		return 1;
 
 	case PPC_INST_MTMSRD:
@@ -103,7 +103,7 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.shregs.msr &= ~MSR_TS_MASK;	/* go to N state */
-	vcpu->arch.pc = vcpu->arch.tfhar;
+	vcpu->arch.regs.nip = vcpu->arch.tfhar;
 	copy_from_checkpoint(vcpu);
 	vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000;
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 899bc9a..67061d3 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -162,10 +162,10 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
 	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
 	svcpu->cr  = vcpu->arch.cr;
-	svcpu->xer = vcpu->arch.xer;
-	svcpu->ctr = vcpu->arch.ctr;
-	svcpu->lr  = vcpu->arch.lr;
-	svcpu->pc  = vcpu->arch.pc;
+	svcpu->xer = vcpu->arch.regs.xer;
+	svcpu->ctr = vcpu->arch.regs.ctr;
+	svcpu->lr  = vcpu->arch.regs.link;
+	svcpu->pc  = vcpu->arch.regs.nip;
 #ifdef CONFIG_PPC_BOOK3S_64
 	svcpu->shadow_fscr = vcpu->arch.shadow_fscr;
 #endif
@@ -209,10 +209,10 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
 	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
 	vcpu->arch.cr  = svcpu->cr;
-	vcpu->arch.xer = svcpu->xer;
-	vcpu->arch.ctr = svcpu->ctr;
-	vcpu->arch.lr  = svcpu->lr;
-	vcpu->arch.pc  = svcpu->pc;
+	vcpu->arch.regs.xer = svcpu->xer;
+	vcpu->arch.regs.ctr = svcpu->ctr;
+	vcpu->arch.regs.link  = svcpu->lr;
+	vcpu->arch.regs.nip  = svcpu->pc;
 	vcpu->arch.shadow_srr1 = svcpu->shadow_srr1;
 	vcpu->arch.fault_dar   = svcpu->fault_dar;
 	vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 6038e2e..05999c2 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -77,8 +77,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
 {
 	int i;
 
-	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.pc, vcpu->arch.shared->msr);
-	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.lr, vcpu->arch.ctr);
+	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.regs.nip,
+			vcpu->arch.shared->msr);
+	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.regs.link,
+			vcpu->arch.regs.ctr);
 	printk("srr0: %08llx srr1: %08llx\n", vcpu->arch.shared->srr0,
 					    vcpu->arch.shared->srr1);
 
@@ -484,24 +486,25 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 	if (allowed) {
 		switch (int_class) {
 		case INT_CLASS_NONCRIT:
-			set_guest_srr(vcpu, vcpu->arch.pc,
+			set_guest_srr(vcpu, vcpu->arch.regs.nip,
 				      vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_CRIT:
-			set_guest_csrr(vcpu, vcpu->arch.pc,
+			set_guest_csrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_DBG:
-			set_guest_dsrr(vcpu, vcpu->arch.pc,
+			set_guest_dsrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_MC:
-			set_guest_mcsrr(vcpu, vcpu->arch.pc,
+			set_guest_mcsrr(vcpu, vcpu->arch.regs.nip,
 					vcpu->arch.shared->msr);
 			break;
 		}
 
-		vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
+		vcpu->arch.regs.nip = vcpu->arch.ivpr |
+					vcpu->arch.ivor[priority];
 		if (update_esr == true)
 			kvmppc_set_esr(vcpu, vcpu->arch.queued_esr);
 		if (update_dear == true)
@@ -819,7 +822,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	case EMULATE_FAIL:
 		printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
-		       __func__, vcpu->arch.pc, vcpu->arch.last_inst);
+		       __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -868,7 +871,7 @@ static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	 */
 	vcpu->arch.dbsr = 0;
 	run->debug.arch.status = 0;
-	run->debug.arch.address = vcpu->arch.pc;
+	run->debug.arch.address = vcpu->arch.regs.nip;
 
 	if (dbsr & (DBSR_IAC1 | DBSR_IAC2 | DBSR_IAC3 | DBSR_IAC4)) {
 		run->debug.arch.status |= KVMPPC_DEBUG_BREAKPOINT;
@@ -964,7 +967,7 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	case EMULATE_FAIL:
 		pr_debug("%s: load instruction from guest address %lx failed\n",
-		       __func__, vcpu->arch.pc);
+		       __func__, vcpu->arch.regs.nip);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -1162,7 +1165,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	case BOOKE_INTERRUPT_SPE_FP_DATA:
 	case BOOKE_INTERRUPT_SPE_FP_ROUND:
 		printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
-		       __func__, exit_nr, vcpu->arch.pc);
+		       __func__, exit_nr, vcpu->arch.regs.nip);
 		run->hw.hardware_exit_reason = exit_nr;
 		r = RESUME_HOST;
 		break;
@@ -1292,7 +1295,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	case BOOKE_INTERRUPT_ITLB_MISS: {
-		unsigned long eaddr = vcpu->arch.pc;
+		unsigned long eaddr = vcpu->arch.regs.nip;
 		gpa_t gpaddr;
 		gfn_t gfn;
 		int gtlb_index;
@@ -1384,7 +1387,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	int i;
 	int r;
 
-	vcpu->arch.pc = 0;
+	vcpu->arch.regs.nip = 0;
 	vcpu->arch.shared->pir = vcpu->vcpu_id;
 	kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
 	kvmppc_set_msr(vcpu, 0);
@@ -1433,10 +1436,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	regs->pc = vcpu->arch.pc;
+	regs->pc = vcpu->arch.regs.nip;
 	regs->cr = kvmppc_get_cr(vcpu);
-	regs->ctr = vcpu->arch.ctr;
-	regs->lr = vcpu->arch.lr;
+	regs->ctr = vcpu->arch.regs.ctr;
+	regs->lr = vcpu->arch.regs.link;
 	regs->xer = kvmppc_get_xer(vcpu);
 	regs->msr = vcpu->arch.shared->msr;
 	regs->srr0 = kvmppc_get_srr0(vcpu);
@@ -1464,10 +1467,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	vcpu->arch.pc = regs->pc;
+	vcpu->arch.regs.nip = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
-	vcpu->arch.ctr = regs->ctr;
-	vcpu->arch.lr = regs->lr;
+	vcpu->arch.regs.ctr = regs->ctr;
+	vcpu->arch.regs.link = regs->lr;
 	kvmppc_set_xer(vcpu, regs->xer);
 	kvmppc_set_msr(vcpu, regs->msr);
 	kvmppc_set_srr0(vcpu, regs->srr0);
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index a82f645..d23e582 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -34,19 +34,19 @@
 
 static void kvmppc_emul_rfi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.shared->srr0;
+	vcpu->arch.regs.nip = vcpu->arch.shared->srr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.shared->srr1);
 }
 
 static void kvmppc_emul_rfdi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.dsrr0;
+	vcpu->arch.regs.nip = vcpu->arch.dsrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.dsrr1);
 }
 
 static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.csrr0;
+	vcpu->arch.regs.nip = vcpu->arch.csrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 8f871fb..3f8189e 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -94,7 +94,7 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	switch (get_oc(inst)) {
 	case EHPRIV_OC_DEBUG:
 		run->exit_reason = KVM_EXIT_DEBUG;
-		run->debug.arch.address = vcpu->arch.pc;
+		run->debug.arch.address = vcpu->arch.regs.nip;
 		run->debug.arch.status = 0;
 		kvmppc_account_exit(vcpu, DEBUG_EXITS);
 		emulated = EMULATE_EXIT_USER;
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index ddbf8f0..24296f4 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -513,7 +513,7 @@ void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu)
 {
 	unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS);
 
-	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.pc, as);
+	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.regs.nip, as);
 }
 
 void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread

* [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
@ 2018-04-25 11:54   ` wei.guo.simon
  0 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch moves nip/ctr/lr/xer registers from scattered places in
kvm_vcpu_arch to pt_regs structure.

cr register is "unsigned long" in pt_regs and u32 in vcpu->arch.
It will need more consideration and may move in later patches.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_book3s.h    | 16 ++++++-------
 arch/powerpc/include/asm/kvm_book3s_64.h | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_booke.h     | 20 ++++++++--------
 arch/powerpc/include/asm/kvm_host.h      |  4 ----
 arch/powerpc/kernel/asm-offsets.c        | 20 ++++++++--------
 arch/powerpc/kvm/book3s_32_mmu.c         |  2 +-
 arch/powerpc/kvm/book3s_hv.c             |  6 ++---
 arch/powerpc/kvm/book3s_hv_tm.c          | 10 ++++----
 arch/powerpc/kvm/book3s_hv_tm_builtin.c  | 10 ++++----
 arch/powerpc/kvm/book3s_pr.c             | 16 ++++++-------
 arch/powerpc/kvm/booke.c                 | 41 +++++++++++++++++---------------
 arch/powerpc/kvm/booke_emulate.c         |  6 ++---
 arch/powerpc/kvm/e500_emulate.c          |  2 +-
 arch/powerpc/kvm/e500_mmu.c              |  2 +-
 14 files changed, 87 insertions(+), 88 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9de4127..d39d608 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -293,42 +293,42 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline u64 kvmppc_get_msr(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index c424e44..dc435a5 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -483,15 +483,15 @@ static inline u64 sanitize_msr(u64 msr)
 static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr  = vcpu->arch.cr_tm;
-	vcpu->arch.xer = vcpu->arch.xer_tm;
-	vcpu->arch.lr  = vcpu->arch.lr_tm;
-	vcpu->arch.ctr = vcpu->arch.ctr_tm;
+	vcpu->arch.regs.xer = vcpu->arch.xer_tm;
+	vcpu->arch.regs.link  = vcpu->arch.lr_tm;
+	vcpu->arch.regs.ctr = vcpu->arch.ctr_tm;
 	vcpu->arch.amr = vcpu->arch.amr_tm;
 	vcpu->arch.ppr = vcpu->arch.ppr_tm;
 	vcpu->arch.dscr = vcpu->arch.dscr_tm;
 	vcpu->arch.tar = vcpu->arch.tar_tm;
-	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.regs.gpr, vcpu->arch.gpr_tm,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp  = vcpu->arch.fp_tm;
 	vcpu->arch.vr  = vcpu->arch.vr_tm;
 	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
@@ -500,15 +500,15 @@ static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
 static inline void copy_to_checkpoint(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.cr_tm  = vcpu->arch.cr;
-	vcpu->arch.xer_tm = vcpu->arch.xer;
-	vcpu->arch.lr_tm  = vcpu->arch.lr;
-	vcpu->arch.ctr_tm = vcpu->arch.ctr;
+	vcpu->arch.xer_tm = vcpu->arch.regs.xer;
+	vcpu->arch.lr_tm  = vcpu->arch.regs.link;
+	vcpu->arch.ctr_tm = vcpu->arch.regs.ctr;
 	vcpu->arch.amr_tm = vcpu->arch.amr;
 	vcpu->arch.ppr_tm = vcpu->arch.ppr;
 	vcpu->arch.dscr_tm = vcpu->arch.dscr;
 	vcpu->arch.tar_tm = vcpu->arch.tar;
-	memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
-	       sizeof(vcpu->arch.gpr));
+	memcpy(vcpu->arch.gpr_tm, vcpu->arch.regs.gpr,
+	       sizeof(vcpu->arch.regs.gpr));
 	vcpu->arch.fp_tm  = vcpu->arch.fp;
 	vcpu->arch.vr_tm  = vcpu->arch.vr;
 	vcpu->arch.vrsave_tm = vcpu->arch.vrsave;
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index bc6e29e..d513e3e 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -36,12 +36,12 @@
 
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
-	vcpu->arch.gpr[num] = val;
+	vcpu->arch.regs.gpr[num] = val;
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	return vcpu->arch.gpr[num];
+	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
@@ -56,12 +56,12 @@ static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.xer = val;
+	vcpu->arch.regs.xer = val;
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.xer;
+	return vcpu->arch.regs.xer;
 }
 
 static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
@@ -72,32 +72,32 @@ static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.ctr = val;
+	vcpu->arch.regs.ctr = val;
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.ctr;
+	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.lr = val;
+	vcpu->arch.regs.link = val;
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.lr;
+	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
-	vcpu->arch.pc = val;
+	vcpu->arch.regs.nip = val;
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.pc;
+	return vcpu->arch.regs.nip;
 }
 
 static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c93d82..2d87768 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -521,14 +521,10 @@ struct kvm_vcpu_arch {
 	u32 qpr[32];
 #endif
 
-	ulong pc;
-	ulong ctr;
-	ulong lr;
 #ifdef CONFIG_PPC_BOOK3S
 	ulong tar;
 #endif
 
-	ulong xer;
 	u32 cr;
 
 #ifdef CONFIG_PPC_BOOK3S
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e8a78a5..731f7d4 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -431,14 +431,14 @@ int main(void)
 #ifdef CONFIG_ALTIVEC
 	OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
 #endif
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
 #ifdef CONFIG_PPC_BOOK3S
 	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
 #endif
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
 	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
@@ -693,11 +693,11 @@ int main(void)
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 #else /* CONFIG_PPC_BOOK3S */
-	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
-	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
-	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
-	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
-	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
+	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
+	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
+	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
+	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
+	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
 	OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
 	OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
 	OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1992676..45c8ea4 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -52,7 +52,7 @@
 static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
 #ifdef DEBUG_MMU_PTE_IP
-	return vcpu->arch.pc == DEBUG_MMU_PTE_IP;
+	return vcpu->arch.regs.nip == DEBUG_MMU_PTE_IP;
 #else
 	return true;
 #endif
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4d07fca..5b875ba 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -371,13 +371,13 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 
 	pr_err("vcpu %p (%d):\n", vcpu, vcpu->vcpu_id);
 	pr_err("pc  = %.16lx  msr = %.16llx  trap = %x\n",
-	       vcpu->arch.pc, vcpu->arch.shregs.msr, vcpu->arch.trap);
+	       vcpu->arch.regs.nip, vcpu->arch.shregs.msr, vcpu->arch.trap);
 	for (r = 0; r < 16; ++r)
 		pr_err("r%2d = %.16lx  r%d = %.16lx\n",
 		       r, kvmppc_get_gpr(vcpu, r),
 		       r+16, kvmppc_get_gpr(vcpu, r+16));
 	pr_err("ctr = %.16lx  lr  = %.16lx\n",
-	       vcpu->arch.ctr, vcpu->arch.lr);
+	       vcpu->arch.regs.ctr, vcpu->arch.regs.link);
 	pr_err("srr0 = %.16llx srr1 = %.16llx\n",
 	       vcpu->arch.shregs.srr0, vcpu->arch.shregs.srr1);
 	pr_err("sprg0 = %.16llx sprg1 = %.16llx\n",
@@ -385,7 +385,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
 	pr_err("sprg2 = %.16llx sprg3 = %.16llx\n",
 	       vcpu->arch.shregs.sprg2, vcpu->arch.shregs.sprg3);
 	pr_err("cr = %.8x  xer = %.16lx  dsisr = %.8x\n",
-	       vcpu->arch.cr, vcpu->arch.xer, vcpu->arch.shregs.dsisr);
+	       vcpu->arch.cr, vcpu->arch.regs.xer, vcpu->arch.shregs.dsisr);
 	pr_err("dar = %.16llx\n", vcpu->arch.shregs.dar);
 	pr_err("fault dar = %.16lx dsisr = %.8x\n",
 	       vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index bf710ad..0082850 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -19,7 +19,7 @@ static void emulate_tx_failure(struct kvm_vcpu *vcpu, u64 failure_cause)
 	u64 texasr, tfiar;
 	u64 msr = vcpu->arch.shregs.msr;
 
-	tfiar = vcpu->arch.pc & ~0x3ull;
+	tfiar = vcpu->arch.regs.nip & ~0x3ull;
 	texasr = (failure_cause << 56) | TEXASR_ABORT | TEXASR_FS | TEXASR_EXACT;
 	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr))
 		texasr |= TEXASR_SUSP;
@@ -57,8 +57,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 			       (newmsr & MSR_TM)));
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return RESUME_GUEST;
 
 	case PPC_INST_RFEBB:
@@ -90,8 +90,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
 		vcpu->arch.bescr = bescr;
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.ebbrr;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.ebbrr;
 		return RESUME_GUEST;
 
 	case PPC_INST_MTMSRD:
diff --git a/arch/powerpc/kvm/book3s_hv_tm_builtin.c b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
index d98ccfd..b2c7c6f 100644
--- a/arch/powerpc/kvm/book3s_hv_tm_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_tm_builtin.c
@@ -35,8 +35,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 			return 0;
 		newmsr = sanitize_msr(newmsr);
 		vcpu->arch.shregs.msr = newmsr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = vcpu->arch.shregs.srr0;
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
 		return 1;
 
 	case PPC_INST_RFEBB:
@@ -58,8 +58,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 		mtspr(SPRN_BESCR, bescr);
 		msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
 		vcpu->arch.shregs.msr = msr;
-		vcpu->arch.cfar = vcpu->arch.pc - 4;
-		vcpu->arch.pc = mfspr(SPRN_EBBRR);
+		vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+		vcpu->arch.regs.nip = mfspr(SPRN_EBBRR);
 		return 1;
 
 	case PPC_INST_MTMSRD:
@@ -103,7 +103,7 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
 void kvmhv_emulate_tm_rollback(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.shregs.msr &= ~MSR_TS_MASK;	/* go to N state */
-	vcpu->arch.pc = vcpu->arch.tfhar;
+	vcpu->arch.regs.nip = vcpu->arch.tfhar;
 	copy_from_checkpoint(vcpu);
 	vcpu->arch.cr = (vcpu->arch.cr & 0x0fffffff) | 0xa0000000;
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 899bc9a..67061d3 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -162,10 +162,10 @@ void kvmppc_copy_to_svcpu(struct kvm_vcpu *vcpu)
 	svcpu->gpr[12] = vcpu->arch.regs.gpr[12];
 	svcpu->gpr[13] = vcpu->arch.regs.gpr[13];
 	svcpu->cr  = vcpu->arch.cr;
-	svcpu->xer = vcpu->arch.xer;
-	svcpu->ctr = vcpu->arch.ctr;
-	svcpu->lr  = vcpu->arch.lr;
-	svcpu->pc  = vcpu->arch.pc;
+	svcpu->xer = vcpu->arch.regs.xer;
+	svcpu->ctr = vcpu->arch.regs.ctr;
+	svcpu->lr  = vcpu->arch.regs.link;
+	svcpu->pc  = vcpu->arch.regs.nip;
 #ifdef CONFIG_PPC_BOOK3S_64
 	svcpu->shadow_fscr = vcpu->arch.shadow_fscr;
 #endif
@@ -209,10 +209,10 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs.gpr[12] = svcpu->gpr[12];
 	vcpu->arch.regs.gpr[13] = svcpu->gpr[13];
 	vcpu->arch.cr  = svcpu->cr;
-	vcpu->arch.xer = svcpu->xer;
-	vcpu->arch.ctr = svcpu->ctr;
-	vcpu->arch.lr  = svcpu->lr;
-	vcpu->arch.pc  = svcpu->pc;
+	vcpu->arch.regs.xer = svcpu->xer;
+	vcpu->arch.regs.ctr = svcpu->ctr;
+	vcpu->arch.regs.link  = svcpu->lr;
+	vcpu->arch.regs.nip  = svcpu->pc;
 	vcpu->arch.shadow_srr1 = svcpu->shadow_srr1;
 	vcpu->arch.fault_dar   = svcpu->fault_dar;
 	vcpu->arch.fault_dsisr = svcpu->fault_dsisr;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 6038e2e..05999c2 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -77,8 +77,10 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
 {
 	int i;
 
-	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.pc, vcpu->arch.shared->msr);
-	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.lr, vcpu->arch.ctr);
+	printk("pc:   %08lx msr:  %08llx\n", vcpu->arch.regs.nip,
+			vcpu->arch.shared->msr);
+	printk("lr:   %08lx ctr:  %08lx\n", vcpu->arch.regs.link,
+			vcpu->arch.regs.ctr);
 	printk("srr0: %08llx srr1: %08llx\n", vcpu->arch.shared->srr0,
 					    vcpu->arch.shared->srr1);
 
@@ -484,24 +486,25 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 	if (allowed) {
 		switch (int_class) {
 		case INT_CLASS_NONCRIT:
-			set_guest_srr(vcpu, vcpu->arch.pc,
+			set_guest_srr(vcpu, vcpu->arch.regs.nip,
 				      vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_CRIT:
-			set_guest_csrr(vcpu, vcpu->arch.pc,
+			set_guest_csrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_DBG:
-			set_guest_dsrr(vcpu, vcpu->arch.pc,
+			set_guest_dsrr(vcpu, vcpu->arch.regs.nip,
 				       vcpu->arch.shared->msr);
 			break;
 		case INT_CLASS_MC:
-			set_guest_mcsrr(vcpu, vcpu->arch.pc,
+			set_guest_mcsrr(vcpu, vcpu->arch.regs.nip,
 					vcpu->arch.shared->msr);
 			break;
 		}
 
-		vcpu->arch.pc = vcpu->arch.ivpr | vcpu->arch.ivor[priority];
+		vcpu->arch.regs.nip = vcpu->arch.ivpr |
+					vcpu->arch.ivor[priority];
 		if (update_esr == true)
 			kvmppc_set_esr(vcpu, vcpu->arch.queued_esr);
 		if (update_dear == true)
@@ -819,7 +822,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	case EMULATE_FAIL:
 		printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
-		       __func__, vcpu->arch.pc, vcpu->arch.last_inst);
+		       __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -868,7 +871,7 @@ static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	 */
 	vcpu->arch.dbsr = 0;
 	run->debug.arch.status = 0;
-	run->debug.arch.address = vcpu->arch.pc;
+	run->debug.arch.address = vcpu->arch.regs.nip;
 
 	if (dbsr & (DBSR_IAC1 | DBSR_IAC2 | DBSR_IAC3 | DBSR_IAC4)) {
 		run->debug.arch.status |= KVMPPC_DEBUG_BREAKPOINT;
@@ -964,7 +967,7 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	case EMULATE_FAIL:
 		pr_debug("%s: load instruction from guest address %lx failed\n",
-		       __func__, vcpu->arch.pc);
+		       __func__, vcpu->arch.regs.nip);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
 		run->hw.hardware_exit_reason = ~0ULL << 32;
@@ -1162,7 +1165,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	case BOOKE_INTERRUPT_SPE_FP_DATA:
 	case BOOKE_INTERRUPT_SPE_FP_ROUND:
 		printk(KERN_CRIT "%s: unexpected SPE interrupt %u at %08lx\n",
-		       __func__, exit_nr, vcpu->arch.pc);
+		       __func__, exit_nr, vcpu->arch.regs.nip);
 		run->hw.hardware_exit_reason = exit_nr;
 		r = RESUME_HOST;
 		break;
@@ -1292,7 +1295,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	case BOOKE_INTERRUPT_ITLB_MISS: {
-		unsigned long eaddr = vcpu->arch.pc;
+		unsigned long eaddr = vcpu->arch.regs.nip;
 		gpa_t gpaddr;
 		gfn_t gfn;
 		int gtlb_index;
@@ -1384,7 +1387,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	int i;
 	int r;
 
-	vcpu->arch.pc = 0;
+	vcpu->arch.regs.nip = 0;
 	vcpu->arch.shared->pir = vcpu->vcpu_id;
 	kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
 	kvmppc_set_msr(vcpu, 0);
@@ -1433,10 +1436,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	regs->pc = vcpu->arch.pc;
+	regs->pc = vcpu->arch.regs.nip;
 	regs->cr = kvmppc_get_cr(vcpu);
-	regs->ctr = vcpu->arch.ctr;
-	regs->lr = vcpu->arch.lr;
+	regs->ctr = vcpu->arch.regs.ctr;
+	regs->lr = vcpu->arch.regs.link;
 	regs->xer = kvmppc_get_xer(vcpu);
 	regs->msr = vcpu->arch.shared->msr;
 	regs->srr0 = kvmppc_get_srr0(vcpu);
@@ -1464,10 +1467,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	vcpu->arch.pc = regs->pc;
+	vcpu->arch.regs.nip = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
-	vcpu->arch.ctr = regs->ctr;
-	vcpu->arch.lr = regs->lr;
+	vcpu->arch.regs.ctr = regs->ctr;
+	vcpu->arch.regs.link = regs->lr;
 	kvmppc_set_xer(vcpu, regs->xer);
 	kvmppc_set_msr(vcpu, regs->msr);
 	kvmppc_set_srr0(vcpu, regs->srr0);
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index a82f645..d23e582 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -34,19 +34,19 @@
 
 static void kvmppc_emul_rfi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.shared->srr0;
+	vcpu->arch.regs.nip = vcpu->arch.shared->srr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.shared->srr1);
 }
 
 static void kvmppc_emul_rfdi(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.dsrr0;
+	vcpu->arch.regs.nip = vcpu->arch.dsrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.dsrr1);
 }
 
 static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pc = vcpu->arch.csrr0;
+	vcpu->arch.regs.nip = vcpu->arch.csrr0;
 	kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 8f871fb..3f8189e 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -94,7 +94,7 @@ static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	switch (get_oc(inst)) {
 	case EHPRIV_OC_DEBUG:
 		run->exit_reason = KVM_EXIT_DEBUG;
-		run->debug.arch.address = vcpu->arch.pc;
+		run->debug.arch.address = vcpu->arch.regs.nip;
 		run->debug.arch.status = 0;
 		kvmppc_account_exit(vcpu, DEBUG_EXITS);
 		emulated = EMULATE_EXIT_USER;
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index ddbf8f0..24296f4 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -513,7 +513,7 @@ void kvmppc_mmu_itlb_miss(struct kvm_vcpu *vcpu)
 {
 	unsigned int as = !!(vcpu->arch.shared->msr & MSR_IS);
 
-	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.pc, as);
+	kvmppc_e500_deliver_tlb_miss(vcpu, vcpu->arch.regs.nip, as);
 }
 
 void kvmppc_mmu_dtlb_miss(struct kvm_vcpu *vcpu)
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 111+ messages in thread
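
The asm-offsets.c hunks above work because assembly code cannot use C struct member names directly; asm-offsets.c emits each member's byte offset as a symbolic constant for the assembler to consume. A simplified model of what the OFFSET() lines compute (the struct layout here is illustrative, not the real kvm_vcpu):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the real kernel structures. */
struct illus_regs { uint64_t nip, ctr, link, xer; };
struct illus_arch { struct illus_regs regs; };
struct illus_vcpu { struct illus_arch arch; };

/* OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip) boils down to emitting a
 * compile-time offsetof() constant like these for assembly to use. */
static const size_t VCPU_PC_OFF  = offsetof(struct illus_vcpu, arch.regs.nip);
static const size_t VCPU_CTR_OFF = offsetof(struct illus_vcpu, arch.regs.ctr);
```

Because the assembly only ever sees these generated constants, moving pc/ctr/lr/xer into arch.regs changes the computed offsets but requires no edits to the assembly itself, which is why the conversion is confined to asm-offsets.c.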

* [PATCH 03/11] KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX store
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

When KVM emulates a VMX store, it invokes kvmppc_get_vmx_data() to
retrieve the VMX register value. kvmppc_get_vmx_data() checks
mmio_host_swabbed to decide which doubleword of vr[] to use. But
mmio_host_swabbed can be uninitialized during the VMX store path:

kvmppc_emulate_loadstore
	\- kvmppc_handle_store128_by2x64
		\- kvmppc_get_vmx_data

This patch corrects that by using kvmppc_need_byteswap() to choose the
doubleword of vr[], and initializes mmio_host_swabbed to avoid subtle
breakage.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/emulate_loadstore.c | 1 +
 arch/powerpc/kvm/powerpc.c           | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index a382e15..b8a3aef 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -111,6 +111,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmio_sp64_extend = 0;
 	vcpu->arch.mmio_sign_extend = 0;
 	vcpu->arch.mmio_vmx_copy_nums = 0;
+	vcpu->arch.mmio_host_swabbed = 0;
 
 	switch (get_op(inst)) {
 	case 31:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 4e38764..bef27b1 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1374,7 +1374,7 @@ static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
 	if (di > 1)
 		return -1;
 
-	if (vcpu->arch.mmio_host_swabbed)
+	if (kvmppc_need_byteswap(vcpu))
 		di = 1 - di;
 
 	w0 = vrs.u[di * 2];
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread
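
The fix selects the doubleword index from the guest/host endianness relation rather than from a flag that may never have been set on the store path. A minimal sketch of that index selection (function and type names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* A 128-bit VMX register viewed as two 64-bit doublewords. */
typedef union {
    uint64_t u64[2];
    uint32_t u32[4];
} vmx_reg;

/* Pick which doubleword holds the data being stored.  di is the
 * "natural" index; when the guest's byte order differs from the
 * host's, the doublewords appear swapped, so mirror the index. */
static int pick_dword_index(int di, int need_byteswap)
{
    if (di > 1)
        return -1;          /* only two doublewords exist */
    if (need_byteswap)
        di = 1 - di;        /* mirror: 0 <-> 1 */
    return di;
}
```

With an uninitialized flag the mirror step is effectively random, which is exactly the bug the patch removes.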

* [PATCH 04/11] KVM: PPC: fix incorrect element_size for stxsiwx in analyse_instr
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

stxsiwx places the contents of word element 1 of the VSR into word
storage at the EA, so the element size for stxsiwx should be 4.

This patch corrects the size from 8 to 4.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/lib/sstep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 34d68f1..151d484 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -2178,7 +2178,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
 		case 140:	/* stxsiwx */
 			op->reg = rd | ((instr & 1) << 5);
 			op->type = MKOP(STORE_VSX, 0, 4);
-			op->element_size = 8;
+			op->element_size = 4;
 			break;
 
 		case 268:	/* lxvx */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread
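
In the ISA's big-endian element numbering, word element 1 is the least-significant word of doubleword 0, so stxsiwx stores exactly 4 bytes. A hedged sketch of extracting that word (the union layout is illustrative; dw[0] stands for the ISA's doubleword 0):

```c
#include <assert.h>
#include <stdint.h>

/* VSR viewed as two 64-bit doublewords; dw[0] models the ISA's
 * doubleword 0 (the most significant one in BE numbering). */
typedef union {
    uint64_t dw[2];
    uint32_t w[4];
} vsx_reg;

/* stxsiwx stores word element 1: the low 32 bits of doubleword 0. */
static uint32_t stxsiwx_word(const vsx_reg *v)
{
    return (uint32_t)(v->dw[0] & 0xffffffffu);
}
```

An element_size of 8 would have made the emulation treat the operand as a full doubleword, storing twice as much data as the hardware instruction does.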

* [PATCH 05/11] KVM: PPC: add GPR RA update skeleton for MMIO emulation
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

To prepare for reworking the KVM emulation code around analyse_instr(),
add a new mmio_update_ra flag to aid with the GPR RA update required by
update-form instructions.

This patch arms the RA update on the load/store emulation path for both
QEMU MMIO emulation and coalesced MMIO emulation.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_host.h  |  2 ++
 arch/powerpc/kvm/emulate_loadstore.c |  1 +
 arch/powerpc/kvm/powerpc.c           | 17 +++++++++++++++++
 3 files changed, 20 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 2d87768..1c7da00 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -673,6 +673,8 @@ struct kvm_vcpu_arch {
 	u8 mmio_sign_extend;
 	/* conversion between single and double precision */
 	u8 mmio_sp64_extend;
+	u8 mmio_ra; /* GPR as ra to be updated with EA */
+	u8 mmio_update_ra;
 	/*
 	 * Number of simulations for vsx.
 	 * If we use 2*8bytes to simulate 1*16bytes,
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index b8a3aef..90b9692 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -111,6 +111,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmio_sp64_extend = 0;
 	vcpu->arch.mmio_sign_extend = 0;
 	vcpu->arch.mmio_vmx_copy_nums = 0;
+	vcpu->arch.mmio_update_ra = 0;
 	vcpu->arch.mmio_host_swabbed = 0;
 
 	switch (get_op(inst)) {
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index bef27b1..f7fd68f 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1111,6 +1111,12 @@ static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	if (!ret) {
 		kvmppc_complete_mmio_load(vcpu, run);
+		if (vcpu->arch.mmio_update_ra) {
+			kvmppc_set_gpr(vcpu, vcpu->arch.mmio_ra,
+					vcpu->arch.vaddr_accessed);
+			vcpu->arch.mmio_update_ra = 0;
+		}
+
 		vcpu->mmio_needed = 0;
 		return EMULATE_DONE;
 	}
@@ -1215,6 +1221,12 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	if (!ret) {
 		vcpu->mmio_needed = 0;
+		if (vcpu->arch.mmio_update_ra) {
+			kvmppc_set_gpr(vcpu, vcpu->arch.mmio_ra,
+					vcpu->arch.vaddr_accessed);
+			vcpu->arch.mmio_update_ra = 0;
+		}
+
 		return EMULATE_DONE;
 	}
 
@@ -1581,6 +1593,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 			}
 		}
 #endif
+		if (vcpu->arch.mmio_update_ra) {
+			kvmppc_set_gpr(vcpu, vcpu->arch.mmio_ra,
+					vcpu->arch.vaddr_accessed);
+			vcpu->arch.mmio_update_ra = 0;
+		}
 	} else if (vcpu->arch.osi_needed) {
 		u64 *gprs = run->osi.gprs;
 		int i;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread
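
Update-form instructions (lbzu, stwu, and friends) write the effective address back into RA after the access, and for MMIO that write-back must be deferred until the access actually completes. A minimal model of the deferred update implemented by the flags above (struct and function names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t gpr[32];
    uint64_t vaddr_accessed;  /* EA of the in-flight MMIO access */
    uint8_t  mmio_ra;         /* which GPR to update with the EA */
    uint8_t  mmio_update_ra;  /* update-form write-back pending? */
} vcpu_state;

/* Called when decoding e.g. lwzu rD, disp(rA): remember that rA
 * must receive the EA once the MMIO access completes. */
static void arm_ra_update(vcpu_state *v, int ra, uint64_t ea)
{
    v->vaddr_accessed = ea;
    v->mmio_ra = (uint8_t)ra;
    v->mmio_update_ra = 1;
}

/* Called on MMIO completion: perform the deferred RA write-back. */
static void complete_mmio(vcpu_state *v)
{
    if (v->mmio_update_ra) {
        v->gpr[v->mmio_ra] = v->vaddr_accessed;
        v->mmio_update_ra = 0;
    }
}
```

The patch places the equivalent of complete_mmio() at each completion point: in-kernel load completion, in-kernel store completion, and the return-from-userspace path, so the RA update happens exactly once regardless of which path handled the access.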

* [PATCH 06/11] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

Some VSX instructions, such as lxvwsx, splat a word across the VSR. This
patch adds the VSX copy type KVMPPC_VSX_COPY_WORD_LOAD_DUMP to support
them.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_host.h |  1 +
 arch/powerpc/kvm/powerpc.c          | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1c7da00..db7e25d 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -454,6 +454,7 @@ struct mmio_hpte_cache {
 #define KVMPPC_VSX_COPY_WORD		1
 #define KVMPPC_VSX_COPY_DWORD		2
 #define KVMPPC_VSX_COPY_DWORD_LOAD_DUMP	3
+#define KVMPPC_VSX_COPY_WORD_LOAD_DUMP	4
 
 struct openpic;
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index f7fd68f..17f0315 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -907,6 +907,26 @@ static inline void kvmppc_set_vsr_dword_dump(struct kvm_vcpu *vcpu,
 	}
 }
 
+static inline void kvmppc_set_vsr_word_dump(struct kvm_vcpu *vcpu,
+	u32 gpr)
+{
+	union kvmppc_one_reg val;
+	int index = vcpu->arch.io_gpr & KVM_MMIO_REG_MASK;
+
+	if (vcpu->arch.mmio_vsx_tx_sx_enabled) {
+		val.vsx32val[0] = gpr;
+		val.vsx32val[1] = gpr;
+		val.vsx32val[2] = gpr;
+		val.vsx32val[3] = gpr;
+		VCPU_VSX_VR(vcpu, index) = val.vval;
+	} else {
+		val.vsx32val[0] = gpr;
+		val.vsx32val[1] = gpr;
+		VCPU_VSX_FPR(vcpu, index, 0) = val.vsxval[0];
+		VCPU_VSX_FPR(vcpu, index, 1) = val.vsxval[0];
+	}
+}
+
 static inline void kvmppc_set_vsr_word(struct kvm_vcpu *vcpu,
 	u32 gpr32)
 {
@@ -1061,6 +1081,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		else if (vcpu->arch.mmio_vsx_copy_type ==
 				KVMPPC_VSX_COPY_DWORD_LOAD_DUMP)
 			kvmppc_set_vsr_dword_dump(vcpu, gpr);
+		else if (vcpu->arch.mmio_vsx_copy_type ==
+				KVMPPC_VSX_COPY_WORD_LOAD_DUMP)
+			kvmppc_set_vsr_word_dump(vcpu, gpr);
 		break;
 #endif
 #ifdef CONFIG_ALTIVEC
-- 
1.8.3.1

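The splat performed by kvmppc_set_vsr_word_dump() above can be modelled in isolation: one 32-bit value loaded from MMIO is replicated into every word lane of a 128-bit register, as lxvwsx requires. A minimal sketch, using a made-up union in place of the kernel's kvmppc_one_reg:

```c
/* Sketch of the WORD_LOAD_DUMP copy: a single 32-bit value is
 * duplicated ("splatted") across all four word lanes of a 128-bit
 * vector-scalar register. The union here is illustrative only. */
#include <stdint.h>
#include <assert.h>

union vsr128 {
	uint32_t w[4];   /* four word lanes */
	uint64_t d[2];   /* two doubleword lanes */
};

static union vsr128 splat_word(uint32_t gpr)
{
	union vsr128 v;
	int i;

	for (i = 0; i < 4; i++)
		v.w[i] = gpr;	/* every word lane gets the same value */
	return v;
}
```

This mirrors the patch's two cases: the full-VSR path fills all four word lanes, while the FPR path stores the same doubleword twice.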
^ permalink raw reply related	[flat|nested] 111+ messages in thread

* [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
properties exported by analyse_instr() and invokes
kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.

It also moves CACHEOP type handling into the skeleton.

instruction_type within sstep.h is renamed to avoid a conflict with
kvm_ppc.h.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/sstep.h     |   2 +-
 arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
 2 files changed, 51 insertions(+), 233 deletions(-)

diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index ab9d849..0a1a312 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -23,7 +23,7 @@
 #define IS_RFID(instr)		(((instr) & 0xfc0007fe) == 0x4c000024)
 #define IS_RFI(instr)		(((instr) & 0xfc0007fe) == 0x4c000064)
 
-enum instruction_type {
+enum analyse_instruction_type {
 	COMPUTE,		/* arith/logical/CR op, etc. */
 	LOAD,			/* load and store types need to be contiguous */
 	LOAD_MULTI,
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 90b9692..aaaf872 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -31,9 +31,12 @@
 #include <asm/kvm_ppc.h>
 #include <asm/disassemble.h>
 #include <asm/ppc-opcode.h>
+#include <asm/sstep.h>
 #include "timing.h"
 #include "trace.h"
 
+int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+		  unsigned int instr);
 #ifdef CONFIG_PPC_FPU
 static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
 {
@@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 	u32 inst;
 	int ra, rs, rt;
-	enum emulation_result emulated;
+	enum emulation_result emulated = EMULATE_FAIL;
 	int advance = 1;
+	struct instruction_op op;
 
 	/* this default type might be overwritten by subcategories */
 	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
@@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmio_update_ra = 0;
 	vcpu->arch.mmio_host_swabbed = 0;
 
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-		case OP_31_XOP_LWZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LWZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LBZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-			break;
+	emulated = EMULATE_FAIL;
+	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
+	vcpu->arch.regs.ccr = vcpu->arch.cr;
+	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
+		int type = op.type & INSTR_TYPE_MASK;
+		int size = GETSIZE(op.type);
 
-		case OP_31_XOP_LBZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+		switch (type) {
+		case LOAD:  {
+			int instr_byte_swap = op.type & BYTEREV;
 
-		case OP_31_XOP_STDX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 1);
-			break;
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
 
-		case OP_31_XOP_STDUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STWX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 1);
-			break;
-
-		case OP_31_XOP_STWUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STBX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 1, 1);
-			break;
-
-		case OP_31_XOP_STBUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 1, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LHAX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-			break;
-
-		case OP_31_XOP_LHAUX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+			if (op.type & SIGNEXT)
+				emulated = kvmppc_handle_loads(run, vcpu,
+						op.reg, size, !instr_byte_swap);
+			else
+				emulated = kvmppc_handle_load(run, vcpu,
+						op.reg, size, !instr_byte_swap);
 
-		case OP_31_XOP_LHZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
 			break;
-
-		case OP_31_XOP_LHZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STHX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 1);
-			break;
-
-		case OP_31_XOP_STHUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_DCBST:
-		case OP_31_XOP_DCBF:
-		case OP_31_XOP_DCBI:
+		}
+		case STORE:
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
+
+			/* If byte reversal is needed, op.val has already
+			 * been byte-reversed by analyse_instr().
+			 */
+			emulated = kvmppc_handle_store(run, vcpu, op.val,
+					size, 1);
+			break;
+		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
 			 * emulated DMA either goes through the dcache as
 			 * normal writes, or the host kernel has handled dcache
-			 * coherence. */
-			break;
-
-		case OP_31_XOP_LWBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
-			break;
-
-		case OP_31_XOP_STWBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 0);
+			 * coherence.
+			 */
+			emulated = EMULATE_DONE;
 			break;
-
-		case OP_31_XOP_LHBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
-			break;
-
-		case OP_31_XOP_STHBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 0);
-			break;
-
-		case OP_31_XOP_LDBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 0);
-			break;
-
-		case OP_31_XOP_STDBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 0);
-			break;
-
-		case OP_31_XOP_LDX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
+		default:
 			break;
+		}
+	}
 
-		case OP_31_XOP_LDUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
 
-		case OP_31_XOP_LWAX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LWAUX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+	if (emulated == EMULATE_DONE)
+		goto out;
 
+	switch (get_op(inst)) {
+	case 31:
+		switch (get_xop(inst)) {
 #ifdef CONFIG_PPC_FPU
 		case OP_31_XOP_LFSX:
 			if (kvmppc_check_fp_disabled(vcpu))
@@ -503,10 +427,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 		break;
 
-	case OP_LWZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-		break;
-
 #ifdef CONFIG_PPC_FPU
 	case OP_STFS:
 		if (kvmppc_check_fp_disabled(vcpu))
@@ -543,110 +463,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	                               8, 1);
 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
 		break;
-#endif
-
-	case OP_LD:
-		rt = get_rt(inst);
-		switch (inst & 3) {
-		case 0:	/* ld */
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			break;
-		case 1: /* ldu */
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-		case 2:	/* lwa */
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			break;
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-
-	case OP_LWZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LBZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-		break;
-
-	case OP_LBZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_STW:
-		emulated = kvmppc_handle_store(run, vcpu,
-					       kvmppc_get_gpr(vcpu, rs),
-		                               4, 1);
-		break;
-
-	case OP_STD:
-		rs = get_rs(inst);
-		switch (inst & 3) {
-		case 0:	/* std */
-			emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 8, 1);
-			break;
-		case 1: /* stdu */
-			emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-
-	case OP_STWU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_STB:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 1, 1);
-		break;
-
-	case OP_STBU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 1, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LHZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-		break;
-
-	case OP_LHZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LHA:
-		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-		break;
-
-	case OP_LHAU:
-		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
 
-	case OP_STH:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 2, 1);
-		break;
-
-	case OP_STHU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-#ifdef CONFIG_PPC_FPU
 	case OP_LFS:
 		if (kvmppc_check_fp_disabled(vcpu))
 			return EMULATE_DONE;
@@ -685,6 +502,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		break;
 	}
 
+out:
 	if (emulated == EMULATE_FAIL) {
 		advance = 0;
 		kvmppc_core_queue_program(vcpu, 0);
-- 
1.8.3.1

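The new skeleton above keys everything off the op.type word returned by analyse_instr(): the low bits select the instruction class (LOAD, STORE, CACHEOP, ...) while flag bits such as SIGNEXT, UPDATE and BYTEREV pick the handler variant, replacing the long per-opcode switch. A sketch of that flag-driven dispatch — the mask and flag values below are invented for illustration and do not match the real sstep.h definitions:

```c
/* Illustrative (not the kernel's) model of the dispatch: a type word
 * packs an instruction class in its low bits and modifier flags in
 * its high bits; the emulation skeleton inspects only those fields.
 * All constants here are made up for the sketch. */
#include <stdint.h>
#include <assert.h>

#define INSTR_TYPE_MASK	0x1f
#define OP_LOAD		1
#define OP_STORE	2
#define F_SIGNEXT	0x20
#define F_UPDATE	0x40
#define F_BYTEREV	0x80

enum handler { H_LOAD, H_LOADS, H_STORE, H_NONE };

static enum handler dispatch(uint32_t type)
{
	switch (type & INSTR_TYPE_MASK) {
	case OP_LOAD:
		/* sign-extending loads go to the "loads" handler,
		 * mirroring kvmppc_handle_loads() vs kvmppc_handle_load() */
		return (type & F_SIGNEXT) ? H_LOADS : H_LOAD;
	case OP_STORE:
		return H_STORE;
	default:
		return H_NONE;
	}
}
```

One class handler per case, parameterized by flags, is what lets the patch delete dozens of near-identical OP_31_XOP_* cases.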
^ permalink raw reply related	[flat|nested] 111+ messages in thread

* [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input
@ 2018-04-25 11:54   ` wei.guo.simon
  0 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
properties exported by analyse_instr() and invokes
kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.

It also moves CACHEOP type handling into the skeleton.

instruction_type within sstep.h is renamed to avoid a conflict with
kvm_ppc.h.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/sstep.h     |   2 +-
 arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
 2 files changed, 51 insertions(+), 233 deletions(-)

diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index ab9d849..0a1a312 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -23,7 +23,7 @@
 #define IS_RFID(instr)		(((instr) & 0xfc0007fe) == 0x4c000024)
 #define IS_RFI(instr)		(((instr) & 0xfc0007fe) == 0x4c000064)
 
-enum instruction_type {
+enum analyse_instruction_type {
 	COMPUTE,		/* arith/logical/CR op, etc. */
 	LOAD,			/* load and store types need to be contiguous */
 	LOAD_MULTI,
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 90b9692..aaaf872 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -31,9 +31,12 @@
 #include <asm/kvm_ppc.h>
 #include <asm/disassemble.h>
 #include <asm/ppc-opcode.h>
+#include <asm/sstep.h>
 #include "timing.h"
 #include "trace.h"
 
+int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
+		  unsigned int instr);
 #ifdef CONFIG_PPC_FPU
 static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
 {
@@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 	u32 inst;
 	int ra, rs, rt;
-	enum emulation_result emulated;
+	enum emulation_result emulated = EMULATE_FAIL;
 	int advance = 1;
+	struct instruction_op op;
 
 	/* this default type might be overwritten by subcategories */
 	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
@@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmio_update_ra = 0;
 	vcpu->arch.mmio_host_swabbed = 0;
 
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-		case OP_31_XOP_LWZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LWZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LBZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-			break;
+	emulated = EMULATE_FAIL;
+	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
+	vcpu->arch.regs.ccr = vcpu->arch.cr;
+	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
+		int type = op.type & INSTR_TYPE_MASK;
+		int size = GETSIZE(op.type);
 
-		case OP_31_XOP_LBZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+		switch (type) {
+		case LOAD:  {
+			int instr_byte_swap = op.type & BYTEREV;
 
-		case OP_31_XOP_STDX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 1);
-			break;
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
 
-		case OP_31_XOP_STDUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STWX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 1);
-			break;
-
-		case OP_31_XOP_STWUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STBX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 1, 1);
-			break;
-
-		case OP_31_XOP_STBUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 1, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LHAX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-			break;
-
-		case OP_31_XOP_LHAUX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+			if (op.type & SIGNEXT)
+				emulated = kvmppc_handle_loads(run, vcpu,
+						op.reg, size, !instr_byte_swap);
+			else
+				emulated = kvmppc_handle_load(run, vcpu,
+						op.reg, size, !instr_byte_swap);
 
-		case OP_31_XOP_LHZX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
 			break;
-
-		case OP_31_XOP_LHZUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STHX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 1);
-			break;
-
-		case OP_31_XOP_STHUX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_DCBST:
-		case OP_31_XOP_DCBF:
-		case OP_31_XOP_DCBI:
+		}
+		case STORE:
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
+
+			/* if a byte reverse is needed, op.val has already
+			 * been byte-reversed by analyse_instr().
+			 */
+			emulated = kvmppc_handle_store(run, vcpu, op.val,
+					size, 1);
+			break;
+		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
 			 * emulated DMA either goes through the dcache as
 			 * normal writes, or the host kernel has handled dcache
-			 * coherence. */
-			break;
-
-		case OP_31_XOP_LWBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 0);
-			break;
-
-		case OP_31_XOP_STWBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 4, 0);
+			 * coherence.
+			 */
+			emulated = EMULATE_DONE;
 			break;
-
-		case OP_31_XOP_LHBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 0);
-			break;
-
-		case OP_31_XOP_STHBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 2, 0);
-			break;
-
-		case OP_31_XOP_LDBRX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 0);
-			break;
-
-		case OP_31_XOP_STDBRX:
-			emulated = kvmppc_handle_store(run, vcpu,
-					kvmppc_get_gpr(vcpu, rs), 8, 0);
-			break;
-
-		case OP_31_XOP_LDX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
+		default:
 			break;
+		}
+	}
 
-		case OP_31_XOP_LDUX:
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
 
-		case OP_31_XOP_LWAX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LWAUX:
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
+	if (emulated == EMULATE_DONE)
+		goto out;
 
+	switch (get_op(inst)) {
+	case 31:
+		switch (get_xop(inst)) {
 #ifdef CONFIG_PPC_FPU
 		case OP_31_XOP_LFSX:
 			if (kvmppc_check_fp_disabled(vcpu))
@@ -503,10 +427,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 		break;
 
-	case OP_LWZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-		break;
-
 #ifdef CONFIG_PPC_FPU
 	case OP_STFS:
 		if (kvmppc_check_fp_disabled(vcpu))
@@ -543,110 +463,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	                               8, 1);
 		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
 		break;
-#endif
-
-	case OP_LD:
-		rt = get_rt(inst);
-		switch (inst & 3) {
-		case 0:	/* ld */
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			break;
-		case 1: /* ldu */
-			emulated = kvmppc_handle_load(run, vcpu, rt, 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-		case 2:	/* lwa */
-			emulated = kvmppc_handle_loads(run, vcpu, rt, 4, 1);
-			break;
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-
-	case OP_LWZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LBZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-		break;
-
-	case OP_LBZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_STW:
-		emulated = kvmppc_handle_store(run, vcpu,
-					       kvmppc_get_gpr(vcpu, rs),
-		                               4, 1);
-		break;
-
-	case OP_STD:
-		rs = get_rs(inst);
-		switch (inst & 3) {
-		case 0:	/* std */
-			emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 8, 1);
-			break;
-		case 1: /* stdu */
-			emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-
-	case OP_STWU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_STB:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 1, 1);
-		break;
-
-	case OP_STBU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 1, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LHZ:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-		break;
-
-	case OP_LHZU:
-		emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LHA:
-		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-		break;
-
-	case OP_LHAU:
-		emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
 
-	case OP_STH:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 2, 1);
-		break;
-
-	case OP_STHU:
-		emulated = kvmppc_handle_store(run, vcpu,
-				kvmppc_get_gpr(vcpu, rs), 2, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-#ifdef CONFIG_PPC_FPU
 	case OP_LFS:
 		if (kvmppc_check_fp_disabled(vcpu))
 			return EMULATE_DONE;
@@ -685,6 +502,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		break;
 	}
 
+out:
 	if (emulated == EMULATE_FAIL) {
 		advance = 0;
 		kvmppc_core_queue_program(vcpu, 0);
-- 
1.8.3.1



* [PATCH 08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

Currently HV KVM saves the math regs (FP/VEC/VSX) when trapping into
the host, but PR KVM only saves them when the QEMU task is switched
off the CPU.

To emulate an FP/VEC/VSX load, PR KVM needs to flush the math regs
first so that it can then update the saved VCPU FPR/VEC/VSX area
correctly.

This patch adds a giveup_ext() hook to the KVM ops (an empty one for
HV KVM) so that kvmppc_complete_mmio_load() can invoke it to flush
the math regs accordingly.

The math regs flush is also necessary for STORE, which will be
covered by a later patch in this series.

Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_ppc.h | 1 +
 arch/powerpc/kvm/book3s_hv.c       | 5 +++++
 arch/powerpc/kvm/book3s_pr.c       | 1 +
 arch/powerpc/kvm/powerpc.c         | 9 +++++++++
 4 files changed, 16 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index abe7032..b265538 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -324,6 +324,7 @@ struct kvmppc_ops {
 	int (*get_rmmu_info)(struct kvm *kvm, struct kvm_ppc_rmmu_info *info);
 	int (*set_smt_mode)(struct kvm *kvm, unsigned long mode,
 			    unsigned long flags);
+	void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 5b875ba..7eb5507 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
 	return err;
 }
 
+static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
+{
+}
+
 static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 {
 	if (vpa->pinned_addr)
@@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
 	.configure_mmu = kvmhv_configure_mmu,
 	.get_rmmu_info = kvmhv_get_rmmu_info,
 	.set_smt_mode = kvmhv_set_smt_mode,
+	.giveup_ext = kvmhv_giveup_ext,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 67061d3..be26636 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1782,6 +1782,7 @@ static long kvm_arch_vm_ioctl_pr(struct file *filp,
 #ifdef CONFIG_PPC_BOOK3S_64
 	.hcall_implemented = kvmppc_hcall_impl_pr,
 #endif
+	.giveup_ext = kvmppc_giveup_ext,
 };
 
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 17f0315..e724601 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
 		break;
 	case KVM_MMIO_REG_FPR:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
+
 		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
 		break;
 #ifdef CONFIG_PPC_BOOK3S
@@ -1074,6 +1077,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_VSX
 	case KVM_MMIO_REG_VSX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VSX);
+
 		if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_DWORD)
 			kvmppc_set_vsr_dword(vcpu, gpr);
 		else if (vcpu->arch.mmio_vsx_copy_type == KVMPPC_VSX_COPY_WORD)
@@ -1088,6 +1094,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 #endif
 #ifdef CONFIG_ALTIVEC
 	case KVM_MMIO_REG_VMX:
+		if (!is_kvmppc_hv_enabled(vcpu->kvm))
+			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_VEC);
+
 		kvmppc_set_vmx_dword(vcpu, gpr);
 		break;
 #endif
-- 
1.8.3.1



* [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_instr() input
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs LOAD_FP/STORE_FP instruction MMIO emulation with
analyse_instr() input. It utilizes the FPCONV/UPDATE properties exported
by analyse_instr() and invokes kvmppc_handle_load(s)/kvmppc_handle_store()
accordingly.

The FP regs need to be flushed first so that the correct FP reg vals can
be read from vcpu->arch.fpr.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/emulate_loadstore.c | 199 ++++++++---------------------------
 1 file changed, 42 insertions(+), 157 deletions(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index aaaf872..2dbdf9a 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -143,6 +143,23 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 
 			break;
 		}
+#ifdef CONFIG_PPC_FPU
+		case LOAD_FP:
+			if (kvmppc_check_fp_disabled(vcpu))
+				return EMULATE_DONE;
+
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
+
+			if (op.type & FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			emulated = kvmppc_handle_load(run, vcpu,
+					KVM_MMIO_REG_FPR|op.reg, size, 1);
+			break;
+#endif
 		case STORE:
 			if (op.type & UPDATE) {
 				vcpu->arch.mmio_ra = op.update_reg;
@@ -155,6 +172,31 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			emulated = kvmppc_handle_store(run, vcpu, op.val,
 					size, 1);
 			break;
+#ifdef CONFIG_PPC_FPU
+		case STORE_FP:
+			if (kvmppc_check_fp_disabled(vcpu))
+				return EMULATE_DONE;
+
+			/* if it is PR KVM, the FP/VEC/VSX registers need to
+			 * be flushed so that kvmppc_handle_store() can read
+			 * the actual FP vals from vcpu->arch.
+			 */
+			if (!is_kvmppc_hv_enabled(vcpu->kvm))
+				vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+						MSR_FP);
+
+			if (op.type & UPDATE) {
+				vcpu->arch.mmio_ra = op.update_reg;
+				vcpu->arch.mmio_update_ra = 1;
+			}
+
+			if (op.type & FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			emulated = kvmppc_handle_store(run, vcpu,
+					VCPU_FPR(vcpu, op.reg), size, 1);
+			break;
+#endif
 		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
@@ -176,93 +218,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 	switch (get_op(inst)) {
 	case 31:
 		switch (get_xop(inst)) {
-#ifdef CONFIG_PPC_FPU
-		case OP_31_XOP_LFSX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_load(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LFSUX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_load(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LFDX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_load(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 8, 1);
-			break;
-
-		case OP_31_XOP_LFDUX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_load(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_LFIWAX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_loads(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 4, 1);
-			break;
-
-		case OP_31_XOP_LFIWZX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_load(run, vcpu,
-				KVM_MMIO_REG_FPR|rt, 4, 1);
-			break;
-
-		case OP_31_XOP_STFSX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_store(run, vcpu,
-				VCPU_FPR(vcpu, rs), 4, 1);
-			break;
-
-		case OP_31_XOP_STFSUX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_store(run, vcpu,
-				VCPU_FPR(vcpu, rs), 4, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STFDX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_store(run, vcpu,
-				VCPU_FPR(vcpu, rs), 8, 1);
-			break;
-
-		case OP_31_XOP_STFDUX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_store(run, vcpu,
-				VCPU_FPR(vcpu, rs), 8, 1);
-			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-			break;
-
-		case OP_31_XOP_STFIWX:
-			if (kvmppc_check_fp_disabled(vcpu))
-				return EMULATE_DONE;
-			emulated = kvmppc_handle_store(run, vcpu,
-				VCPU_FPR(vcpu, rs), 4, 1);
-			break;
-#endif
-
 #ifdef CONFIG_VSX
 		case OP_31_XOP_LXSDX:
 			if (kvmppc_check_vsx_disabled(vcpu))
@@ -427,76 +382,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 		break;
 
-#ifdef CONFIG_PPC_FPU
-	case OP_STFS:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		vcpu->arch.mmio_sp64_extend = 1;
-		emulated = kvmppc_handle_store(run, vcpu,
-			VCPU_FPR(vcpu, rs),
-			4, 1);
-		break;
-
-	case OP_STFSU:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		vcpu->arch.mmio_sp64_extend = 1;
-		emulated = kvmppc_handle_store(run, vcpu,
-			VCPU_FPR(vcpu, rs),
-			4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_STFD:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		emulated = kvmppc_handle_store(run, vcpu,
-			VCPU_FPR(vcpu, rs),
-	                               8, 1);
-		break;
-
-	case OP_STFDU:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		emulated = kvmppc_handle_store(run, vcpu,
-			VCPU_FPR(vcpu, rs),
-	                               8, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LFS:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		vcpu->arch.mmio_sp64_extend = 1;
-		emulated = kvmppc_handle_load(run, vcpu,
-			KVM_MMIO_REG_FPR|rt, 4, 1);
-		break;
-
-	case OP_LFSU:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		vcpu->arch.mmio_sp64_extend = 1;
-		emulated = kvmppc_handle_load(run, vcpu,
-			KVM_MMIO_REG_FPR|rt, 4, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-
-	case OP_LFD:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		emulated = kvmppc_handle_load(run, vcpu,
-			KVM_MMIO_REG_FPR|rt, 8, 1);
-		break;
-
-	case OP_LFDU:
-		if (kvmppc_check_fp_disabled(vcpu))
-			return EMULATE_DONE;
-		emulated = kvmppc_handle_load(run, vcpu,
-			KVM_MMIO_REG_FPR|rt, 8, 1);
-		kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
-		break;
-#endif
-
 	default:
 		emulated = EMULATE_FAIL;
 		break;
-- 
1.8.3.1


-		break;
-#endif
-
 	default:
 		emulated = EMULATE_FAIL;
 		break;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread


* [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_instr() input
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs LOAD_VMX/STORE_VMX instruction MMIO emulation with
analyse_instr() input. When emulating a store, the VMX register needs to be
flushed first so that the correct register value can be retrieved before it
is written to I/O memory.
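The lvx/stvx cases below mask the effective address down to a size-aligned
boundary before the access, generalizing the old hard-coded `~0xFULL` mask.
A minimal standalone sketch of that masking (a hypothetical helper, not the
kernel code itself; it assumes `size` is a power of two, as it is for
lvx/stvx):

```c
#include <assert.h>

/* Clear the low bits of addr so the access is size-aligned, as the
 * LOAD_VMX/STORE_VMX cases do for vaddr_accessed/paddr_accessed.
 * size must be a power of two (16 for lvx/stvx). */
static unsigned long vmx_align(unsigned long addr, unsigned long size)
{
	return addr & ~(size - 1UL);
}
```

For a 16-byte lvx this reproduces the previous `&= ~0xFULL` behavior; for
the `size <= 8` path it aligns to the element size instead.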

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/include/asm/kvm_ppc.h   |  1 +
 arch/powerpc/kvm/emulate_loadstore.c | 73 +++++++++++++++++++++++++-----------
 arch/powerpc/kvm/powerpc.c           |  2 +-
 3 files changed, 53 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b265538..eeb00de 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -83,6 +83,7 @@ extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			int is_default_endian, int mmio_sign_extend);
 extern int kvmppc_handle_load128_by2x64(struct kvm_run *run,
 		struct kvm_vcpu *vcpu, unsigned int rt, int is_default_endian);
+extern int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val);
 extern int kvmppc_handle_store128_by2x64(struct kvm_run *run,
 		struct kvm_vcpu *vcpu, unsigned int rs, int is_default_endian);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 2dbdf9a..0bfee2f 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -160,6 +160,27 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 					KVM_MMIO_REG_FPR|op.reg, size, 1);
 			break;
 #endif
+#ifdef CONFIG_ALTIVEC
+		case LOAD_VMX:
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+
+			/* VMX access will need to be size aligned */
+			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
+			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
+
+			if (size == 16) {
+				vcpu->arch.mmio_vmx_copy_nums = 2;
+				emulated = kvmppc_handle_load128_by2x64(run,
+						vcpu, KVM_MMIO_REG_VMX|op.reg,
+						1);
+			} else if (size <= 8)
+				emulated = kvmppc_handle_load(run, vcpu,
+						KVM_MMIO_REG_VMX|op.reg,
+						size, 1);
+
+			break;
+#endif
 		case STORE:
 			if (op.type & UPDATE) {
 				vcpu->arch.mmio_ra = op.update_reg;
@@ -197,6 +218,36 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 					VCPU_FPR(vcpu, op.reg), size, 1);
 			break;
 #endif
+#ifdef CONFIG_ALTIVEC
+		case STORE_VMX:
+			if (kvmppc_check_altivec_disabled(vcpu))
+				return EMULATE_DONE;
+
+			/* VMX access will need to be size aligned */
+			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
+			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
+
+			/* if it is PR KVM, the FP/VEC/VSX registers need to
+			 * be flushed so that kvmppc_handle_store() can read
+			 * actual VMX vals from vcpu->arch.
+			 */
+			if (!is_kvmppc_hv_enabled(vcpu->kvm))
+				vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+						MSR_VEC);
+
+			if (size == 16) {
+				vcpu->arch.mmio_vmx_copy_nums = 2;
+				emulated = kvmppc_handle_store128_by2x64(run,
+						vcpu, op.reg, 1);
+			} else if (size <= 8) {
+				u64 val;
+
+				kvmppc_get_vmx_data(vcpu, op.reg, &val);
+				emulated = kvmppc_handle_store(run, vcpu,
+						val, size, 1);
+			}
+			break;
+#endif
 		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
@@ -354,28 +405,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			break;
 #endif /* CONFIG_VSX */
 
-#ifdef CONFIG_ALTIVEC
-		case OP_31_XOP_LVX:
-			if (kvmppc_check_altivec_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.vaddr_accessed &= ~0xFULL;
-			vcpu->arch.paddr_accessed &= ~0xFULL;
-			vcpu->arch.mmio_vmx_copy_nums = 2;
-			emulated = kvmppc_handle_load128_by2x64(run, vcpu,
-					KVM_MMIO_REG_VMX|rt, 1);
-			break;
-
-		case OP_31_XOP_STVX:
-			if (kvmppc_check_altivec_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.vaddr_accessed &= ~0xFULL;
-			vcpu->arch.paddr_accessed &= ~0xFULL;
-			vcpu->arch.mmio_vmx_copy_nums = 2;
-			emulated = kvmppc_handle_store128_by2x64(run, vcpu,
-					rs, 1);
-			break;
-#endif /* CONFIG_ALTIVEC */
-
 		default:
 			emulated = EMULATE_FAIL;
 			break;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index e724601..000182e 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1408,7 +1408,7 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
+int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
 {
 	vector128 vrs = VCPU_VSX_VR(vcpu, rs);
 	u32 di;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread


* [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_instr() input
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-04-25 11:54   ` wei.guo.simon
  -1 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Simon Guo, linuxppc-dev, kvm

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs LOAD_VSX/STORE_VSX instruction MMIO emulation with
analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT
properties exported by analyse_instr() and handles each case accordingly.

When emulating a VSX store, the VSX register needs to be flushed first so
that the correct register value can be retrieved before it is written to
I/O memory.
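The copy-splitting logic in the new LOAD_VSX/STORE_VSX cases can be
sketched as a standalone helper (hypothetical names; the real code writes
vcpu->arch.mmio_vsx_copy_nums directly): a precision-converting access such
as lxsspx does one copy of `size` bytes, while a full 16-byte access such
as lxvd2x or lxvw4x is split into size/element_size copies of element_size
bytes each.

```c
#include <assert.h>

/* Decide how many MMIO copies to perform and how wide each one is,
 * mirroring the size/element_size split in the LOAD_VSX/STORE_VSX cases. */
static void vsx_copy_plan(int size, int element_size,
			  int *copy_nums, int *io_size_each)
{
	if (size < element_size) {
		/* precision convert case: lxsspx, stxsspx */
		*copy_nums = 1;
		*io_size_each = size;
	} else {
		/* lxvw4x, lxvd2x, stxvw4x, stxvd2x, etc. */
		*copy_nums = size / element_size;
		*io_size_each = element_size;
	}
}
```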

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/emulate_loadstore.c | 256 ++++++++++++++---------------------
 1 file changed, 101 insertions(+), 155 deletions(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 0bfee2f..bbd2f58 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -181,6 +181,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 
 			break;
 #endif
+#ifdef CONFIG_VSX
+		case LOAD_VSX: {
+			int io_size_each;
+
+			if (op.vsx_flags & VSX_CHECK_VEC) {
+				if (kvmppc_check_altivec_disabled(vcpu))
+					return EMULATE_DONE;
+			} else {
+				if (kvmppc_check_vsx_disabled(vcpu))
+					return EMULATE_DONE;
+			}
+
+			if (op.vsx_flags & VSX_FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			if (op.element_size == 8)  {
+				if (op.vsx_flags & VSX_SPLAT)
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
+				else
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+			} else if (op.element_size == 4) {
+				if (op.vsx_flags & VSX_SPLAT)
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
+				else
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+			} else
+				break;
+
+			if (size < op.element_size) {
+				/* precision convert case: lxsspx, etc */
+				vcpu->arch.mmio_vsx_copy_nums = 1;
+				io_size_each = size;
+			} else { /* lxvw4x, lxvd2x, etc */
+				vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+				io_size_each = op.element_size;
+			}
+
+			emulated = kvmppc_handle_vsx_load(run, vcpu,
+					KVM_MMIO_REG_VSX|op.reg, io_size_each,
+					1, op.type & SIGNEXT);
+			break;
+		}
+#endif
 		case STORE:
 			if (op.type & UPDATE) {
 				vcpu->arch.mmio_ra = op.update_reg;
@@ -248,6 +296,59 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			}
 			break;
 #endif
+#ifdef CONFIG_VSX
+		case STORE_VSX: {
+				/* io length for each mmio emulation */
+			int io_size_each;
+
+			if (op.vsx_flags & VSX_CHECK_VEC) {
+				if (kvmppc_check_altivec_disabled(vcpu))
+					return EMULATE_DONE;
+			} else {
+				if (kvmppc_check_vsx_disabled(vcpu))
+					return EMULATE_DONE;
+			}
+
+			/* if it is PR KVM, the FP/VEC/VSX registers need to
+			 * be flushed so that kvmppc_handle_store() can read
+			 * actual VMX vals from vcpu->arch.
+			 */
+			if (!is_kvmppc_hv_enabled(vcpu->kvm))
+				vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+						MSR_VSX);
+
+			if (op.vsx_flags & VSX_FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			/* stxsiwx has a special vsx_offset */
+			if ((get_op(inst) == 31) &&
+					(get_xop(inst) == OP_31_XOP_STXSIWX))
+				vcpu->arch.mmio_vsx_offset = 1;
+
+			if (op.element_size == 8)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+			else if (op.element_size == 4)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+			else
+				break;
+
+			if (size < op.element_size) {
+				/* precise conversion case, like stxsspx */
+				vcpu->arch.mmio_vsx_copy_nums = 1;
+				io_size_each = size;
+			} else { /* stxvw4x, stxvd2x, etc */
+				vcpu->arch.mmio_vsx_copy_nums =
+						size/op.element_size;
+				io_size_each = op.element_size;
+			}
+
+			emulated = kvmppc_handle_vsx_store(run, vcpu,
+					op.reg, io_size_each, 1);
+			break;
+		}
+#endif
 		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
@@ -262,161 +363,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 	}
 
-
-	if (emulated == EMULATE_DONE)
-		goto out;
-
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-#ifdef CONFIG_VSX
-		case OP_31_XOP_LXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSIWAX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 1);
-			break;
-
-		case OP_31_XOP_LXSIWZX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVD2X:
-		/*
-		 * In this case, the official load/store process is like this:
-		 * Step1, exit from vm by page fault isr, then kvm save vsr.
-		 * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS
-		 * as reference.
-		 *
-		 * Step2, copy data between memory and VCPU
-		 * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use
-		 * 2copies*8bytes or 4copies*4bytes
-		 * to simulate one copy of 16bytes.
-		 * Also there is an endian issue here, we should notice the
-		 * layout of memory.
-		 * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference.
-		 * If host is little-endian, kvm will call XXSWAPD for
-		 * LXVD2X_ROT/STXVD2X_ROT.
-		 * So, if host is little-endian,
-		 * the postion of memeory should be swapped.
-		 *
-		 * Step3, return to guest, kvm reset register.
-		 * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS
-		 * as reference.
-		 */
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVDSX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type =
-				 KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_STXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-						 rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-						 rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXSIWX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_offset = 1;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXVD2X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 4, 1);
-			break;
-#endif /* CONFIG_VSX */
-
-		default:
-			emulated = EMULATE_FAIL;
-			break;
-		}
-		break;
-
-	default:
-		emulated = EMULATE_FAIL;
-		break;
-	}
-
-out:
 	if (emulated == EMULATE_FAIL) {
 		advance = 0;
 		kvmppc_core_queue_program(vcpu, 0);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread

* [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_instr() input
@ 2018-04-25 11:54   ` wei.guo.simon
  0 siblings, 0 replies; 111+ messages in thread
From: wei.guo.simon @ 2018-04-25 11:54 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Paul Mackerras, kvm, linuxppc-dev, Simon Guo

From: Simon Guo <wei.guo.simon@gmail.com>

This patch reconstructs LOAD_VSX/STORE_VSX instruction MMIO emulation with
analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT flags
exported by analyse_instr() and handles them accordingly.

When emulating a VSX store, the VSX register needs to be flushed first so
that the correct register value can be retrieved before writing to the MMIO
region.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
---
 arch/powerpc/kvm/emulate_loadstore.c | 256 ++++++++++++++---------------------
 1 file changed, 101 insertions(+), 155 deletions(-)

diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 0bfee2f..bbd2f58 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -181,6 +181,54 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 
 			break;
 #endif
+#ifdef CONFIG_VSX
+		case LOAD_VSX: {
+			int io_size_each;
+
+			if (op.vsx_flags & VSX_CHECK_VEC) {
+				if (kvmppc_check_altivec_disabled(vcpu))
+					return EMULATE_DONE;
+			} else {
+				if (kvmppc_check_vsx_disabled(vcpu))
+					return EMULATE_DONE;
+			}
+
+			if (op.vsx_flags & VSX_FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			if (op.element_size == 8)  {
+				if (op.vsx_flags & VSX_SPLAT)
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
+				else
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+			} else if (op.element_size == 4) {
+				if (op.vsx_flags & VSX_SPLAT)
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD_LOAD_DUMP;
+				else
+					vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+			} else
+				break;
+
+			if (size < op.element_size) {
+				/* precision convert case: lxsspx, etc */
+				vcpu->arch.mmio_vsx_copy_nums = 1;
+				io_size_each = size;
+			} else { /* lxvw4x, lxvd2x, etc */
+				vcpu->arch.mmio_vsx_copy_nums =
+					size/op.element_size;
+				io_size_each = op.element_size;
+			}
+
+			emulated = kvmppc_handle_vsx_load(run, vcpu,
+					KVM_MMIO_REG_VSX|op.reg, io_size_each,
+					1, op.type & SIGNEXT);
+			break;
+		}
+#endif
 		case STORE:
 			if (op.type & UPDATE) {
 				vcpu->arch.mmio_ra = op.update_reg;
@@ -248,6 +296,59 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			}
 			break;
 #endif
+#ifdef CONFIG_VSX
+		case STORE_VSX: {
+				/* io length for each mmio emulation */
+			int io_size_each;
+
+			if (op.vsx_flags & VSX_CHECK_VEC) {
+				if (kvmppc_check_altivec_disabled(vcpu))
+					return EMULATE_DONE;
+			} else {
+				if (kvmppc_check_vsx_disabled(vcpu))
+					return EMULATE_DONE;
+			}
+
+			/* if it is PR KVM, the FP/VEC/VSX registers need to
+			 * be flushed so that kvmppc_handle_store() can read
+			 * actual VMX vals from vcpu->arch.
+			 */
+			if (!is_kvmppc_hv_enabled(vcpu->kvm))
+				vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
+						MSR_VSX);
+
+			if (op.vsx_flags & VSX_FPCONV)
+				vcpu->arch.mmio_sp64_extend = 1;
+
+			/* stxsiwx has a special vsx_offset */
+			if ((get_op(inst) == 31) &&
+					(get_xop(inst) == OP_31_XOP_STXSIWX))
+				vcpu->arch.mmio_vsx_offset = 1;
+
+			if (op.element_size == 8)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_DWORD;
+			else if (op.element_size == 4)
+				vcpu->arch.mmio_vsx_copy_type =
+						KVMPPC_VSX_COPY_WORD;
+			else
+				break;
+
+			if (size < op.element_size) {
+				/* precise conversion case, like stxsspx */
+				vcpu->arch.mmio_vsx_copy_nums = 1;
+				io_size_each = size;
+			} else { /* stxvw4x, stxvd2x, etc */
+				vcpu->arch.mmio_vsx_copy_nums =
+						size/op.element_size;
+				io_size_each = op.element_size;
+			}
+
+			emulated = kvmppc_handle_vsx_store(run, vcpu,
+					op.reg, io_size_each, 1);
+			break;
+		}
+#endif
 		case CACHEOP:
 			/* Do nothing. The guest is performing dcbi because
 			 * hardware DMA is not snooped by the dcache, but
@@ -262,161 +363,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 		}
 	}
 
-
-	if (emulated == EMULATE_DONE)
-		goto out;
-
-	switch (get_op(inst)) {
-	case 31:
-		switch (get_xop(inst)) {
-#ifdef CONFIG_VSX
-		case OP_31_XOP_LXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXSIWAX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 1);
-			break;
-
-		case OP_31_XOP_LXSIWZX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVD2X:
-		/*
-		 * In this case, the official load/store process is like this:
-		 * Step1, exit from vm by page fault isr, then kvm save vsr.
-		 * Please see guest_exit_cont->store_fp_state->SAVE_32VSRS
-		 * as reference.
-		 *
-		 * Step2, copy data between memory and VCPU
-		 * Notice: for LXVD2X/STXVD2X/LXVW4X/STXVW4X, we use
-		 * 2copies*8bytes or 4copies*4bytes
-		 * to simulate one copy of 16bytes.
-		 * Also there is an endian issue here, we should notice the
-		 * layout of memory.
-		 * Please see MARCO of LXVD2X_ROT/STXVD2X_ROT as more reference.
-		 * If host is little-endian, kvm will call XXSWAPD for
-		 * LXVD2X_ROT/STXVD2X_ROT.
-		 * So, if host is little-endian,
-		 * the postion of memeory should be swapped.
-		 *
-		 * Step3, return to guest, kvm reset register.
-		 * Please see kvmppc_hv_entry->load_fp_state->REST_32VSRS
-		 * as reference.
-		 */
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 4, 1, 0);
-			break;
-
-		case OP_31_XOP_LXVDSX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type =
-				 KVMPPC_VSX_COPY_DWORD_LOAD_DUMP;
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
-				KVM_MMIO_REG_VSX|rt, 8, 1, 0);
-			break;
-
-		case OP_31_XOP_STXSDX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-						 rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXSSPX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			vcpu->arch.mmio_sp64_extend = 1;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-						 rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXSIWX:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_offset = 1;
-			vcpu->arch.mmio_vsx_copy_nums = 1;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 4, 1);
-			break;
-
-		case OP_31_XOP_STXVD2X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 2;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_DWORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 8, 1);
-			break;
-
-		case OP_31_XOP_STXVW4X:
-			if (kvmppc_check_vsx_disabled(vcpu))
-				return EMULATE_DONE;
-			vcpu->arch.mmio_vsx_copy_nums = 4;
-			vcpu->arch.mmio_vsx_copy_type = KVMPPC_VSX_COPY_WORD;
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
-							 rs, 4, 1);
-			break;
-#endif /* CONFIG_VSX */
-
-		default:
-			emulated = EMULATE_FAIL;
-			break;
-		}
-		break;
-
-	default:
-		emulated = EMULATE_FAIL;
-		break;
-	}
-
-out:
 	if (emulated == EMULATE_FAIL) {
 		advance = 0;
 		kvmppc_core_queue_program(vcpu, 0);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 111+ messages in thread

* Re: [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
  2018-04-25 11:54   ` wei.guo.simon
  (?)
@ 2018-04-27  3:47     ` kbuild test robot
  -1 siblings, 0 replies; 111+ messages in thread
From: kbuild test robot @ 2018-04-27  3:47 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: kvm, Simon Guo, kvm-ppc, kbuild-all, linuxppc-dev

[-- Attachment #1: Type: text/plain, Size: 5424 bytes --]

Hi Simon,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/wei-guo-simon-gmail-com/KVM-PPC-add-pt_regs-into-kvm_vcpu_arch-and-move-vcpu-arch-gpr-into-it/20180427-055410
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

Note: the linux-review/wei-guo-simon-gmail-com/KVM-PPC-add-pt_regs-into-kvm_vcpu_arch-and-move-vcpu-arch-gpr-into-it/20180427-055410 HEAD 92a7de2f1920f80f57d625d6d07731a00ea99161 builds fine.
      It only hurts bisectability.

All error/warnings (new ones prefixed by >>):

   In file included from arch/powerpc/include/asm/kvm_book3s.h:271:0,
                    from arch/powerpc/kernel/asm-offsets.c:57:
   arch/powerpc/include/asm/kvm_book3s_64.h: In function 'copy_from_checkpoint':
>> arch/powerpc/include/asm/kvm_book3s_64.h:493:20: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
     memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
                       ^~~
                       qpr
   arch/powerpc/include/asm/kvm_book3s_64.h:494:27: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
            sizeof(vcpu->arch.gpr));
                              ^~~
                              qpr
   arch/powerpc/include/asm/kvm_book3s_64.h: In function 'copy_to_checkpoint':
   arch/powerpc/include/asm/kvm_book3s_64.h:510:39: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
     memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
                                          ^~~
                                          qpr
   arch/powerpc/include/asm/kvm_book3s_64.h:511:27: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
            sizeof(vcpu->arch.gpr));
                              ^~~
                              qpr
   In file included from arch/powerpc/kernel/asm-offsets.c:30:0:
   arch/powerpc/kernel/asm-offsets.c: In function 'main':
>> include/linux/compiler-gcc.h:170:2: error: 'struct kvm_vcpu_arch' has no member named 'nip'
     __builtin_offsetof(a, b)
     ^
   include/linux/kbuild.h:6:62: note: in definition of macro 'DEFINE'
     asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
                                                                 ^~~
   include/linux/stddef.h:17:32: note: in expansion of macro '__compiler_offsetof'
    #define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
                                   ^~~~~~~~~~~~~~~~~~~
>> include/linux/kbuild.h:11:14: note: in expansion of macro 'offsetof'
     DEFINE(sym, offsetof(struct str, mem))
                 ^~~~~~~~
>> arch/powerpc/kernel/asm-offsets.c:441:2: note: in expansion of macro 'OFFSET'
     OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
     ^~~~~~
   make[2]: *** [arch/powerpc/kernel/asm-offsets.s] Error 1
   make[2]: Target '__build' not remade because of errors.
   make[1]: *** [prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [sub-make] Error 2

vim +493 arch/powerpc/include/asm/kvm_book3s_64.h

4bb3c7a0 Paul Mackerras 2018-03-21  481  
4bb3c7a0 Paul Mackerras 2018-03-21  482  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
4bb3c7a0 Paul Mackerras 2018-03-21  483  static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
4bb3c7a0 Paul Mackerras 2018-03-21  484  {
4bb3c7a0 Paul Mackerras 2018-03-21  485  	vcpu->arch.cr  = vcpu->arch.cr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  486  	vcpu->arch.xer = vcpu->arch.xer_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  487  	vcpu->arch.lr  = vcpu->arch.lr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  488  	vcpu->arch.ctr = vcpu->arch.ctr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  489  	vcpu->arch.amr = vcpu->arch.amr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  490  	vcpu->arch.ppr = vcpu->arch.ppr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  491  	vcpu->arch.dscr = vcpu->arch.dscr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  492  	vcpu->arch.tar = vcpu->arch.tar_tm;
4bb3c7a0 Paul Mackerras 2018-03-21 @493  	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
4bb3c7a0 Paul Mackerras 2018-03-21  494  	       sizeof(vcpu->arch.gpr));
4bb3c7a0 Paul Mackerras 2018-03-21  495  	vcpu->arch.fp  = vcpu->arch.fp_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  496  	vcpu->arch.vr  = vcpu->arch.vr_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  497  	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
4bb3c7a0 Paul Mackerras 2018-03-21  498  }
4bb3c7a0 Paul Mackerras 2018-03-21  499  

:::::: The code at line 493 was first introduced by commit
:::::: 4bb3c7a0208fc13ca70598efd109901a7cd45ae7 KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9

:::::: TO: Paul Mackerras <paulus@ozlabs.org>
:::::: CC: Michael Ellerman <mpe@ellerman.id.au>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23368 bytes --]

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
  2018-04-27  3:47     ` kbuild test robot
  (?)
@ 2018-04-27 10:21       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-04-27 10:21 UTC (permalink / raw)
  To: kbuild test robot; +Cc: linuxppc-dev, kbuild-all, kvm-ppc, kvm

On Fri, Apr 27, 2018 at 11:47:21AM +0800, kbuild test robot wrote:
> Hi Simon,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.17-rc2 next-20180426]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/wei-guo-simon-gmail-com/KVM-PPC-add-pt_regs-into-kvm_vcpu_arch-and-move-vcpu-arch-gpr-into-it/20180427-055410
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc-defconfig (attached as .config)
> compiler: powerpc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=powerpc 
> 
> Note: the linux-review/wei-guo-simon-gmail-com/KVM-PPC-add-pt_regs-into-kvm_vcpu_arch-and-move-vcpu-arch-gpr-into-it/20180427-055410 HEAD 92a7de2f1920f80f57d625d6d07731a00ea99161 builds fine.
>       It only hurts bisectibility.
> 
> All error/warnings (new ones prefixed by >>):
> 
>    In file included from arch/powerpc/include/asm/kvm_book3s.h:271:0,
>                     from arch/powerpc/kernel/asm-offsets.c:57:
>    arch/powerpc/include/asm/kvm_book3s_64.h: In function 'copy_from_checkpoint':
> >> arch/powerpc/include/asm/kvm_book3s_64.h:493:20: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
>      memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
>                        ^~~
>                        qpr
>    arch/powerpc/include/asm/kvm_book3s_64.h:494:27: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
>             sizeof(vcpu->arch.gpr));
>                               ^~~
>                               qpr
>    arch/powerpc/include/asm/kvm_book3s_64.h: In function 'copy_to_checkpoint':
>    arch/powerpc/include/asm/kvm_book3s_64.h:510:39: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
>      memcpy(vcpu->arch.gpr_tm, vcpu->arch.gpr,
>                                           ^~~
>                                           qpr
>    arch/powerpc/include/asm/kvm_book3s_64.h:511:27: error: 'struct kvm_vcpu_arch' has no member named 'gpr'; did you mean 'qpr'?
>             sizeof(vcpu->arch.gpr));
>                               ^~~
>                               qpr
>    In file included from arch/powerpc/kernel/asm-offsets.c:30:0:
>    arch/powerpc/kernel/asm-offsets.c: In function 'main':
> >> include/linux/compiler-gcc.h:170:2: error: 'struct kvm_vcpu_arch' has no member named 'nip'
>      __builtin_offsetof(a, b)
>      ^
>    include/linux/kbuild.h:6:62: note: in definition of macro 'DEFINE'
>      asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
>                                                                  ^~~
>    include/linux/stddef.h:17:32: note: in expansion of macro '__compiler_offsetof'
>     #define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
>                                    ^~~~~~~~~~~~~~~~~~~
> >> include/linux/kbuild.h:11:14: note: in expansion of macro 'offsetof'
>      DEFINE(sym, offsetof(struct str, mem))
>                  ^~~~~~~~
> >> arch/powerpc/kernel/asm-offsets.c:441:2: note: in expansion of macro 'OFFSET'
>      OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
>      ^~~~~~
>    make[2]: *** [arch/powerpc/kernel/asm-offsets.s] Error 1
>    make[2]: Target '__build' not remade because of errors.
>    make[1]: *** [prepare0] Error 2
>    make[1]: Target 'prepare' not remade because of errors.
>    make: *** [sub-make] Error 2
> 
> vim +493 arch/powerpc/include/asm/kvm_book3s_64.h
> 
> 4bb3c7a0 Paul Mackerras 2018-03-21  481  
> 4bb3c7a0 Paul Mackerras 2018-03-21  482  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
> 4bb3c7a0 Paul Mackerras 2018-03-21  483  static inline void copy_from_checkpoint(struct kvm_vcpu *vcpu)
> 4bb3c7a0 Paul Mackerras 2018-03-21  484  {
> 4bb3c7a0 Paul Mackerras 2018-03-21  485  	vcpu->arch.cr  = vcpu->arch.cr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  486  	vcpu->arch.xer = vcpu->arch.xer_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  487  	vcpu->arch.lr  = vcpu->arch.lr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  488  	vcpu->arch.ctr = vcpu->arch.ctr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  489  	vcpu->arch.amr = vcpu->arch.amr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  490  	vcpu->arch.ppr = vcpu->arch.ppr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  491  	vcpu->arch.dscr = vcpu->arch.dscr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  492  	vcpu->arch.tar = vcpu->arch.tar_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21 @493  	memcpy(vcpu->arch.gpr, vcpu->arch.gpr_tm,
> 4bb3c7a0 Paul Mackerras 2018-03-21  494  	       sizeof(vcpu->arch.gpr));
> 4bb3c7a0 Paul Mackerras 2018-03-21  495  	vcpu->arch.fp  = vcpu->arch.fp_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  496  	vcpu->arch.vr  = vcpu->arch.vr_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  497  	vcpu->arch.vrsave = vcpu->arch.vrsave_tm;
> 4bb3c7a0 Paul Mackerras 2018-03-21  498  }
> 4bb3c7a0 Paul Mackerras 2018-03-21  499  
> 
> :::::: The code at line 493 was first introduced by commit
> :::::: 4bb3c7a0208fc13ca70598efd109901a7cd45ae7 KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9
> 
> :::::: TO: Paul Mackerras <paulus@ozlabs.org>
> :::::: CC: Michael Ellerman <mpe@ellerman.id.au>
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

Somehow I put some code (which should have been in PATCH 01) into
PATCH 02 while splitting the patches, and it led to the error. I
will correct it in V2.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 00/11] KVM: PPC: reconstruct mmio emulation with analyse_instr()
  2018-04-25 11:54 ` wei.guo.simon
  (?)
@ 2018-05-03  5:31   ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:31 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:33PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> We already have analyse_instr(), which analyzes instructions for the instruction
> type, size, additional flags, etc. Much of what kvmppc_emulate_loadstore() does
> duplicates that logic, so it would be good to utilize analyse_instr() to
> reconstruct the code. The advantage is that the code logic will be shared and
> easier to maintain.
> 
> This patch series reconstructs kvmppc_emulate_loadstore() for various load/store
> instructions. 
> 
> The testcase locates at:
> https://github.com/justdoitqd/publicFiles/blob/master/test_mmio.c
> 
> - Tested at both PR/HV KVM. 
> - Also tested with little endian host & big endian guest.
> 
> Tested instruction list: 
> 	lbz lbzu lbzx ld ldbrx
> 	ldu ldx lfd lfdu lfdx
> 	lfiwax lfiwzx lfs lfsu lfsx
> 	lha lhau lhax lhbrx lhz
> 	lhzu lhzx lvx lwax lwbrx
> 	lwz lwzu lwzx lxsdx lxsiwax
> 	lxsiwzx lxsspx lxvd2x lxvdsx lxvw4x
> 	stb stbu stbx std stdbrx
> 	stdu stdx stfd stfdu stfdx
> 	stfiwx stfs stfsx sth sthbrx
> 	sthu sthx stvx stw stwbrx
> 	stwu stwx stxsdx stxsiwx stxsspx
> 	stxvd2x stxvw4x

Thanks for doing this.  It's nice to see that this makes the code 260
lines smaller.

I have some comments on the individual patches, which I will give in
replies to those patches.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
  2018-04-25 11:54   ` wei.guo.simon
  (?)
@ 2018-05-03  5:34     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:34 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:34PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently the regs are scattered across the kvm_vcpu_arch structure; it would
> be neater to organize them into a pt_regs structure.
> 
> Also it will enable reconstruct MMIO emulation code with

"reimplement" would be clearer than "reconstruct" here, I think.

> @@ -438,7 +438,7 @@ int main(void)
>  	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
>  #endif
>  	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> -	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
> +	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);

This hunk shouldn't be in this patch.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
  2018-04-25 11:54   ` wei.guo.simon
@ 2018-05-03  5:46     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:46 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:35PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch moves nip/ctr/lr/xer registers from scattered places in
> kvm_vcpu_arch to pt_regs structure.
> 
> The cr register is "unsigned long" in pt_regs and u32 in vcpu->arch.
> It needs more consideration and may be moved in a later patch.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Mostly looks fine; some nits below.

> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index e8a78a5..731f7d4 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -431,14 +431,14 @@ int main(void)
>  #ifdef CONFIG_ALTIVEC
>  	OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
>  #endif
> -	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
> -	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
> -	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
> +	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
> +	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
> +	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
>  #ifdef CONFIG_PPC_BOOK3S
>  	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
>  #endif
> -	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> -	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);

This should be arch.pc; arch.nip doesn't exist.

> +	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);

I thought the patch description said you weren't moving CR at this
stage?

> +	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>  	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
>  	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
> @@ -693,11 +693,11 @@ int main(void)
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>  
>  #else /* CONFIG_PPC_BOOK3S */
> -	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> -	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
> -	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
> -	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
> -	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
> +	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);

Once again VCPU_CR should not be changed.

> +	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
> +	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
> +	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
> +	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
>  	OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
>  	OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
>  	OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 03/11] KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX store
  2018-04-25 11:54   ` wei.guo.simon
@ 2018-05-03  5:48     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:48 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:36PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> When KVM emulates a VMX store, it invokes kvmppc_get_vmx_data() to
> retrieve the VMX register value. kvmppc_get_vmx_data() checks
> mmio_host_swabbed to decide which doubleword of vr[] to use. But
> mmio_host_swabbed can be uninitialized during the VMX store path:
> 
> kvmppc_emulate_loadstore
> 	\- kvmppc_handle_store128_by2x64
> 		\- kvmppc_get_vmx_data
> 
> This patch corrects this by using kvmppc_need_byteswap() to choose the
> doubleword of vr[] and initializes mmio_host_swabbed to avoid invisible
> trouble.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

The patch is correct, but I think the patch description needs to say
that vcpu->arch.mmio_host_swabbed is not meant to be used at all for
emulation of store instructions, and this patch makes that true for
VMX stores.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 04/11] KVM: PPC: fix incorrect element_size for stxsiwx in analyse_instr
  2018-04-25 11:54   ` wei.guo.simon
@ 2018-05-03  5:50     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:50 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:37PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> stxsiwx will place the contents of word element 1 of the VSR into word
> storage at the EA, so the element size of stxsiwx should be 4.
> 
> This patch corrects the size from 8 to 4.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> ---
>  arch/powerpc/lib/sstep.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index 34d68f1..151d484 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/arch/powerpc/lib/sstep.c
> @@ -2178,7 +2178,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
>  		case 140:	/* stxsiwx */
>  			op->reg = rd | ((instr & 1) << 5);
>  			op->type = MKOP(STORE_VSX, 0, 4);
> -			op->element_size = 8;
> +			op->element_size = 4;

I made the element_size be 8 deliberately because this way, with
size=4 but element_size=8, the code will naturally choose the correct
word (the least-significant word of the left half) of the register to
store into memory.  With this change you then need the special case in
a later patch for stxsiwx, which you shouldn't need if you don't make
this change.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 05/11] KVM: PPC: add GPR RA update skeleton for MMIO emulation
  2018-04-25 11:54   ` wei.guo.simon
@ 2018-05-03  5:58     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:58 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:38PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> To optimize the KVM emulation code with analyse_instr(), add a new
> mmio_update_ra flag to aid with the GPR RA update.
> 
> This patch arms the RA update in the load/store emulation path for both
> QEMU MMIO emulation and coalesced MMIO emulation.

It's not clear to me why you need to do this.  The existing code
writes RA at the point where the instruction is decoded.  In later
patches, you change that so the RA update occurs after the MMIO
operation is performed.  Is there a particular reason why you made
that change?

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 06/11] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation
  2018-04-25 11:54   ` wei.guo.simon
@ 2018-05-03  5:59     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  5:59 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:39PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Some VSX instructions, such as lxvwsx, splat a word across the VSR.
> This patch adds the VSX copy type KVMPPC_VSX_COPY_WORD_LOAD_DUMP to
> support them.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input
  2018-04-25 11:54   ` [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input wei.guo.simon
@ 2018-05-03  6:03     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  6:03 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:40PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
> with analyse_instr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
> properties exported by analyse_instr() and invokes
> kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
> 
> It also moves CACHEOP type handling into the skeleton.
> 
> instruction_type within sstep.h is renamed to avoid conflict with
> kvm_ppc.h.

I'd prefer to change the one in kvm_ppc.h, especially since that one
isn't exactly about the type of instruction, but more about the type
of interrupt that led to us trying to fetch the instruction.

> Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> ---
>  arch/powerpc/include/asm/sstep.h     |   2 +-
>  arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
>  2 files changed, 51 insertions(+), 233 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> index ab9d849..0a1a312 100644
> --- a/arch/powerpc/include/asm/sstep.h
> +++ b/arch/powerpc/include/asm/sstep.h
> @@ -23,7 +23,7 @@
>  #define IS_RFID(instr)		(((instr) & 0xfc0007fe) == 0x4c000024)
>  #define IS_RFI(instr)		(((instr) & 0xfc0007fe) == 0x4c000064)
>  
> -enum instruction_type {
> +enum analyse_instruction_type {
>  	COMPUTE,		/* arith/logical/CR op, etc. */
>  	LOAD,			/* load and store types need to be contiguous */
>  	LOAD_MULTI,
> diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> index 90b9692..aaaf872 100644
> --- a/arch/powerpc/kvm/emulate_loadstore.c
> +++ b/arch/powerpc/kvm/emulate_loadstore.c
> @@ -31,9 +31,12 @@
>  #include <asm/kvm_ppc.h>
>  #include <asm/disassemble.h>
>  #include <asm/ppc-opcode.h>
> +#include <asm/sstep.h>
>  #include "timing.h"
>  #include "trace.h"
>  
> +int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
> +		  unsigned int instr);

You shouldn't need this prototype here, since there's one in sstep.h.

>  #ifdef CONFIG_PPC_FPU
>  static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
>  {
> @@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  	struct kvm_run *run = vcpu->run;
>  	u32 inst;
>  	int ra, rs, rt;
> -	enum emulation_result emulated;
> +	enum emulation_result emulated = EMULATE_FAIL;
>  	int advance = 1;
> +	struct instruction_op op;
>  
>  	/* this default type might be overwritten by subcategories */
>  	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
> @@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  	vcpu->arch.mmio_update_ra = 0;
>  	vcpu->arch.mmio_host_swabbed = 0;
>  
> -	switch (get_op(inst)) {
> -	case 31:
> -		switch (get_xop(inst)) {
> -		case OP_31_XOP_LWZX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> -			break;
> -
> -		case OP_31_XOP_LWZUX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_LBZX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> -			break;
> +	emulated = EMULATE_FAIL;
> +	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
> +	vcpu->arch.regs.ccr = vcpu->arch.cr;
> +	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
> +		int type = op.type & INSTR_TYPE_MASK;
> +		int size = GETSIZE(op.type);
>  
> -		case OP_31_XOP_LBZUX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> +		switch (type) {
> +		case LOAD:  {
> +			int instr_byte_swap = op.type & BYTEREV;
>  
> -		case OP_31_XOP_STDX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> -			break;
> +			if (op.type & UPDATE) {
> +				vcpu->arch.mmio_ra = op.update_reg;
> +				vcpu->arch.mmio_update_ra = 1;
> +			}

Any reason we can't just update RA here immediately?

>  
> -		case OP_31_XOP_STDUX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_STWX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> -			break;
> -
> -		case OP_31_XOP_STWUX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_STBX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> -			break;
> -
> -		case OP_31_XOP_STBUX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_LHAX:
> -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> -			break;
> -
> -		case OP_31_XOP_LHAUX:
> -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> +			if (op.type & SIGNEXT)
> +				emulated = kvmppc_handle_loads(run, vcpu,
> +						op.reg, size, !instr_byte_swap);
> +			else
> +				emulated = kvmppc_handle_load(run, vcpu,
> +						op.reg, size, !instr_byte_swap);
>  
> -		case OP_31_XOP_LHZX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
>  			break;
> -
> -		case OP_31_XOP_LHZUX:
> -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_STHX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> -			break;
> -
> -		case OP_31_XOP_STHUX:
> -			emulated = kvmppc_handle_store(run, vcpu,
> -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> -			break;
> -
> -		case OP_31_XOP_DCBST:
> -		case OP_31_XOP_DCBF:
> -		case OP_31_XOP_DCBI:
> +		}
> +		case STORE:
> +			if (op.type & UPDATE) {
> +				vcpu->arch.mmio_ra = op.update_reg;
> +				vcpu->arch.mmio_update_ra = 1;
> +			}

Same comment again about updating RA.

> +
> +			/* if need byte reverse, op.val has been reverted by

"reversed" rather than "reverted".  "Reverted" means put back to a
former state.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops
  2018-04-25 11:54   ` wei.guo.simon
  (?)
@ 2018-05-03  6:08     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  6:08 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:41PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> Currently HV will save math regs(FP/VEC/VSX) when trap into host. But
> PR KVM will only save math regs when qemu task switch out of CPU.
> 
> To emulate FP/VEC/VSX load, PR KVM need to flush math regs firstly and
> then be able to update saved VCPU FPR/VEC/VSX area reasonably.
> 
> This patch adds the giveup_ext() to KVM ops (an empty one for HV KVM)
> and kvmppc_complete_mmio_load() can invoke that hook to flush math
> regs accordingly.
> 
> Math regs flush is also necessary for STORE, which will be covered
> in later patch within this patch series.
> 
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

I don't see where you have provided a function for Book E.

I would suggest you only set the function pointer to non-NULL when the
function is actually needed, i.e. for PR KVM.

It seems to me that this means that emulation of FP/VMX/VSX loads is
currently broken for PR KVM for the case where kvm_io_bus_read() is
able to supply the data, and the emulation of FP/VMX/VSX stores is
broken for PR KVM for all cases.  Do you agree?

> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 5b875ba..7eb5507 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
>  	return err;
>  }
>  
> +static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
> +{
> +}
> +
>  static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
>  {
>  	if (vpa->pinned_addr)
> @@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
>  	.configure_mmu = kvmhv_configure_mmu,
>  	.get_rmmu_info = kvmhv_get_rmmu_info,
>  	.set_smt_mode = kvmhv_set_smt_mode,
> +	.giveup_ext = kvmhv_giveup_ext,
>  };
>  
>  static int kvm_init_subcore_bitmap(void)

I think HV KVM could leave this pointer as NULL, and then...

> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 17f0315..e724601 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
>  		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
>  		break;
>  	case KVM_MMIO_REG_FPR:
> +		if (!is_kvmppc_hv_enabled(vcpu->kvm))
> +			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> +

This could become
		if (vcpu->kvm->arch.kvm_ops->giveup_ext)
			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);

and you wouldn't need to fix Book E explicitly.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input
  2018-04-25 11:54   ` [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input wei.guo.simon
  (?)
@ 2018-05-03  6:10     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  6:10 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:42PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch reconstructs LOAD_FP/STORE_FP instruction MMIO emulation with
> analyse_intr() input. It utilizes the FPCONV/UPDATE properties exported by
> analyse_instr() and invokes kvmppc_handle_load(s)/kvmppc_handle_store()
> accordingly.
> 
> The FP regs need to be flushed so that the right FP reg vals can be read
> from vcpu->arch.fpr.

This only applies for store instructions; it would be clearer if you
said that explicitly.

> 
> Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Same comment about updating RA as for the other patches.  Otherwise
this looks fine.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input
  2018-04-25 11:54   ` [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input wei.guo.simon
  (?)
@ 2018-05-03  6:17     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  6:17 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:43PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch reconstructs LOAD_VMX/STORE_VMX instruction MMIO emulation with
> analyse_intr() input. When emulating the store, the VMX reg will need to
> be flushed so that the right reg val can be retrieved before writing to
> IO MEM.
> 
> Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

This looks fine for lvx and stvx, but now we are also doing something
for the vector element loads and stores (lvebx, stvebx, lvehx, stvehx,
etc.) without having the logic to insert or extract the correct
element in the vector register image.  We need either to generate an
error for the element load/store instructions, or handle them
correctly.

> diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> index 2dbdf9a..0bfee2f 100644
> --- a/arch/powerpc/kvm/emulate_loadstore.c
> +++ b/arch/powerpc/kvm/emulate_loadstore.c
> @@ -160,6 +160,27 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  					KVM_MMIO_REG_FPR|op.reg, size, 1);
>  			break;
>  #endif
> +#ifdef CONFIG_ALTIVEC
> +		case LOAD_VMX:
> +			if (kvmppc_check_altivec_disabled(vcpu))
> +				return EMULATE_DONE;
> +
> +			/* VMX access will need to be size aligned */

This comment isn't quite right; it isn't that the address needs to be
size-aligned, it's that the hardware forcibly aligns it.  So I would
say something like /* Hardware enforces alignment of VMX accesses */.

> +			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
> +			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
> +
> +			if (size == 16) {
> +				vcpu->arch.mmio_vmx_copy_nums = 2;
> +				emulated = kvmppc_handle_load128_by2x64(run,
> +						vcpu, KVM_MMIO_REG_VMX|op.reg,
> +						1);
> +			} else if (size <= 8)
> +				emulated = kvmppc_handle_load(run, vcpu,
> +						KVM_MMIO_REG_VMX|op.reg,
> +						size, 1);
> +
> +			break;
> +#endif
>  		case STORE:
>  			if (op.type & UPDATE) {
>  				vcpu->arch.mmio_ra = op.update_reg;
> @@ -197,6 +218,36 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  					VCPU_FPR(vcpu, op.reg), size, 1);
>  			break;
>  #endif
> +#ifdef CONFIG_ALTIVEC
> +		case STORE_VMX:
> +			if (kvmppc_check_altivec_disabled(vcpu))
> +				return EMULATE_DONE;
> +
> +			/* VMX access will need to be size aligned */
> +			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
> +			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
> +
> +			/* if it is PR KVM, the FP/VEC/VSX registers need to
> +			 * be flushed so that kvmppc_handle_store() can read
> +			 * actual VMX vals from vcpu->arch.
> +			 */
> +			if (!is_kvmppc_hv_enabled(vcpu->kvm))

As before, I suggest just testing that the function pointer isn't
NULL.

> +				vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu,
> +						MSR_VEC);
> +
> +			if (size == 16) {
> +				vcpu->arch.mmio_vmx_copy_nums = 2;
> +				emulated = kvmppc_handle_store128_by2x64(run,
> +						vcpu, op.reg, 1);
> +			} else if (size <= 8) {
> +				u64 val;
> +
> +				kvmppc_get_vmx_data(vcpu, op.reg, &val);
> +				emulated = kvmppc_handle_store(run, vcpu,
> +						val, size, 1);
> +			}
> +			break;
> +#endif
>  		case CACHEOP:
>  			/* Do nothing. The guest is performing dcbi because
>  			 * hardware DMA is not snooped by the dcache, but
> @@ -354,28 +405,6 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  			break;
>  #endif /* CONFIG_VSX */
>  
> -#ifdef CONFIG_ALTIVEC
> -		case OP_31_XOP_LVX:
> -			if (kvmppc_check_altivec_disabled(vcpu))
> -				return EMULATE_DONE;
> -			vcpu->arch.vaddr_accessed &= ~0xFULL;
> -			vcpu->arch.paddr_accessed &= ~0xFULL;
> -			vcpu->arch.mmio_vmx_copy_nums = 2;
> -			emulated = kvmppc_handle_load128_by2x64(run, vcpu,
> -					KVM_MMIO_REG_VMX|rt, 1);
> -			break;
> -
> -		case OP_31_XOP_STVX:
> -			if (kvmppc_check_altivec_disabled(vcpu))
> -				return EMULATE_DONE;
> -			vcpu->arch.vaddr_accessed &= ~0xFULL;
> -			vcpu->arch.paddr_accessed &= ~0xFULL;
> -			vcpu->arch.mmio_vmx_copy_nums = 2;
> -			emulated = kvmppc_handle_store128_by2x64(run, vcpu,
> -					rs, 1);
> -			break;
> -#endif /* CONFIG_ALTIVEC */
> -
>  		default:
>  			emulated = EMULATE_FAIL;
>  			break;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index e724601..000182e 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -1408,7 +1408,7 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
> +int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
>  {
>  	vector128 vrs = VCPU_VSX_VR(vcpu, rs);
>  	u32 di;
> -- 
> 1.8.3.1

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

> -#endif /* CONFIG_ALTIVEC */
> -
>  		default:
>  			emulated = EMULATE_FAIL;
>  			break;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index e724601..000182e 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -1408,7 +1408,7 @@ int kvmppc_handle_load128_by2x64(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static inline int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
> +int kvmppc_get_vmx_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
>  {
>  	vector128 vrs = VCPU_VSX_VR(vcpu, rs);
>  	u32 di;
> -- 
> 1.8.3.1

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input
  2018-04-25 11:54   ` [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input wei.guo.simon
  (?)
@ 2018-05-03  6:26     ` Paul Mackerras
  -1 siblings, 0 replies; 111+ messages in thread
From: Paul Mackerras @ 2018-05-03  6:26 UTC (permalink / raw)
  To: wei.guo.simon; +Cc: linuxppc-dev, kvm, kvm-ppc

On Wed, Apr 25, 2018 at 07:54:44PM +0800, wei.guo.simon@gmail.com wrote:
> From: Simon Guo <wei.guo.simon@gmail.com>
> 
> This patch reconstructs LOAD_VSX/STORE_VSX instruction MMIO emulation with
> analyse_intr() input. It utilizes VSX_FPCONV/VSX_SPLAT/SIGNEXT exported
> by analyse_instr() and handle accordingly.
> 
> When emulating VSX store, the VSX reg will need to be flushed so that
> the right reg val can be retrieved before writing to IO MEM.
> 
> Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>

Looks good, except that you shouldn't need the special case for
stxsiwx.  With size=4 and element_size=8, kvmppc_handle_vsx_store
should just do the right thing, as far as I can see.

Paul.

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 00/11] KVM: PPC: reconstruct mmio emulation with analyse_instr()
  2018-05-03  5:31   ` Paul Mackerras
  (?)
@ 2018-05-03  7:41     ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  7:41 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:31:17PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:33PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > We already have analyse_instr() which analyzes instructions for the instruction
> > type, size, addtional flags, etc. What kvmppc_emulate_loadstore() did is somehow
> > duplicated and it will be good to utilize analyse_instr() to reconstruct the
> > code. The advantage is that the code logic will be shared and more clean to be 
> > maintained.
> > 
> > This patch series reconstructs kvmppc_emulate_loadstore() for various load/store
> > instructions. 
> > 
> > The testcase locates at:
> > https://github.com/justdoitqd/publicFiles/blob/master/test_mmio.c
> > 
> > - Tested at both PR/HV KVM. 
> > - Also tested with little endian host & big endian guest.
> > 
> > Tested instruction list: 
> > 	lbz lbzu lbzx ld ldbrx
> > 	ldu ldx lfd lfdu lfdx
> > 	lfiwax lfiwzx lfs lfsu lfsx
> > 	lha lhau lhax lhbrx lhz
> > 	lhzu lhzx lvx lwax lwbrx
> > 	lwz lwzu lwzx lxsdx lxsiwax
> > 	lxsiwzx lxsspx lxvd2x lxvdsx lxvw4x
> > 	stb stbu stbx std stdbrx
> > 	stdu stdx stfd stfdu stfdx
> > 	stfiwx stfs stfsx sth sthbrx
> > 	sthu sthx stvx stw stwbrx
> > 	stwu stwx stxsdx stxsiwx stxsspx
> > 	stxvd2x stxvw4x
> 
> Thanks for doing this.  It's nice to see that this makes the code 260
> lines smaller.
> 
> I have some comments on the individual patches, which I will give in
> replies to those patches.
> 
> Paul.
Thanks for those good comments. I will check them.

BR,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it
  2018-05-03  5:34     ` Paul Mackerras
  (?)
@ 2018-05-03  7:43       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  7:43 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:34:01PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:34PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Current regs are scattered at kvm_vcpu_arch structure and it will
> > be more neat to organize them into pt_regs structure.
> > 
> > Also it will enable reconstruct MMIO emulation code with
> 
> "reimplement" would be clearer than "reconstruct" here, I think.
> 
ok.

> > @@ -438,7 +438,7 @@ int main(void)
> >  	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
> >  #endif
> >  	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> > -	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
> > +	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
> 
> This hunk shouldn't be in this patch.
Yes, sorry, my patch split had some issues in patches 00/01.

> 
> Paul.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch
  2018-05-03  5:46     ` Paul Mackerras
  (?)
@ 2018-05-03  7:51       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  7:51 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:46:01PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:35PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch moves nip/ctr/lr/xer registers from scattered places in
> > kvm_vcpu_arch to pt_regs structure.
> > 
> > cr register is "unsigned long" in pt_regs and u32 in vcpu->arch.
> > It will need more consideration and may move in later patches.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> Mostly looks fine; some nits below.
> 
> > diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> > index e8a78a5..731f7d4 100644
> > --- a/arch/powerpc/kernel/asm-offsets.c
> > +++ b/arch/powerpc/kernel/asm-offsets.c
> > @@ -431,14 +431,14 @@ int main(void)
> >  #ifdef CONFIG_ALTIVEC
> >  	OFFSET(VCPU_VRS, kvm_vcpu, arch.vr.vr);
> >  #endif
> > -	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
> > -	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
> > -	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
> > +	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
> > +	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
> > +	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
> >  #ifdef CONFIG_PPC_BOOK3S
> >  	OFFSET(VCPU_TAR, kvm_vcpu, arch.tar);
> >  #endif
> > -	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> > -	OFFSET(VCPU_PC, kvm_vcpu, arch.nip);
> 
> This should be arch.pc; arch.nip doesn't exist.
Yes. That was introduced during the patch split. I will correct it.

> 
> > +	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
> 
> I thought the patch description said you weren't moving CR at this
> stage?
Sorry about that. Thanks for pointing it out.

> 
> > +	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
> >  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> >  	OFFSET(VCPU_MSR, kvm_vcpu, arch.shregs.msr);
> >  	OFFSET(VCPU_SRR0, kvm_vcpu, arch.shregs.srr0);
> > @@ -693,11 +693,11 @@ int main(void)
> >  #endif /* CONFIG_PPC_BOOK3S_64 */
> >  
> >  #else /* CONFIG_PPC_BOOK3S */
> > -	OFFSET(VCPU_CR, kvm_vcpu, arch.cr);
> > -	OFFSET(VCPU_XER, kvm_vcpu, arch.xer);
> > -	OFFSET(VCPU_LR, kvm_vcpu, arch.lr);
> > -	OFFSET(VCPU_CTR, kvm_vcpu, arch.ctr);
> > -	OFFSET(VCPU_PC, kvm_vcpu, arch.pc);
> > +	OFFSET(VCPU_CR, kvm_vcpu, arch.regs.ccr);
> 
> Once again VCPU_CR should not be changed.
Yep.

> 
> > +	OFFSET(VCPU_XER, kvm_vcpu, arch.regs.xer);
> > +	OFFSET(VCPU_LR, kvm_vcpu, arch.regs.link);
> > +	OFFSET(VCPU_CTR, kvm_vcpu, arch.regs.ctr);
> > +	OFFSET(VCPU_PC, kvm_vcpu, arch.regs.nip);
> >  	OFFSET(VCPU_SPRG9, kvm_vcpu, arch.sprg9);
> >  	OFFSET(VCPU_LAST_INST, kvm_vcpu, arch.last_inst);
> >  	OFFSET(VCPU_FAULT_DEAR, kvm_vcpu, arch.fault_dear);
> 
> Paul.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 03/11] KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX store
  2018-05-03  5:48     ` Paul Mackerras
  (?)
@ 2018-05-03  7:52       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  7:52 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:48:26PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:36PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > When KVM emulates VMX store, it will invoke kvmppc_get_vmx_data() to
> > retrieve VMX reg val. kvmppc_get_vmx_data() will check mmio_host_swabbed
> > to decide which double word of vr[] to be used. But the
> > mmio_host_swabbed can be uninitiazed during VMX store procedure:
> > 
> > kvmppc_emulate_loadstore
> > 	\- kvmppc_handle_store128_by2x64
> > 		\- kvmppc_get_vmx_data
> > 
> > This patch corrects this by using kvmppc_need_byteswap() to choose
> > double word of vr[] and initialized mmio_host_swabbed to avoid invisble
> > trouble.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> The patch is correct, but I think the patch description needs to say
> that vcpu->arch.mmio_host_swabbed is not meant to be used at all for
> emulation of store instructions, and this patch makes that true for
> VMX stores.
I will revise the commit message accordingly.

> 
> Paul.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 05/11] KVM: PPC: add GPR RA update skeleton for MMIO emulation
  2018-05-03  5:58     ` Paul Mackerras
  (?)
@ 2018-05-03  8:37       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  8:37 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:58:14PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:38PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > To optimize the KVM emulation code with analyse_instr(), this patch
> > adds a new mmio_update_ra flag to aid with the GPR RA update.
> > 
> > This patch arms the RA update on the load/store emulation path for
> > both qemu mmio emulation and coalesced mmio emulation.
> 
> It's not clear to me why you need to do this.  The existing code
> writes RA at the point where the instruction is decoded.  In later
> patches, you change that so the RA update occurs after the MMIO
> operation is performed.  Is there a particular reason why you made
> that change?
> 
> Paul.

I wanted to avoid the case where GPR RA is updated even on EMULATE_FAIL.
But if that is not mandatory, I can update RA in kvmppc_emulate_loadstore()
instead.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 04/11] KVM: PPC: fix incorrect element_size for stxsiwx in analyse_instr
  2018-05-03  5:50     ` Paul Mackerras
  (?)
@ 2018-05-03  9:05       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:05 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 03:50:47PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:37PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > stxsiwx will place the contents of word element 1 of the VSR into
> > word storage at the EA. So the element size of stxsiwx should be 4.
> > 
> > This patch corrects the size from 8 to 4.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/lib/sstep.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> > index 34d68f1..151d484 100644
> > --- a/arch/powerpc/lib/sstep.c
> > +++ b/arch/powerpc/lib/sstep.c
> > @@ -2178,7 +2178,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
> >  		case 140:	/* stxsiwx */
> >  			op->reg = rd | ((instr & 1) << 5);
> >  			op->type = MKOP(STORE_VSX, 0, 4);
> > -			op->element_size = 8;
> > +			op->element_size = 4;
> 
> I made the element_size be 8 deliberately because this way, with
> size=4 but element_size=8, the code will naturally choose the correct
> word (the least-significant word of the left half) of the register to
> store into memory.  With this change you then need the special case in
> a later patch for stxsiwx, which you shouldn't need if you don't make
> this change.
> 
> Paul.

Thanks for pointing that out. I will update accordingly.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input
  2018-05-03  6:03     ` [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input Paul Mackerras
  (?)
@ 2018-05-03  9:07       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:07 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:03:46PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:40PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
> > with analyse_intr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
> > properties exported by analyse_instr() and invokes
> > kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
> > 
> > It also moves CACHEOP type handling into the skeleton.
> > 
> > instruction_type within sstep.h is renamed to avoid conflict with
> > kvm_ppc.h.
> 
> I'd prefer to change the one in kvm_ppc.h, especially since that one
> isn't exactly about the type of instruction, but more about the type
> of interrupt that led to us trying to fetch the instruction.
> 
Agreed.

> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/include/asm/sstep.h     |   2 +-
> >  arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
> >  2 files changed, 51 insertions(+), 233 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> > index ab9d849..0a1a312 100644
> > --- a/arch/powerpc/include/asm/sstep.h
> > +++ b/arch/powerpc/include/asm/sstep.h
> > @@ -23,7 +23,7 @@
> >  #define IS_RFID(instr)		(((instr) & 0xfc0007fe) == 0x4c000024)
> >  #define IS_RFI(instr)		(((instr) & 0xfc0007fe) == 0x4c000064)
> >  
> > -enum instruction_type {
> > +enum analyse_instruction_type {
> >  	COMPUTE,		/* arith/logical/CR op, etc. */
> >  	LOAD,			/* load and store types need to be contiguous */
> >  	LOAD_MULTI,
> > diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> > index 90b9692..aaaf872 100644
> > --- a/arch/powerpc/kvm/emulate_loadstore.c
> > +++ b/arch/powerpc/kvm/emulate_loadstore.c
> > @@ -31,9 +31,12 @@
> >  #include <asm/kvm_ppc.h>
> >  #include <asm/disassemble.h>
> >  #include <asm/ppc-opcode.h>
> > +#include <asm/sstep.h>
> >  #include "timing.h"
> >  #include "trace.h"
> >  
> > +int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
> > +		  unsigned int instr);
> 
> You shouldn't need this prototype here, since there's one in sstep.h.
> 
Yes.

> >  #ifdef CONFIG_PPC_FPU
> >  static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	struct kvm_run *run = vcpu->run;
> >  	u32 inst;
> >  	int ra, rs, rt;
> > -	enum emulation_result emulated;
> > +	enum emulation_result emulated = EMULATE_FAIL;
> >  	int advance = 1;
> > +	struct instruction_op op;
> >  
> >  	/* this default type might be overwritten by subcategories */
> >  	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
> > @@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.mmio_update_ra = 0;
> >  	vcpu->arch.mmio_host_swabbed = 0;
> >  
> > -	switch (get_op(inst)) {
> > -	case 31:
> > -		switch (get_xop(inst)) {
> > -		case OP_31_XOP_LWZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LWZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LBZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			break;
> > +	emulated = EMULATE_FAIL;
> > +	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
> > +	vcpu->arch.regs.ccr = vcpu->arch.cr;
> > +	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
> > +		int type = op.type & INSTR_TYPE_MASK;
> > +		int size = GETSIZE(op.type);
> >  
> > -		case OP_31_XOP_LBZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +		switch (type) {
> > +		case LOAD:  {
> > +			int instr_byte_swap = op.type & BYTEREV;
> >  
> > -		case OP_31_XOP_STDX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			break;
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Any reason we can't just update RA here immediately?
> 
> >  
> > -		case OP_31_XOP_STDUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STWX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STWUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STBX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STBUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAUX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +			if (op.type & SIGNEXT)
> > +				emulated = kvmppc_handle_loads(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> > +			else
> > +				emulated = kvmppc_handle_load(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> >  
> > -		case OP_31_XOP_LHZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> >  			break;
> > -
> > -		case OP_31_XOP_LHZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STHX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STHUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_DCBST:
> > -		case OP_31_XOP_DCBF:
> > -		case OP_31_XOP_DCBI:
> > +		}
> > +		case STORE:
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Same comment again about updating RA.
> 
> > +
> > +			/* if need byte reverse, op.val has been reverted by
> 
> "reversed" rather than "reverted".  "Reverted" means put back to a
> former state.
I will correct that.

> 
> Paul.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input
@ 2018-05-03  9:07       ` Simon Guo
  0 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:07 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: kvm-ppc, kvm, linuxppc-dev

On Thu, May 03, 2018 at 04:03:46PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:40PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
> > with analyse_intr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
> > properties exported by analyse_instr() and invokes
> > kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
> > 
> > It also move CACHEOP type handling into the skeleton.
> > 
> > instruction_type within sstep.h is renamed to avoid conflict with
> > kvm_ppc.h.
> 
> I'd prefer to change the one in kvm_ppc.h, especially since that one
> isn't exactly about the type of instruction, but more about the type
> of interrupt led to us trying to fetch the instruction.
> 
Agreed.

> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/include/asm/sstep.h     |   2 +-
> >  arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
> >  2 files changed, 51 insertions(+), 233 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> > index ab9d849..0a1a312 100644
> > --- a/arch/powerpc/include/asm/sstep.h
> > +++ b/arch/powerpc/include/asm/sstep.h
> > @@ -23,7 +23,7 @@
> >  #define IS_RFID(instr)		(((instr) & 0xfc0007fe) == 0x4c000024)
> >  #define IS_RFI(instr)		(((instr) & 0xfc0007fe) == 0x4c000064)
> >  
> > -enum instruction_type {
> > +enum analyse_instruction_type {
> >  	COMPUTE,		/* arith/logical/CR op, etc. */
> >  	LOAD,			/* load and store types need to be contiguous */
> >  	LOAD_MULTI,
> > diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> > index 90b9692..aaaf872 100644
> > --- a/arch/powerpc/kvm/emulate_loadstore.c
> > +++ b/arch/powerpc/kvm/emulate_loadstore.c
> > @@ -31,9 +31,12 @@
> >  #include <asm/kvm_ppc.h>
> >  #include <asm/disassemble.h>
> >  #include <asm/ppc-opcode.h>
> > +#include <asm/sstep.h>
> >  #include "timing.h"
> >  #include "trace.h"
> >  
> > +int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
> > +		  unsigned int instr);
> 
> You shouldn't need this prototype here, since there's one in sstep.h.
> 
Yes.

> >  #ifdef CONFIG_PPC_FPU
> >  static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	struct kvm_run *run = vcpu->run;
> >  	u32 inst;
> >  	int ra, rs, rt;
> > -	enum emulation_result emulated;
> > +	enum emulation_result emulated = EMULATE_FAIL;
> >  	int advance = 1;
> > +	struct instruction_op op;
> >  
> >  	/* this default type might be overwritten by subcategories */
> >  	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
> > @@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.mmio_update_ra = 0;
> >  	vcpu->arch.mmio_host_swabbed = 0;
> >  
> > -	switch (get_op(inst)) {
> > -	case 31:
> > -		switch (get_xop(inst)) {
> > -		case OP_31_XOP_LWZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LWZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LBZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			break;
> > +	emulated = EMULATE_FAIL;
> > +	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
> > +	vcpu->arch.regs.ccr = vcpu->arch.cr;
> > +	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
> > +		int type = op.type & INSTR_TYPE_MASK;
> > +		int size = GETSIZE(op.type);
> >  
> > -		case OP_31_XOP_LBZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +		switch (type) {
> > +		case LOAD:  {
> > +			int instr_byte_swap = op.type & BYTEREV;
> >  
> > -		case OP_31_XOP_STDX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			break;
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Any reason we can't just update RA here immediately?
> 
> >  
> > -		case OP_31_XOP_STDUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STWX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STWUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STBX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STBUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAUX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +			if (op.type & SIGNEXT)
> > +				emulated = kvmppc_handle_loads(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> > +			else
> > +				emulated = kvmppc_handle_load(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> >  
> > -		case OP_31_XOP_LHZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> >  			break;
> > -
> > -		case OP_31_XOP_LHZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STHX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STHUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_DCBST:
> > -		case OP_31_XOP_DCBF:
> > -		case OP_31_XOP_DCBI:
> > +		}
> > +		case STORE:
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Same comment again about updating RA.
> 
> > +
> > +			/* if need byte reverse, op.val has been reverted by
> 
> "reversed" rather than "reverted".  "Reverted" means put back to a
> former state.
I will correct that.

> 
> Paul.

Thanks,
- Simon

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_
@ 2018-05-03  9:07       ` Simon Guo
  0 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:07 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:03:46PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:40PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs non-SIMD LOAD/STORE instruction MMIO emulation
> > with analyse_intr() input. It utilizes the BYTEREV/UPDATE/SIGNEXT
> > properties exported by analyse_instr() and invokes
> > kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
> > 
> > It also move CACHEOP type handling into the skeleton.
> > 
> > instruction_type within sstep.h is renamed to avoid conflict with
> > kvm_ppc.h.
> 
> I'd prefer to change the one in kvm_ppc.h, especially since that one
> isn't exactly about the type of instruction, but more about the type
> of interrupt led to us trying to fetch the instruction.
> 
Agreed.

> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> > ---
> >  arch/powerpc/include/asm/sstep.h     |   2 +-
> >  arch/powerpc/kvm/emulate_loadstore.c | 282 +++++++----------------------------
> >  2 files changed, 51 insertions(+), 233 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
> > index ab9d849..0a1a312 100644
> > --- a/arch/powerpc/include/asm/sstep.h
> > +++ b/arch/powerpc/include/asm/sstep.h
> > @@ -23,7 +23,7 @@
> >  #define IS_RFID(instr)		(((instr) & 0xfc0007fe) = 0x4c000024)
> >  #define IS_RFI(instr)		(((instr) & 0xfc0007fe) = 0x4c000064)
> >  
> > -enum instruction_type {
> > +enum analyse_instruction_type {
> >  	COMPUTE,		/* arith/logical/CR op, etc. */
> >  	LOAD,			/* load and store types need to be contiguous */
> >  	LOAD_MULTI,
> > diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> > index 90b9692..aaaf872 100644
> > --- a/arch/powerpc/kvm/emulate_loadstore.c
> > +++ b/arch/powerpc/kvm/emulate_loadstore.c
> > @@ -31,9 +31,12 @@
> >  #include <asm/kvm_ppc.h>
> >  #include <asm/disassemble.h>
> >  #include <asm/ppc-opcode.h>
> > +#include <asm/sstep.h>
> >  #include "timing.h"
> >  #include "trace.h"
> >  
> > +int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
> > +		  unsigned int instr);
> 
> You shouldn't need this prototype here, since there's one in sstep.h.
> 
Yes.

> >  #ifdef CONFIG_PPC_FPU
> >  static bool kvmppc_check_fp_disabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -84,8 +87,9 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	struct kvm_run *run = vcpu->run;
> >  	u32 inst;
> >  	int ra, rs, rt;
> > -	enum emulation_result emulated;
> > +	enum emulation_result emulated = EMULATE_FAIL;
> >  	int advance = 1;
> > +	struct instruction_op op;
> >  
> >  	/* this default type might be overwritten by subcategories */
> >  	kvmppc_set_exit_type(vcpu, EMULATED_INST_EXITS);
> > @@ -114,144 +118,64 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.mmio_update_ra = 0;
> >  	vcpu->arch.mmio_host_swabbed = 0;
> >  
> > -	switch (get_op(inst)) {
> > -	case 31:
> > -		switch (get_xop(inst)) {
> > -		case OP_31_XOP_LWZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LWZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LBZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			break;
> > +	emulated = EMULATE_FAIL;
> > +	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
> > +	vcpu->arch.regs.ccr = vcpu->arch.cr;
> > +	if (analyse_instr(&op, &vcpu->arch.regs, inst) = 0) {
> > +		int type = op.type & INSTR_TYPE_MASK;
> > +		int size = GETSIZE(op.type);
> >  
> > -		case OP_31_XOP_LBZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +		switch (type) {
> > +		case LOAD:  {
> > +			int instr_byte_swap = op.type & BYTEREV;
> >  
> > -		case OP_31_XOP_STDX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			break;
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Any reason we can't just update RA here immediately?
> 
> >  
> > -		case OP_31_XOP_STDUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 8, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STWX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STWUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 4, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STBX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STBUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 1, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_LHAUX:
> > -			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > +			if (op.type & SIGNEXT)
> > +				emulated = kvmppc_handle_loads(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> > +			else
> > +				emulated = kvmppc_handle_load(run, vcpu,
> > +						op.reg, size, !instr_byte_swap);
> >  
> > -		case OP_31_XOP_LHZX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> >  			break;
> > -
> > -		case OP_31_XOP_LHZUX:
> > -			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_STHX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			break;
> > -
> > -		case OP_31_XOP_STHUX:
> > -			emulated = kvmppc_handle_store(run, vcpu,
> > -					kvmppc_get_gpr(vcpu, rs), 2, 1);
> > -			kvmppc_set_gpr(vcpu, ra, vcpu->arch.vaddr_accessed);
> > -			break;
> > -
> > -		case OP_31_XOP_DCBST:
> > -		case OP_31_XOP_DCBF:
> > -		case OP_31_XOP_DCBI:
> > +		}
> > +		case STORE:
> > +			if (op.type & UPDATE) {
> > +				vcpu->arch.mmio_ra = op.update_reg;
> > +				vcpu->arch.mmio_update_ra = 1;
> > +			}
> 
> Same comment again about updating RA.
> 
> > +
> > +			/* if need byte reverse, op.val has been reverted by
> 
> "reversed" rather than "reverted".  "Reverted" means put back to a
> former state.
I will correct that.

> 
> Paul.

Thanks,
- Simon
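
Paul's question about updating RA immediately comes down to writing the update
register at emulation time instead of carrying it in vcpu state until the MMIO
completes. A minimal user-space sketch of the two schemes, using hypothetical
stand-in structures (not the real kvm_vcpu_arch; field names only mirror the
thread):

```c
#include <assert.h>

/*
 * Illustrative stand-in only; the real structure is struct kvm_vcpu_arch
 * in arch/powerpc, and the field names here merely mirror the thread.
 */
struct demo_vcpu_arch {
	unsigned long gpr[32];
	unsigned long vaddr_accessed;
	int mmio_ra;		/* bookkeeping for the deferred scheme */
	int mmio_update_ra;
};

/* The patch's scheme: remember RA now, write it back after MMIO completes. */
static void defer_ra_update(struct demo_vcpu_arch *a, int update_reg)
{
	a->mmio_ra = update_reg;
	a->mmio_update_ra = 1;
}

/* Paul's suggestion: write RA immediately; no extra vcpu state needed. */
static void immediate_ra_update(struct demo_vcpu_arch *a, int update_reg)
{
	a->gpr[update_reg] = a->vaddr_accessed;
}
```

The immediate form trades two bookkeeping fields for a single GPR write at
emulation time; the deferred form only matters if the update must be visible
after the MMIO exit is resumed.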

^ permalink raw reply	[flat|nested] 111+ messages in thread

* Re: [PATCH 08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops
  2018-05-03  6:08     ` Paul Mackerras
@ 2018-05-03  9:21       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:21 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:08:17PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:41PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > Currently HV KVM saves the math regs (FP/VEC/VSX) when trapping into the
> > host, but PR KVM only saves the math regs when the qemu task is switched
> > out of the CPU.
> > 
> > To emulate an FP/VEC/VSX load, PR KVM needs to flush the math regs first
> > so that it can then update the saved VCPU FPR/VEC/VSX area correctly.
> > 
> > This patch adds a giveup_ext() hook to the KVM ops (an empty one for HV
> > KVM) so that kvmppc_complete_mmio_load() can invoke it to flush the math
> > regs accordingly.
> > 
> > Flushing the math regs is also necessary for STORE, which will be
> > covered by a later patch in this series.
> > 
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> I don't see where you have provided a function for Book E.
> 
> I would suggest you only set the function pointer to non-NULL when the
> function is actually needed, i.e. for PR KVM.
Got it.

> 
> It seems to me that this means that emulation of FP/VMX/VSX loads is
> currently broken for PR KVM for the case where kvm_io_bus_read() is
> able to supply the data, and the emulation of FP/VMX/VSX stores is
> broken for PR KVM for all cases.  Do you agree?
> 
Yes. I think so.

> > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > index 5b875ba..7eb5507 100644
> > --- a/arch/powerpc/kvm/book3s_hv.c
> > +++ b/arch/powerpc/kvm/book3s_hv.c
> > @@ -2084,6 +2084,10 @@ static int kvmhv_set_smt_mode(struct kvm *kvm, unsigned long smt_mode,
> >  	return err;
> >  }
> >  
> > +static void kvmhv_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
> > +{
> > +}
> > +
> >  static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
> >  {
> >  	if (vpa->pinned_addr)
> > @@ -4398,6 +4402,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
> >  	.configure_mmu = kvmhv_configure_mmu,
> >  	.get_rmmu_info = kvmhv_get_rmmu_info,
> >  	.set_smt_mode = kvmhv_set_smt_mode,
> > +	.giveup_ext = kvmhv_giveup_ext,
> >  };
> >  
> >  static int kvm_init_subcore_bitmap(void)
> 
> I think HV KVM could leave this pointer as NULL, and then...
ok.

> 
> > diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> > index 17f0315..e724601 100644
> > --- a/arch/powerpc/kvm/powerpc.c
> > +++ b/arch/powerpc/kvm/powerpc.c
> > @@ -1061,6 +1061,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
> >  		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
> >  		break;
> >  	case KVM_MMIO_REG_FPR:
> > +		if (!is_kvmppc_hv_enabled(vcpu->kvm))
> > +			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> > +
> 
> This could become
> 		if (vcpu->kvm->arch.kvm_ops->giveup_ext)
> 			vcpu->kvm->arch.kvm_ops->giveup_ext(vcpu, MSR_FP);
> 
> and you wouldn't need to fix Book E explicitly.
Yes

> 
> Paul.

Thanks,
- Simon
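
The NULL-pointer guard Paul suggests lets back ends that never need the hook
simply leave it unset, instead of providing empty stubs. A user-space sketch
with hypothetical stand-in types (the real ones are struct kvmppc_ops and
struct kvm_vcpu in arch/powerpc; the MSR_FP value is shown only for
illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types for illustration only. */
struct demo_vcpu {
	int math_flushed;
};

struct demo_kvm_ops {
	/* Back ends that never keep lazy math state (e.g. HV, Book E)
	 * simply leave this NULL; only PR KVM would set it. */
	void (*giveup_ext)(struct demo_vcpu *vcpu, unsigned long msr);
};

/* PR-style hook: stands in for flushing FP/VEC/VSX into vcpu->arch. */
static void pr_giveup_ext(struct demo_vcpu *vcpu, unsigned long msr)
{
	(void)msr;
	vcpu->math_flushed = 1;
}

/* The guarded call Paul suggests: skipped wherever the hook is NULL. */
static void complete_mmio_load(struct demo_vcpu *vcpu,
			       const struct demo_kvm_ops *ops)
{
	if (ops->giveup_ext)
		ops->giveup_ext(vcpu, 1UL << 13 /* MSR_FP bit on Book3S */);
}
```

With this shape, an ops table that never initializes .giveup_ext behaves
exactly like one with an empty stub, so no per-back-end fix is needed.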


* Re: [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input
  2018-05-03  6:10     ` [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input Paul Mackerras
@ 2018-05-03  9:25       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:25 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:10:49PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:42PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs LOAD_FP/STORE_FP instruction MMIO emulation with
> > analyse_instr() input. It utilizes the FPCONV/UPDATE properties exported by
> > analyse_instr() and invokes kvmppc_handle_load(s)/kvmppc_handle_store()
> > accordingly.
> > 
> > The FP regs need to be flushed so that the right FP reg vals can be read
> > from vcpu->arch.fpr.
> 
> This only applies for store instructions; it would be clearer if you
> said that explicitly.
I will correct this message.

> 
> > 
> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> Same comment about updating RA as for the other patches.  Otherwise
> this looks fine.
> 
> Paul.

Thanks,
- Simon


* Re: [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input
  2018-05-03  6:17     ` [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input Paul Mackerras
@ 2018-05-03  9:43       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:43 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:17:15PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:43PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs LOAD_VMX/STORE_VMX instruction MMIO emulation with
> > analyse_instr() input. When emulating a store, the VMX reg needs to be
> > flushed so that the right reg val can be retrieved before writing to
> > IO MEM.
> > 
> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> This looks fine for lvx and stvx, but now we are also doing something
> for the vector element loads and stores (lvebx, stvebx, lvehx, stvehx,
> etc.) without having the logic to insert or extract the correct
> element in the vector register image.  We need either to generate an
> error for the element load/store instructions, or handle them
> correctly.
Yes. I will consider that.

> 
> > diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> > index 2dbdf9a..0bfee2f 100644
> > --- a/arch/powerpc/kvm/emulate_loadstore.c
> > +++ b/arch/powerpc/kvm/emulate_loadstore.c
> > @@ -160,6 +160,27 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  					KVM_MMIO_REG_FPR|op.reg, size, 1);
> >  			break;
> >  #endif
> > +#ifdef CONFIG_ALTIVEC
> > +		case LOAD_VMX:
> > +			if (kvmppc_check_altivec_disabled(vcpu))
> > +				return EMULATE_DONE;
> > +
> > +			/* VMX access will need to be size aligned */
> 
> This comment isn't quite right; it isn't that the address needs to be
> size-aligned, it's that the hardware forcibly aligns it.  So I would
> say something like /* Hardware enforces alignment of VMX accesses */.
> 
I will update that.

> > +			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
> > +			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
> > +
> > +			if (size == 16) {
> > +				vcpu->arch.mmio_vmx_copy_nums = 2;
> > +				emulated = kvmppc_handle_load128_by2x64(run,
> > +						vcpu, KVM_MMIO_REG_VMX|op.reg,
> > +						1);
> > +			} else if (size <= 8)
> > +				emulated = kvmppc_handle_load(run, vcpu,
> > +						KVM_MMIO_REG_VMX|op.reg,
> > +						size, 1);
> > +
> > +			break;
> > +#endif
> >  		case STORE:
> >  			if (op.type & UPDATE) {
> >  				vcpu->arch.mmio_ra = op.update_reg;
> > @@ -197,6 +218,36 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
> >  					VCPU_FPR(vcpu, op.reg), size, 1);
> >  			break;
> >  #endif
> > +#ifdef CONFIG_ALTIVEC
> > +		case STORE_VMX:
> > +			if (kvmppc_check_altivec_disabled(vcpu))
> > +				return EMULATE_DONE;
> > +
> > +			/* VMX access will need to be size aligned */
> > +			vcpu->arch.vaddr_accessed &= ~((unsigned long)size - 1);
> > +			vcpu->arch.paddr_accessed &= ~((unsigned long)size - 1);
> > +
> > +			/* if it is PR KVM, the FP/VEC/VSX registers need to
> > +			 * be flushed so that kvmppc_handle_store() can read
> > +			 * actual VMX vals from vcpu->arch.
> > +			 */
> > +			if (!is_kvmppc_hv_enabled(vcpu->kvm))
> 
> As before, I suggest just testing that the function pointer isn't
> NULL.
Got it.

Thanks,
- Simon
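
The forced alignment discussed above is just a power-of-two mask on the
low-order address bits, as in the quoted `vaddr_accessed`/`paddr_accessed`
hunk. A small stand-alone sketch of that masking (hypothetical helper name):

```c
#include <assert.h>

/*
 * Hardware forces lvx/stvx effective addresses to natural alignment by
 * ignoring the low-order address bits; the emulation mirrors that with
 * a mask.  size must be a power of two.
 */
static unsigned long vmx_align(unsigned long addr, unsigned long size)
{
	return addr & ~(size - 1);
}
```

For a 16-byte lvx, the low four bits of the effective address are simply
dropped, which is why the emulation masks both the virtual and physical
addresses before issuing the access.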


* Re: [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input
  2018-05-03  6:26     ` [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input Paul Mackerras
@ 2018-05-03  9:46       ` Simon Guo
  -1 siblings, 0 replies; 111+ messages in thread
From: Simon Guo @ 2018-05-03  9:46 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm, kvm-ppc

On Thu, May 03, 2018 at 04:26:12PM +1000, Paul Mackerras wrote:
> On Wed, Apr 25, 2018 at 07:54:44PM +0800, wei.guo.simon@gmail.com wrote:
> > From: Simon Guo <wei.guo.simon@gmail.com>
> > 
> > This patch reconstructs LOAD_VSX/STORE_VSX instruction MMIO emulation with
> > analyse_instr() input. It utilizes the VSX_FPCONV/VSX_SPLAT/SIGNEXT flags
> > exported by analyse_instr() and handles them accordingly.
> > 
> > When emulating VSX store, the VSX reg will need to be flushed so that
> > the right reg val can be retrieved before writing to IO MEM.
> > 
> > Suggested-by: Paul Mackerras <paulus@ozlabs.org>
> > Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
> 
> Looks good, except that you shouldn't need the special case for
> stxsiwx.  With size=4 and element_size=8, kvmppc_handle_vsx_store
> should just do the right thing, as far as I can see.
Yes. Let me test after update.

Thanks,
- Simon


end of thread, other threads:[~2018-05-03  9:46 UTC | newest]

Thread overview: 111+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-25 11:54 [PATCH 00/11] KVM: PPC: reconstruct mmio emulation with analyse_instr() wei.guo.simon
2018-04-25 11:54 ` wei.guo.simon
2018-04-25 11:54 ` wei.guo.simon
2018-04-25 11:54 ` [PATCH 01/11] KVM: PPC: add pt_regs into kvm_vcpu_arch and move vcpu->arch.gpr[] into it wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-27  3:47   ` kbuild test robot
2018-04-27  3:47     ` kbuild test robot
2018-04-27  3:47     ` kbuild test robot
2018-04-27 10:21     ` Simon Guo
2018-04-27 10:21       ` Simon Guo
2018-04-27 10:21       ` Simon Guo
2018-05-03  5:34   ` Paul Mackerras
2018-05-03  5:34     ` Paul Mackerras
2018-05-03  5:34     ` Paul Mackerras
2018-05-03  7:43     ` Simon Guo
2018-05-03  7:43       ` Simon Guo
2018-05-03  7:43       ` Simon Guo
2018-04-25 11:54 ` [PATCH 02/11] KVM: PPC: mov nip/ctr/lr/xer registers to pt_regs in kvm_vcpu_arch wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-05-03  5:46   ` Paul Mackerras
2018-05-03  5:46     ` Paul Mackerras
2018-05-03  5:46     ` Paul Mackerras
2018-05-03  7:51     ` Simon Guo
2018-05-03  7:51       ` Simon Guo
2018-05-03  7:51       ` Simon Guo
2018-04-25 11:54 ` [PATCH 03/11] KVM: PPC: Fix a mmio_host_swabbed uninitialized usage issue when VMX store wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-05-03  5:48   ` Paul Mackerras
2018-05-03  5:48     ` Paul Mackerras
2018-05-03  5:48     ` Paul Mackerras
2018-05-03  7:52     ` Simon Guo
2018-05-03  7:52       ` Simon Guo
2018-05-03  7:52       ` Simon Guo
2018-04-25 11:54 ` [PATCH 04/11] KVM: PPC: fix incorrect element_size for stxsiwx in analyse_instr wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-05-03  5:50   ` Paul Mackerras
2018-05-03  5:50     ` Paul Mackerras
2018-05-03  5:50     ` Paul Mackerras
2018-05-03  9:05     ` Simon Guo
2018-05-03  9:05       ` Simon Guo
2018-05-03  9:05       ` Simon Guo
2018-04-25 11:54 ` [PATCH 05/11] KVM: PPC: add GPR RA update skeleton for MMIO emulation wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-04-25 11:54   ` wei.guo.simon
2018-05-03  5:58   ` Paul Mackerras
2018-05-03  5:58     ` Paul Mackerras
2018-05-03  5:58     ` Paul Mackerras
2018-05-03  8:37     ` Simon Guo
2018-05-03  8:37       ` Simon Guo
2018-05-03  8:37       ` Simon Guo
2018-04-25 11:54 ` [PATCH 06/11] KVM: PPC: add KVMPPC_VSX_COPY_WORD_LOAD_DUMP type support for mmio emulation wei.guo.simon
2018-05-03  5:59   ` Paul Mackerras
2018-04-25 11:54 ` [PATCH 07/11] KVM: PPC: reconstruct non-SIMD LOAD/STORE instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-03  6:03   ` Paul Mackerras
2018-05-03  9:07     ` Simon Guo
2018-04-25 11:54 ` [PATCH 08/11] KVM: PPC: add giveup_ext() hook for PPC KVM ops wei.guo.simon
2018-05-03  6:08   ` Paul Mackerras
2018-05-03  9:21     ` Simon Guo
2018-04-25 11:54 ` [PATCH 09/11] KVM: PPC: reconstruct LOAD_FP/STORE_FP instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-03  6:10   ` Paul Mackerras
2018-05-03  9:25     ` Simon Guo
2018-04-25 11:54 ` [PATCH 10/11] KVM: PPC: reconstruct LOAD_VMX/STORE_VMX instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-03  6:17   ` Paul Mackerras
2018-05-03  9:43     ` Simon Guo
2018-04-25 11:54 ` [PATCH 11/11] KVM: PPC: reconstruct LOAD_VSX/STORE_VSX instruction mmio emulation with analyse_intr() input wei.guo.simon
2018-05-03  6:26   ` Paul Mackerras
2018-05-03  9:46     ` Simon Guo
2018-05-03  5:31 ` [PATCH 00/11] KVM: PPC: reconstruct mmio emulation with analyse_instr() Paul Mackerras
2018-05-03  7:41   ` Simon Guo
