From: Paul Mackerras <paulus@samba.org>
To: Alexander Graf <agraf@suse.de>, kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [RFC PATCH 5/5] KVM: PPC: Book3S: Make kvmppc_handle_load/store handle any load or store
Date: Sat, 19 Jul 2014 20:14:32 +1000	[thread overview]
Message-ID: <1405764872-8744-6-git-send-email-paulus@samba.org> (raw)
In-Reply-To: <1405764872-8744-1-git-send-email-paulus@samba.org>

At present, kvmppc_handle_load and kvmppc_handle_store only handle
emulated MMIO loads and stores.  This extends them so that they can
handle loads and stores to ordinary guest memory as well.  This is so
that kvmppc_emulate_instruction can be used to emulate loads and stores
in cases other than when an attempt by the CPU to execute the
instruction has resulted in an interrupt.

To avoid having to look up the translation for the effective address
again in kvmppc_handle_load/store when the caller of kvmppc_emulate_mmio
has already done it, we pass the translation down in a new
struct kvmppc_translated_address, which becomes an additional argument
to kvmppc_emulate_mmio() and kvmppc_emulate_instruction().  This also
enables us to check that the guest hasn't replaced a load with a store
instruction.
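
For illustration only (not part of the patch), this is roughly how a
caller that has already translated the effective address fills in the
structure and hands it down; the field names and the call match the
book3s_pr.c hunk below, and error handling is omitted:

	struct kvmppc_translated_address ta;

	ta.eaddr    = pte.eaddr;	/* guest effective address */
	ta.raddr    = pte.raddr;	/* translated real (guest physical) address */
	ta.is_store = iswrite;		/* lets the handlers detect a load/store mismatch */
	r = kvmppc_emulate_mmio(run, vcpu, &ta);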

This also makes the register updates for the paired-single FPU
registers on emulated MMIO accesses match what is done for accesses
to normal memory.

The new code for accessing normal guest memory uses kvmppc_ld and kvmppc_st,
which call kvmppc_xlate, which is only defined for Book 3S.  For Book E,
kvmppc_handle_load/store still only work for emulated MMIO.
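
As a rough sketch of that fallback (mirroring the kvmppc_handle_load()
hunk in powerpc.c below): when no matching translation is passed in,
the handler translates the address itself and only continues to the
MMIO path when the access really does hit emulated MMIO:

	if (!ta || ea != ta->eaddr) {
		unsigned long addr = ea;
		u8 buf[8];
		int r;

		r = kvmppc_ld(vcpu, &addr, bytes, buf, true);
		if (r != EMULATE_DO_MMIO) {
			if (r >= 0)	/* plain guest memory: update the register */
				kvmppc_complete_load(vcpu, buf, bytes, rt,
						is_bigendian, sign_extend);
			return r;
		}
		local_ta.eaddr = ea;	/* remember the translation that */
		local_ta.raddr = addr;	/* kvmppc_ld() filled in */
		local_ta.is_store = false;
		ta = &local_ta;		/* and carry on to the MMIO path */
	}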

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/include/asm/kvm_host.h      |   2 -
 arch/powerpc/include/asm/kvm_ppc.h       |  16 +++-
 arch/powerpc/kvm/Makefile                |   1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |   8 +-
 arch/powerpc/kvm/book3s_paired_singles.c | 121 ++++++++--------------------
 arch/powerpc/kvm/book3s_pr.c             |  12 +--
 arch/powerpc/kvm/booke.c                 |   9 ++-
 arch/powerpc/kvm/emulate.c               |   9 +--
 arch/powerpc/kvm/powerpc.c               | 131 +++++++++++++++++++++++--------
 9 files changed, 167 insertions(+), 142 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f3ac93..7c1b695 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -581,8 +581,6 @@ struct kvm_vcpu_arch {
 	/* hardware visible debug registers when in guest state */
 	struct debug_reg shadow_dbg_reg;
 #endif
-	gpa_t paddr_accessed;
-	gva_t vaddr_accessed;
 	pgd_t *pgdir;
 
 	u8 io_gpr; /* GPR used as IO source/target */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 9318cf3..fc9cfcd 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -47,21 +47,33 @@ enum emulation_result {
 	EMULATE_EXIT_USER,    /* emulation requires exit to user-space */
 };
 
+struct kvmppc_translated_address {
+	ulong eaddr;
+	ulong raddr;
+	bool  is_store;
+};
+
 extern int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
 extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
 extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
 extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			      unsigned long ea,
+			      struct kvmppc_translated_address *ta,
                               unsigned int rt, unsigned int bytes,
 			      int is_default_endian, int sign_extend);
 extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			       unsigned long ea,
+			       struct kvmppc_translated_address *ta,
 			       u64 val, unsigned int bytes,
 			       int is_default_endian);
 
 extern int kvmppc_emulate_instruction(struct kvm_run *run,
-                                      struct kvm_vcpu *vcpu);
-extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
+                                      struct kvm_vcpu *vcpu,
+				      struct kvmppc_translated_address *ta);
+extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			       struct kvmppc_translated_address *ta);
 extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
 extern u32 kvmppc_get_dec(struct kvm_vcpu *vcpu, u64 tb);
 extern void kvmppc_decrementer_func(unsigned long data);
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index ce569b6..d5f1cd4 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -102,6 +102,7 @@ kvm-book3s_64-module-objs += \
 	$(KVM)/eventfd.o \
 	powerpc.o \
 	emulate.o \
+	fpu.o \
 	book3s.o \
 	book3s_64_vio.o \
 	book3s_rtas.o \
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index b0c2514..eda12cd 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -533,6 +533,7 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	int ret;
 	u32 last_inst;
 	unsigned long srr0 = kvmppc_get_pc(vcpu);
+	struct kvmppc_translated_address ta;
 
 	/* We try to load the last instruction.  We don't let
 	 * emulate_instruction do it as it doesn't check what
@@ -574,9 +575,10 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * a certain extent but we'll ignore it for now.
 	 */
 
-	vcpu->arch.paddr_accessed = gpa;
-	vcpu->arch.vaddr_accessed = ea;
-	return kvmppc_emulate_mmio(run, vcpu);
+	ta.eaddr = ea;
+	ta.raddr = gpa;
+	ta.is_store = !!is_store;
+	return kvmppc_emulate_mmio(run, vcpu, &ta);
 }
 
 int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
diff --git a/arch/powerpc/kvm/book3s_paired_singles.c b/arch/powerpc/kvm/book3s_paired_singles.c
index a5bde19..8ad0b33 100644
--- a/arch/powerpc/kvm/book3s_paired_singles.c
+++ b/arch/powerpc/kvm/book3s_paired_singles.c
@@ -183,132 +183,80 @@ static void kvmppc_inject_pf(struct kvm_vcpu *vcpu, ulong eaddr, bool is_store)
 static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				   int rs, ulong addr, int ls_type)
 {
-	int emulated = EMULATE_FAIL;
-	int r;
-	char tmp[8];
+	int emulated;
 	int len = sizeof(u32);
 
 	if (ls_type == FPU_LS_DOUBLE)
 		len = sizeof(u64);
 
 	/* read from memory */
-	r = kvmppc_ld(vcpu, &addr, len, tmp, true);
-	vcpu->arch.paddr_accessed = addr;
-
-	if (r < 0) {
+	emulated = kvmppc_handle_load(run, vcpu, addr, NULL,
+				      KVM_MMIO_REG_FPR | rs, len, 1, 0);
+	if (emulated < 0) {
 		kvmppc_inject_pf(vcpu, addr, false);
-		goto done_load;
-	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
-					      len, 1, 0);
-		goto done_load;
-	}
-
-	emulated = EMULATE_DONE;
-
-	/* put in registers */
-	switch (ls_type) {
-	case FPU_LS_SINGLE:
-		kvm_cvt_fd((u32*)tmp, &VCPU_FPR(vcpu, rs));
-		vcpu->arch.qpr[rs] = *((u32*)tmp);
-		break;
-	case FPU_LS_DOUBLE:
-		VCPU_FPR(vcpu, rs) = *((u64*)tmp);
-		break;
+		emulated = EMULATE_FAIL;
 	}
 
-	dprintk(KERN_INFO "KVM: FPR_LD [0x%llx] at 0x%lx (%d)\n", *(u64*)tmp,
-			  addr, len);
-
-done_load:
 	return emulated;
 }
 
 static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				    int rs, ulong addr, int ls_type)
 {
-	int emulated = EMULATE_FAIL;
-	int r;
-	char tmp[8];
+	int emulated;
+	u32 tmp;
 	u64 val;
 	int len;
 
 	switch (ls_type) {
 	case FPU_LS_SINGLE:
-		kvm_cvt_df(&VCPU_FPR(vcpu, rs), (u32*)tmp);
-		val = *((u32*)tmp);
+		kvm_cvt_df(&VCPU_FPR(vcpu, rs), &tmp);
+		val = tmp;
 		len = sizeof(u32);
 		break;
 	case FPU_LS_SINGLE_LOW:
-		*((u32*)tmp) = VCPU_FPR(vcpu, rs);
 		val = VCPU_FPR(vcpu, rs) & 0xffffffff;
 		len = sizeof(u32);
 		break;
 	case FPU_LS_DOUBLE:
-		*((u64*)tmp) = VCPU_FPR(vcpu, rs);
 		val = VCPU_FPR(vcpu, rs);
 		len = sizeof(u64);
 		break;
 	default:
-		val = 0;
-		len = 0;
+		return EMULATE_DONE;
 	}
 
-	r = kvmppc_st(vcpu, &addr, len, tmp, true);
-	vcpu->arch.paddr_accessed = addr;
-	if (r < 0) {
+	emulated = kvmppc_handle_store(run, vcpu, addr, NULL, val, len, 1);
+	if (emulated < 0) {
 		kvmppc_inject_pf(vcpu, addr, true);
-	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_store(run, vcpu, val, len, 1);
-	} else {
-		emulated = EMULATE_DONE;
+		emulated = EMULATE_FAIL;
 	}
 
-	dprintk(KERN_INFO "KVM: FPR_ST [0x%llx] at 0x%lx (%d)\n",
-			  val, addr, len);
-
 	return emulated;
 }
 
 static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				   int rs, ulong addr, bool w, int i)
 {
-	int emulated = EMULATE_FAIL;
-	int r;
-	float one = 1.0;
-	u32 tmp[2];
+	int emulated;
 
 	/* read from memory */
 	if (w) {
-		r = kvmppc_ld(vcpu, &addr, sizeof(u32), tmp, true);
-		memcpy(&tmp[1], &one, sizeof(u32));
+		emulated = kvmppc_handle_load(run, vcpu, addr, NULL,
+					      KVM_MMIO_REG_FPR | rs, 4, 1, 0);
+		
 	} else {
-		r = kvmppc_ld(vcpu, &addr, sizeof(u32) * 2, tmp, true);
+		emulated = kvmppc_handle_load(run, vcpu, addr, NULL,
+					      KVM_MMIO_REG_FQPR | rs, 8, 1, 0);
 	}
-	vcpu->arch.paddr_accessed = addr;
-	if (r < 0) {
+	if (emulated < 0) {
 		kvmppc_inject_pf(vcpu, addr, false);
-		goto done_load;
-	} else if ((r == EMULATE_DO_MMIO) && w) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
-					      4, 1, 0);
-		vcpu->arch.qpr[rs] = tmp[1];
-		goto done_load;
-	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FQPR | rs,
-					      8, 1, 0);
+		emulated = EMULATE_FAIL;
 		goto done_load;
 	}
 
 	emulated = EMULATE_DONE;
 
-	/* put in registers */
-	kvm_cvt_fd(&tmp[0], &VCPU_FPR(vcpu, rs));
-	vcpu->arch.qpr[rs] = tmp[1];
-
-	dprintk(KERN_INFO "KVM: PSQ_LD [0x%x, 0x%x] at 0x%lx (%d)\n", tmp[0],
-			  tmp[1], addr, w ? 4 : 8);
-
 done_load:
 	return emulated;
 }
@@ -316,29 +264,24 @@ done_load:
 static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				    int rs, ulong addr, bool w, int i)
 {
-	int emulated = EMULATE_FAIL;
-	int r;
+	int emulated;
 	u32 tmp[2];
-	int len = w ? sizeof(u32) : sizeof(u64);
 
 	kvm_cvt_df(&VCPU_FPR(vcpu, rs), &tmp[0]);
 	tmp[1] = vcpu->arch.qpr[rs];
 
-	r = kvmppc_st(vcpu, &addr, len, tmp, true);
-	vcpu->arch.paddr_accessed = addr;
-	if (r < 0) {
-		kvmppc_inject_pf(vcpu, addr, true);
-	} else if ((r == EMULATE_DO_MMIO) && w) {
-		emulated = kvmppc_handle_store(run, vcpu, tmp[0], 4, 1);
-	} else if (r == EMULATE_DO_MMIO) {
-		u64 val = ((u64)tmp[0] << 32) | tmp[1];
-		emulated = kvmppc_handle_store(run, vcpu, val, 8, 1);
+	if (w) {
+		emulated = kvmppc_handle_store(run, vcpu, addr, NULL,
+					       tmp[0], 4, 1);
 	} else {
-		emulated = EMULATE_DONE;
+		u64 val = ((u64)tmp[0] << 32) | tmp[1];
+		emulated = kvmppc_handle_store(run, vcpu, addr, NULL,
+					       val, 8, 1);
+	}
+	if (emulated < 0) {
+		kvmppc_inject_pf(vcpu, addr, true);
+		emulated = EMULATE_FAIL;
 	}
-
-	dprintk(KERN_INFO "KVM: PSQ_ST [0x%x, 0x%x] at 0x%lx (%d)\n",
-			  tmp[0], tmp[1], addr, len);
 
 	return emulated;
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 29906af..98aa40a 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -539,6 +539,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	bool dr = (kvmppc_get_msr(vcpu) & MSR_DR) ? true : false;
 	bool ir = (kvmppc_get_msr(vcpu) & MSR_IR) ? true : false;
 	u64 vsid;
+	struct kvmppc_translated_address ta;
 
 	relocated = data ? dr : ir;
 	if (data && (vcpu->arch.fault_dsisr & DSISR_ISSTORE))
@@ -633,9 +634,10 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	} else {
 		/* MMIO */
 		vcpu->stat.mmio_exits++;
-		vcpu->arch.paddr_accessed = pte.raddr;
-		vcpu->arch.vaddr_accessed = pte.eaddr;
-		r = kvmppc_emulate_mmio(run, vcpu);
+		ta.eaddr = pte.eaddr;
+		ta.raddr = pte.raddr;
+		ta.is_store = iswrite;
+		r = kvmppc_emulate_mmio(run, vcpu, &ta);
 		if ( r == RESUME_HOST_NV )
 			r = RESUME_HOST;
 	}
@@ -856,7 +858,7 @@ static void kvmppc_emulate_fac(struct kvm_vcpu *vcpu, ulong fac)
 	enum emulation_result er = EMULATE_FAIL;
 
 	if (!(kvmppc_get_msr(vcpu) & MSR_PR))
-		er = kvmppc_emulate_instruction(vcpu->run, vcpu);
+		er = kvmppc_emulate_instruction(vcpu->run, vcpu, NULL);
 
 	if ((er != EMULATE_DONE) && (er != EMULATE_AGAIN)) {
 		/* Couldn't emulate, trigger interrupt in guest */
@@ -1071,7 +1073,7 @@ program_interrupt:
 		}
 
 		vcpu->stat.emulated_inst_exits++;
-		er = kvmppc_emulate_instruction(run, vcpu);
+		er = kvmppc_emulate_instruction(run, vcpu, NULL);
 		switch (er) {
 		case EMULATE_DONE:
 			r = RESUME_GUEST_NV;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index e62d09e..e5740db 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -695,7 +695,7 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 
-	er = kvmppc_emulate_instruction(run, vcpu);
+	er = kvmppc_emulate_instruction(run, vcpu, NULL);
 	switch (er) {
 	case EMULATE_DONE:
 		/* don't overwrite subtypes, just account kvm_stats */
@@ -1068,9 +1068,10 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		} else {
 			/* Guest has mapped and accessed a page which is not
 			 * actually RAM. */
-			vcpu->arch.paddr_accessed = gpaddr;
-			vcpu->arch.vaddr_accessed = eaddr;
-			r = kvmppc_emulate_mmio(run, vcpu);
+			ta.eaddr = eaddr;
+			ta.raddr = gpaddr;
+			ta.is_store = true;
+			r = kvmppc_emulate_mmio(run, vcpu, &ta);
 			kvmppc_account_exit(vcpu, MMIO_EXITS);
 		}
 
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 0e66230..a5dfd00 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -262,7 +262,8 @@ static enum emulation_result deliver_interrupt(struct kvm_vcpu *vcpu,
  * stmw
  *
  */
-int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			       struct kvmppc_translated_address *ta)
 {
 	u32 inst = kvmppc_get_last_inst(vcpu);
 	enum emulation_result emulated = EMULATE_DONE;
@@ -291,8 +292,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		break;
 
 	case LOAD:
-		/* address already in vcpu->arch.paddr_accessed */
-		emulated = kvmppc_handle_load(run, vcpu, op.reg,
+		emulated = kvmppc_handle_load(run, vcpu, op.ea, ta, op.reg,
 					      GETSIZE(op.type),
 					      !(op.type & BYTEREV),
 					      !!(op.type & SIGNEXT));
@@ -301,8 +301,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		break;
 
 	case STORE:
-		/* address already in vcpu->arch.paddr_accessed */
-		emulated = kvmppc_handle_store(run, vcpu, op.val,
+		emulated = kvmppc_handle_store(run, vcpu, op.ea, ta, op.val,
 					       GETSIZE(op.type), 1);
 		if (op.type & UPDATE)
 			kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 7e57ea9..d31b525 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -33,6 +33,7 @@
 #include <asm/tlbflush.h>
 #include <asm/cputhreads.h>
 #include <asm/irqflags.h>
+#include <asm/kvm_fpu.h>
 #include "timing.h"
 #include "irq.h"
 #include "../mm/mmu_decl.h"
@@ -268,12 +269,13 @@ out:
 }
 EXPORT_SYMBOL_GPL(kvmppc_sanity_check);
 
-int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			struct kvmppc_translated_address *ta)
 {
 	enum emulation_result er;
 	int r;
 
-	er = kvmppc_emulate_instruction(run, vcpu);
+	er = kvmppc_emulate_instruction(run, vcpu, ta);
 	switch (er) {
 	case EMULATE_DONE:
 		/* Future optimization: only reload non-volatiles if they were
@@ -662,34 +664,36 @@ static void kvmppc_complete_dcr_load(struct kvm_vcpu *vcpu,
 	kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, run->dcr.data);
 }
 
-static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
-                                      struct kvm_run *run)
+static void kvmppc_complete_load(struct kvm_vcpu *vcpu,
+				 u8 *data, int len, unsigned int io_gpr,
+				 bool is_bigendian, bool sign_extend)
 {
 	u64 uninitialized_var(gpr);
+	u32 val32;
 
-	if (run->mmio.len > sizeof(gpr)) {
-		printk(KERN_ERR "bad MMIO length: %d\n", run->mmio.len);
+	if (len > sizeof(gpr)) {
+		printk(KERN_ERR "bad MMIO length: %d\n", len);
 		return;
 	}
 
-	if (vcpu->arch.mmio_is_bigendian) {
-		switch (run->mmio.len) {
-		case 8: gpr = *(u64 *)run->mmio.data; break;
-		case 4: gpr = *(u32 *)run->mmio.data; break;
-		case 2: gpr = *(u16 *)run->mmio.data; break;
-		case 1: gpr = *(u8 *)run->mmio.data; break;
+	if (is_bigendian) {
+		switch (len) {
+		case 8: gpr = *(u64 *)data; break;
+		case 4: gpr = *(u32 *)data; break;
+		case 2: gpr = *(u16 *)data; break;
+		case 1: gpr = *(u8 *)data; break;
 		}
 	} else {
 		/* Convert BE data from userland back to LE. */
-		switch (run->mmio.len) {
-		case 4: gpr = ld_le32((u32 *)run->mmio.data); break;
-		case 2: gpr = ld_le16((u16 *)run->mmio.data); break;
-		case 1: gpr = *(u8 *)run->mmio.data; break;
+		switch (len) {
+		case 4: gpr = ld_le32((u32 *)data); break;
+		case 2: gpr = ld_le16((u16 *)data); break;
+		case 1: gpr = *(u8 *)data; break;
 		}
 	}
 
 	if (vcpu->arch.mmio_sign_extend) {
-		switch (run->mmio.len) {
+		switch (len) {
 #ifdef CONFIG_PPC64
 		case 4:
 			gpr = (s64)(s32)gpr;
@@ -704,22 +708,31 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		}
 	}
 
-	kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
-
-	switch (vcpu->arch.io_gpr & KVM_MMIO_REG_EXT_MASK) {
+	switch (io_gpr & KVM_MMIO_REG_EXT_MASK) {
 	case KVM_MMIO_REG_GPR:
-		kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);
+		kvmppc_set_gpr(vcpu, io_gpr, gpr);
 		break;
 	case KVM_MMIO_REG_FPR:
-		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
+		if (len == 4) {
+			val32 = gpr;
+			kvm_cvt_fd(&val32, &gpr);
+		}
+		VCPU_FPR(vcpu, io_gpr & KVM_MMIO_REG_MASK) = gpr;
 		break;
 #ifdef CONFIG_PPC_BOOK3S
 	case KVM_MMIO_REG_QPR:
-		vcpu->arch.qpr[vcpu->arch.io_gpr & KVM_MMIO_REG_MASK] = gpr;
+		vcpu->arch.qpr[io_gpr & KVM_MMIO_REG_MASK] = gpr;
 		break;
 	case KVM_MMIO_REG_FQPR:
-		VCPU_FPR(vcpu, vcpu->arch.io_gpr & KVM_MMIO_REG_MASK) = gpr;
-		vcpu->arch.qpr[vcpu->arch.io_gpr & KVM_MMIO_REG_MASK] = gpr;
+		val32 = gpr >> 32;
+		kvm_cvt_fd(&val32, &gpr);
+		VCPU_FPR(vcpu, io_gpr & KVM_MMIO_REG_MASK) = gpr;
+		if (len == 4) {
+			float one = 1.0;
+			memcpy(&vcpu->arch.qpr[io_gpr & KVM_MMIO_REG_MASK],
+			       &one, sizeof(u32));
+		} else
+			vcpu->arch.qpr[io_gpr & KVM_MMIO_REG_MASK] = gpr;
 		break;
 #endif
 	default:
@@ -727,12 +740,24 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 	}
 }
 
+static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
+                                      struct kvm_run *run)
+{
+	kvmppc_complete_load(vcpu, run->mmio.data, run->mmio.len,
+			     vcpu->arch.io_gpr, vcpu->arch.mmio_is_bigendian,
+			     vcpu->arch.mmio_sign_extend);
+}
+
 int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+		       unsigned long ea, struct kvmppc_translated_address *ta,
 		       unsigned int rt, unsigned int bytes,
 		       int is_default_endian, int sign_extend)
 {
 	int idx, ret;
 	int is_bigendian;
+#ifdef CONFIG_PPC_BOOK3S
+	struct kvmppc_translated_address local_ta;
+#endif
 
 	if (kvmppc_need_byteswap(vcpu)) {
 		/* Default endianness is "little endian". */
@@ -742,12 +767,33 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		is_bigendian = is_default_endian;
 	}
 
+#ifdef CONFIG_PPC_BOOK3S
+	if (!ta || ea != ta->eaddr) {
+		unsigned long addr;
+		u8 buf[8];
+		int r;
+
+		addr = ea;
+		r = kvmppc_ld(vcpu, &addr, bytes, buf, true);
+		if (r != EMULATE_DO_MMIO) {
+			if (r >= 0)
+				kvmppc_complete_load(vcpu, buf, bytes, rt,
+						is_bigendian, sign_extend);
+			return r;
+		}
+		local_ta.eaddr = ea;
+		local_ta.raddr = addr;
+		local_ta.is_store = false;
+		ta = &local_ta;
+	}
+#endif
+
 	if (bytes > sizeof(run->mmio.data)) {
 		printk(KERN_ERR "%s: bad MMIO length: %d\n", __func__,
 		       run->mmio.len);
 	}
 
-	run->mmio.phys_addr = vcpu->arch.paddr_accessed;
+	run->mmio.phys_addr = ta->raddr;
 	run->mmio.len = bytes;
 	run->mmio.is_write = 0;
 
@@ -775,11 +821,15 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+			unsigned long ea, struct kvmppc_translated_address *ta,
 			u64 val, unsigned int bytes, int is_default_endian)
 {
 	void *data = run->mmio.data;
 	int idx, ret;
 	int is_bigendian;
+#ifdef CONFIG_PPC_BOOK3S
+	struct kvmppc_translated_address local_ta;
+#endif
 
 	if (kvmppc_need_byteswap(vcpu)) {
 		/* Default endianness is "little endian". */
@@ -794,12 +844,6 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		       run->mmio.len);
 	}
 
-	run->mmio.phys_addr = vcpu->arch.paddr_accessed;
-	run->mmio.len = bytes;
-	run->mmio.is_write = 1;
-	vcpu->mmio_needed = 1;
-	vcpu->mmio_is_write = 1;
-
 	/* Store the value at the lowest bytes in 'data'. */
 	if (is_bigendian) {
 		switch (bytes) {
@@ -817,6 +861,29 @@ int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		}
 	}
 
+#ifdef CONFIG_PPC_BOOK3S
+	if (!ta || ea != ta->eaddr || !ta->is_store) {
+		unsigned long addr;
+		int r;
+
+		addr = ea;
+		r = kvmppc_st(vcpu, &addr, bytes, data, true);
+		if (r != EMULATE_DO_MMIO)
+			return r;
+
+		local_ta.eaddr = ea;
+		local_ta.raddr = addr;
+		local_ta.is_store = true;
+		ta = &local_ta;
+	}
+#endif
+
+	run->mmio.phys_addr = ta->raddr;
+	run->mmio.len = bytes;
+	run->mmio.is_write = 1;
+	vcpu->mmio_needed = 1;
+	vcpu->mmio_is_write = 1;
+
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 
 	ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, run->mmio.phys_addr,
-- 
2.0.1


Thread overview: 20+ messages
2014-07-19 10:14 [RFC PATCH 0/5] Improve PPC instruction emulation Paul Mackerras
2014-07-19 10:14 ` [RFC PATCH 1/5] powerpc: Split out instruction analysis part of emulate_step() Paul Mackerras
2014-07-28 12:01   ` Alexander Graf
2014-07-19 10:14 ` [RFC PATCH 2/5] powerpc: Implement emulation of string loads and stores Paul Mackerras
2014-07-19 10:14 ` [RFC PATCH 3/5] KVM: PPC: Use pt_regs struct for integer registers in struct vcpu_arch Paul Mackerras
2014-07-19 10:14 ` [RFC PATCH 4/5] KVM: PPC: Use analyse_instr() in kvmppc_emulate_instruction() Paul Mackerras
2014-07-28 12:12   ` Alexander Graf
2014-07-19 10:14 ` [RFC PATCH 5/5] KVM: PPC: Book3S: Make kvmppc_handle_load/store handle any load or store Paul Mackerras [this message]
2014-07-28 12:14   ` Alexander Graf
2014-07-28 11:46 ` [RFC PATCH 0/5] Improve PPC instruction emulation Alexander Graf
