* [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
From: Tianjia Zhang @ 2020-04-27  4:35 UTC
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current KVM code, 'kvm_run' is already a member of the
'kvm_vcpu' structure. For historical reasons, many KVM-related functions
still take both a 'kvm_run' and a 'kvm_vcpu' parameter. This series
performs a unified cleanup of these remaining redundant parameters.

This series completely cleans up the arm64, mips, ppc, and s390
architectures (x86 has no such redundant code). Because the changes are
extensive, a separate patch is provided for each platform. On ppc,
'vcpu_arch' also carries a redundant 'kvm_run' pointer, which is
removed in a separate patch.
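
As an illustration of the conversion pattern used throughout the series
(a minimal sketch; the handler name 'handle_foo' is made up and not
part of these patches):

	/* Before: the caller passes both, although run == vcpu->run. */
	static int handle_foo(struct kvm_vcpu *vcpu, struct kvm_run *run)
	{
		run->exit_reason = KVM_EXIT_INTR;
		return 0;
	}

	/* After: the redundant parameter is dropped; where the run
	 * structure is still needed, it is derived locally from the
	 * vcpu, leaving the rest of the function body unchanged.
	 */
	static int handle_foo(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;

		run->exit_reason = KVM_EXIT_INTR;
		return 0;
	}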

---
v4 change:
  mips: fix two errors in entry.c.

v3 change:
  Keep the existing `vcpu->run` in the function body unchanged.

v2 change:
  s390: retain the original variable names and minimize the modification.

Tianjia Zhang (7):
  KVM: s390: clean up redundant 'kvm_run' parameters
  KVM: arm64: clean up redundant 'kvm_run' parameters
  KVM: PPC: Remove redundant kvm_run from vcpu_arch
  KVM: PPC: clean up redundant 'kvm_run' parameters
  KVM: PPC: clean up redundant kvm_run parameters in assembly
  KVM: MIPS: clean up redundant 'kvm_run' parameters
  KVM: MIPS: clean up redundant kvm_run parameters in assembly

 arch/arm64/include/asm/kvm_coproc.h      |  12 +--
 arch/arm64/include/asm/kvm_host.h        |  11 +--
 arch/arm64/include/asm/kvm_mmu.h         |   2 +-
 arch/arm64/kvm/handle_exit.c             |  36 +++----
 arch/arm64/kvm/sys_regs.c                |  13 ++-
 arch/mips/include/asm/kvm_host.h         |  32 +------
 arch/mips/kvm/emulate.c                  |  59 ++++--------
 arch/mips/kvm/entry.c                    |  21 ++---
 arch/mips/kvm/mips.c                     |  14 +--
 arch/mips/kvm/trap_emul.c                | 114 ++++++++++-------------
 arch/mips/kvm/vz.c                       |  26 ++----
 arch/powerpc/include/asm/kvm_book3s.h    |  16 ++--
 arch/powerpc/include/asm/kvm_host.h      |   1 -
 arch/powerpc/include/asm/kvm_ppc.h       |  27 +++---
 arch/powerpc/kvm/book3s.c                |   4 +-
 arch/powerpc/kvm/book3s.h                |   2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |  12 +--
 arch/powerpc/kvm/book3s_64_mmu_radix.c   |   4 +-
 arch/powerpc/kvm/book3s_emulate.c        |  10 +-
 arch/powerpc/kvm/book3s_hv.c             |  64 ++++++-------
 arch/powerpc/kvm/book3s_hv_nested.c      |  12 +--
 arch/powerpc/kvm/book3s_interrupts.S     |  17 ++--
 arch/powerpc/kvm/book3s_paired_singles.c |  72 +++++++-------
 arch/powerpc/kvm/book3s_pr.c             |  33 ++++---
 arch/powerpc/kvm/booke.c                 |  39 ++++----
 arch/powerpc/kvm/booke.h                 |   8 +-
 arch/powerpc/kvm/booke_emulate.c         |   2 +-
 arch/powerpc/kvm/booke_interrupts.S      |   9 +-
 arch/powerpc/kvm/bookehv_interrupts.S    |  10 +-
 arch/powerpc/kvm/e500_emulate.c          |  15 ++-
 arch/powerpc/kvm/emulate.c               |  10 +-
 arch/powerpc/kvm/emulate_loadstore.c     |  32 +++----
 arch/powerpc/kvm/powerpc.c               |  72 +++++++-------
 arch/powerpc/kvm/trace_hv.h              |   6 +-
 arch/s390/kvm/kvm-s390.c                 |  23 +++--
 virt/kvm/arm/arm.c                       |   6 +-
 virt/kvm/arm/mmio.c                      |  11 ++-
 virt/kvm/arm/mmu.c                       |   5 +-
 38 files changed, 392 insertions(+), 470 deletions(-)

-- 
2.17.1



* [PATCH v4 1/7] KVM: s390: clean up redundant 'kvm_run' parameters
From: Tianjia Zhang @ 2020-04-27  4:35 UTC
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current KVM code, 'kvm_run' is already a member of the
'kvm_vcpu' structure. For historical reasons, many KVM-related functions
still take both a 'kvm_run' and a 'kvm_vcpu' parameter. This patch
performs a unified cleanup of these remaining redundant parameters.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/s390/kvm/kvm-s390.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index e335a7e5ead7..c0d94eaa00d7 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4176,8 +4176,9 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 	return rc;
 }
 
-static void sync_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static void sync_regs_fmt2(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *kvm_run = vcpu->run;
 	struct runtime_instr_cb *riccb;
 	struct gs_cb *gscb;
 
@@ -4243,8 +4244,10 @@ static void sync_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	/* SIE will load etoken directly from SDNX and therefore kvm_run */
 }
 
-static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static void sync_regs(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *kvm_run = vcpu->run;
+
 	if (kvm_run->kvm_dirty_regs & KVM_SYNC_PREFIX)
 		kvm_s390_set_prefix(vcpu, kvm_run->s.regs.prefix);
 	if (kvm_run->kvm_dirty_regs & KVM_SYNC_CRS) {
@@ -4273,7 +4276,7 @@ static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 
 	/* Sync fmt2 only data */
 	if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) {
-		sync_regs_fmt2(vcpu, kvm_run);
+		sync_regs_fmt2(vcpu);
 	} else {
 		/*
 		 * In several places we have to modify our internal view to
@@ -4292,8 +4295,10 @@ static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	kvm_run->kvm_dirty_regs = 0;
 }
 
-static void store_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static void store_regs_fmt2(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *kvm_run = vcpu->run;
+
 	kvm_run->s.regs.todpr = vcpu->arch.sie_block->todpr;
 	kvm_run->s.regs.pp = vcpu->arch.sie_block->pp;
 	kvm_run->s.regs.gbea = vcpu->arch.sie_block->gbea;
@@ -4313,8 +4318,10 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	/* SIE will save etoken directly into SDNX and therefore kvm_run */
 }
 
-static void store_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static void store_regs(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *kvm_run = vcpu->run;
+
 	kvm_run->psw_mask = vcpu->arch.sie_block->gpsw.mask;
 	kvm_run->psw_addr = vcpu->arch.sie_block->gpsw.addr;
 	kvm_run->s.regs.prefix = kvm_s390_get_prefix(vcpu);
@@ -4333,7 +4340,7 @@ static void store_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	current->thread.fpu.fpc = vcpu->arch.host_fpregs.fpc;
 	current->thread.fpu.regs = vcpu->arch.host_fpregs.regs;
 	if (likely(!kvm_s390_pv_cpu_is_protected(vcpu)))
-		store_regs_fmt2(vcpu, kvm_run);
+		store_regs_fmt2(vcpu);
 }
 
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
@@ -4371,7 +4378,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	sync_regs(vcpu, kvm_run);
+	sync_regs(vcpu);
 	enable_cpu_timer_accounting(vcpu);
 
 	might_fault();
@@ -4393,7 +4400,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	}
 
 	disable_cpu_timer_accounting(vcpu);
-	store_regs(vcpu, kvm_run);
+	store_regs(vcpu);
 
 	kvm_sigset_deactivate(vcpu);
 
-- 
2.17.1



* [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters
From: Tianjia Zhang @ 2020-04-27  4:35 UTC
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current KVM code, 'kvm_run' is already a member of the
'kvm_vcpu' structure. For historical reasons, many KVM-related functions
still take both a 'kvm_run' and a 'kvm_vcpu' parameter. This patch
performs a unified cleanup of these remaining redundant parameters.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/include/asm/kvm_coproc.h | 12 +++++-----
 arch/arm64/include/asm/kvm_host.h   | 11 ++++-----
 arch/arm64/include/asm/kvm_mmu.h    |  2 +-
 arch/arm64/kvm/handle_exit.c        | 36 ++++++++++++++---------------
 arch/arm64/kvm/sys_regs.c           | 13 +++++------
 virt/kvm/arm/arm.c                  |  6 ++---
 virt/kvm/arm/mmio.c                 | 11 +++++----
 virt/kvm/arm/mmu.c                  |  5 ++--
 8 files changed, 46 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
index 0185ee8b8b5e..454373704b8a 100644
--- a/arch/arm64/include/asm/kvm_coproc.h
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -27,12 +27,12 @@ struct kvm_sys_reg_target_table {
 void kvm_register_target_sys_reg_table(unsigned int target,
 				       struct kvm_sys_reg_target_table *table);
 
-int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu);
+int kvm_handle_cp14_32(struct kvm_vcpu *vcpu);
+int kvm_handle_cp14_64(struct kvm_vcpu *vcpu);
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu);
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu);
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu);
 
 #define kvm_coproc_table_init kvm_sys_reg_table_init
 void kvm_sys_reg_table_init(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 32c8a675e5a4..3fab32e4948c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -481,18 +481,15 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
-int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		int exception_index);
-void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		       int exception_index);
+int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
+void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index);
 
 /* MMIO helpers */
 void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
 unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
 
-int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		 phys_addr_t fault_ipa);
+int kvm_handle_mmio_return(struct kvm_vcpu *vcpu);
+int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa);
 
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 30b0e8d6b895..2ec7b9bb25d3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -159,7 +159,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 			  phys_addr_t pa, unsigned long size, bool writable);
 
-int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
 
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index aacfc55de44c..ec3a66642ea5 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -25,7 +25,7 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
-typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
 
 static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
 {
@@ -33,7 +33,7 @@ static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
 		kvm_inject_vabt(vcpu);
 }
 
-static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int handle_hvc(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
@@ -50,7 +50,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return ret;
 }
 
-static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int handle_smc(struct kvm_vcpu *vcpu)
 {
 	/*
 	 * "If an SMC instruction executed at Non-secure EL1 is
@@ -69,7 +69,7 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
  * Guest access to FP/ASIMD registers are routed to this handler only
  * when the system doesn't support FP/ASIMD.
  */
-static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
 	return 1;
@@ -87,7 +87,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
  * world-switches and schedule other host processes until there is an
  * incoming IRQ or FIQ to the VM.
  */
-static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
@@ -109,16 +109,16 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
  * kvm_handle_guest_debug - handle a debug exception instruction
  *
  * @vcpu:	the vcpu pointer
- * @run:	access to the kvm_run structure for results
  *
  * We route all debug exceptions through the same handler. If both the
  * guest and host are using the same debug facilities it will be up to
  * userspace to re-inject the correct exception for guest delivery.
  *
- * @return: 0 (while setting run->exit_reason), -1 for error
+ * @return: 0 (while setting vcpu->run->exit_reason), -1 for error
  */
-static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	u32 hsr = kvm_vcpu_get_hsr(vcpu);
 	int ret = 0;
 
@@ -144,7 +144,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return ret;
 }
 
-static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
 {
 	u32 hsr = kvm_vcpu_get_hsr(vcpu);
 
@@ -155,7 +155,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int handle_sve(struct kvm_vcpu *vcpu)
 {
 	/* Until SVE is supported for guests: */
 	kvm_inject_undefined(vcpu);
@@ -193,7 +193,7 @@ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
  * a NOP).
  */
-static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
 	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
@@ -238,7 +238,7 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
  * KVM_EXIT_DEBUG, otherwise userspace needs to complete its
  * emulation first.
  */
-static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int handle_trap_exceptions(struct kvm_vcpu *vcpu)
 {
 	int handled;
 
@@ -253,7 +253,7 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		exit_handle_fn exit_handler;
 
 		exit_handler = kvm_get_exit_handler(vcpu);
-		handled = exit_handler(vcpu, run);
+		handled = exit_handler(vcpu);
 	}
 
 	return handled;
@@ -263,9 +263,10 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
  * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
  * proper exit to userspace.
  */
-int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		       int exception_index)
+int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 {
+	struct kvm_run *run = vcpu->run;
+
 	if (ARM_SERROR_PENDING(exception_index)) {
 		u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
 
@@ -291,7 +292,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case ARM_EXCEPTION_EL1_SERROR:
 		return 1;
 	case ARM_EXCEPTION_TRAP:
-		return handle_trap_exceptions(vcpu, run);
+		return handle_trap_exceptions(vcpu);
 	case ARM_EXCEPTION_HYP_GONE:
 		/*
 		 * EL2 has been reset to the hyp-stub. This happens when a guest
@@ -315,8 +316,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 }
 
 /* For exit types that need handling before we can be preempted */
-void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		       int exception_index)
+void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 {
 	if (ARM_SERROR_PENDING(exception_index)) {
 		if (this_cpu_has_cap(ARM64_HAS_RAS_EXTN)) {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 51db934702b6..e5a0d0d676c8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2116,7 +2116,7 @@ static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
 	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
 }
 
-int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	kvm_inject_undefined(vcpu);
 	return 1;
@@ -2295,7 +2295,7 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
 	return 1;
 }
 
-int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu)
 {
 	const struct sys_reg_desc *target_specific;
 	size_t num;
@@ -2306,7 +2306,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
 				target_specific, num);
 }
 
-int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
 {
 	const struct sys_reg_desc *target_specific;
 	size_t num;
@@ -2317,14 +2317,14 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
 				target_specific, num);
 }
 
-int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp14_64(struct kvm_vcpu *vcpu)
 {
 	return kvm_handle_cp_64(vcpu,
 				cp14_64_regs, ARRAY_SIZE(cp14_64_regs),
 				NULL, 0);
 }
 
-int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
 {
 	return kvm_handle_cp_32(vcpu,
 				cp14_regs, ARRAY_SIZE(cp14_regs),
@@ -2382,9 +2382,8 @@ static void reset_sys_reg_descs(struct kvm_vcpu *vcpu,
 /**
  * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
  * @vcpu: The VCPU pointer
- * @run:  The kvm_run struct
  */
-int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 {
 	struct sys_reg_params params;
 	unsigned long esr = kvm_vcpu_get_hsr(vcpu);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index f5390ac2165b..dbeb20804a75 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -659,7 +659,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		return ret;
 
 	if (run->exit_reason == KVM_EXIT_MMIO) {
-		ret = kvm_handle_mmio_return(vcpu, run);
+		ret = kvm_handle_mmio_return(vcpu);
 		if (ret)
 			return ret;
 	}
@@ -811,11 +811,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
 		/* Exit types that need handling before we can be preempted */
-		handle_exit_early(vcpu, run, ret);
+		handle_exit_early(vcpu, ret);
 
 		preempt_enable();
 
-		ret = handle_exit(vcpu, run, ret);
+		ret = handle_exit(vcpu, ret);
 	}
 
 	/* Tell userspace about in-kernel device output levels */
diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index aedfcff99ac5..41ef5c5dbc62 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -77,9 +77,8 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
  *			     or in-kernel IO emulation
  *
  * @vcpu: The VCPU pointer
- * @run:  The VCPU run struct containing the mmio data
  */
-int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
 {
 	unsigned long data;
 	unsigned int len;
@@ -92,6 +91,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	vcpu->mmio_needed = 0;
 
 	if (!kvm_vcpu_dabt_iswrite(vcpu)) {
+		struct kvm_run *run = vcpu->run;
+
 		len = kvm_vcpu_dabt_get_as(vcpu);
 		data = kvm_mmio_read_buf(run->mmio.data, len);
 
@@ -119,9 +120,9 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 0;
 }
 
-int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
-		 phys_addr_t fault_ipa)
+int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	struct kvm_run *run = vcpu->run;
 	unsigned long data;
 	unsigned long rt;
 	int ret;
@@ -188,7 +189,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (!is_write)
 			memcpy(run->mmio.data, data_buf, len);
 		vcpu->stat.mmio_exit_kernel++;
-		kvm_handle_mmio_return(vcpu, run);
+		kvm_handle_mmio_return(vcpu);
 		return 1;
 	}
 
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..c5dc58226b5b 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1892,7 +1892,6 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 /**
  * kvm_handle_guest_abort - handles all 2nd stage aborts
  * @vcpu:	the VCPU pointer
- * @run:	the kvm_run structure
  *
  * Any abort that gets to the host is almost guaranteed to be caused by a
  * missing second stage translation table entry, which can mean that either the
@@ -1901,7 +1900,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
  * space. The distinction is based on the IPA causing the fault and whether this
  * memory region has been registered as standard RAM by user space.
  */
-int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 {
 	unsigned long fault_status;
 	phys_addr_t fault_ipa;
@@ -1980,7 +1979,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * of the page size.
 		 */
 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
-		ret = io_mem_abort(vcpu, run, fault_ipa);
+		ret = io_mem_abort(vcpu, fault_ipa);
 		goto out_unlock;
 	}
 
-- 
2.17.1



* [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch
From: Tianjia Zhang @ 2020-04-27  4:35 UTC
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

The 'kvm_run' field already exists in the 'kvm_vcpu' structure and
points to the same structure as the 'kvm_run' copy kept in 'vcpu_arch',
so the redundant 'vcpu_arch' copy should be deleted.
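
As a sketch (not code from this patch): on ppc, both pointers name the
same structure once kvmppc_run_vcpu() has set up the vcpu, so every
remaining user of the 'vcpu_arch' copy can simply switch to 'vcpu->run':

	struct kvm_run *a = vcpu->run;           /* generic kvm_vcpu field */
	struct kvm_run *b = vcpu->arch.kvm_run;  /* ppc-private duplicate  */
	/* a == b here, hence the duplicate field can go away */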

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/powerpc/include/asm/kvm_host.h | 1 -
 arch/powerpc/kvm/book3s_hv.c        | 6 ++----
 arch/powerpc/kvm/book3s_hv_nested.c | 3 +--
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1dc63101ffe1..2745ff8faa01 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -795,7 +795,6 @@ struct kvm_vcpu_arch {
 	struct mmio_hpte_cache_entry *pgfault_cache;
 
 	struct task_struct *run_task;
-	struct kvm_run *kvm_run;
 
 	spinlock_t vpa_update_lock;
 	struct kvmppc_vpa vpa;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 93493f0cbfe8..413ea2dcb10c 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2934,7 +2934,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
 
 		ret = RESUME_GUEST;
 		if (vcpu->arch.trap)
-			ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
+			ret = kvmppc_handle_exit_hv(vcpu->run, vcpu,
 						    vcpu->arch.run_task);
 
 		vcpu->arch.ret = ret;
@@ -3920,7 +3920,6 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	spin_lock(&vc->lock);
 	vcpu->arch.ceded = 0;
 	vcpu->arch.run_task = current;
-	vcpu->arch.kvm_run = kvm_run;
 	vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
 	vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
 	vcpu->arch.busy_preempt = TB_NIL;
@@ -3973,7 +3972,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 			if (signal_pending(v->arch.run_task)) {
 				kvmppc_remove_runnable(vc, v);
 				v->stat.signal_exits++;
-				v->arch.kvm_run->exit_reason = KVM_EXIT_INTR;
+				v->run->exit_reason = KVM_EXIT_INTR;
 				v->arch.ret = -EINTR;
 				wake_up(&v->arch.cpu_run);
 			}
@@ -4049,7 +4048,6 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 	vc = vcpu->arch.vcore;
 	vcpu->arch.ceded = 0;
 	vcpu->arch.run_task = current;
-	vcpu->arch.kvm_run = kvm_run;
 	vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
 	vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
 	vcpu->arch.busy_preempt = TB_NIL;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index dc97e5be76f6..5a3987f3ebf3 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -290,8 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 			r = RESUME_HOST;
 			break;
 		}
-		r = kvmhv_run_single_vcpu(vcpu->arch.kvm_run, vcpu, hdec_exp,
-					  lpcr);
+		r = kvmhv_run_single_vcpu(vcpu->run, vcpu, hdec_exp, lpcr);
 	} while (is_kvmppc_resume_guest(r));
 
 	/* save L2 state for return */
-- 
2.17.1



* [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters
From: Tianjia Zhang @ 2020-04-27  4:35 UTC
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current KVM code, 'kvm_run' is already a member of the
'kvm_vcpu' structure. For historical reasons, many KVM-related functions
still take both a 'kvm_run' and a 'kvm_vcpu' parameter. This patch
performs a unified cleanup of these remaining redundant parameters.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/powerpc/include/asm/kvm_book3s.h    | 16 +++---
 arch/powerpc/include/asm/kvm_ppc.h       | 27 +++++----
 arch/powerpc/kvm/book3s.c                |  4 +-
 arch/powerpc/kvm/book3s.h                |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c      | 12 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c   |  4 +-
 arch/powerpc/kvm/book3s_emulate.c        | 10 ++--
 arch/powerpc/kvm/book3s_hv.c             | 60 ++++++++++----------
 arch/powerpc/kvm/book3s_hv_nested.c      | 11 ++--
 arch/powerpc/kvm/book3s_paired_singles.c | 72 ++++++++++++------------
 arch/powerpc/kvm/book3s_pr.c             | 30 +++++-----
 arch/powerpc/kvm/booke.c                 | 36 ++++++------
 arch/powerpc/kvm/booke.h                 |  8 +--
 arch/powerpc/kvm/booke_emulate.c         |  2 +-
 arch/powerpc/kvm/e500_emulate.c          | 15 +++--
 arch/powerpc/kvm/emulate.c               | 10 ++--
 arch/powerpc/kvm/emulate_loadstore.c     | 32 +++++------
 arch/powerpc/kvm/powerpc.c               | 72 ++++++++++++------------
 arch/powerpc/kvm/trace_hv.h              |  6 +-
 19 files changed, 212 insertions(+), 217 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 506e4df2d730..66dbb1f85d59 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -155,12 +155,11 @@ extern void kvmppc_mmu_unmap_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
 extern void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong eaddr, ulong seg_size);
 extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
-extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run,
-			struct kvm_vcpu *vcpu, unsigned long addr,
-			unsigned long status);
+extern int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
+			unsigned long addr, unsigned long status);
 extern long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr,
 			unsigned long slb_v, unsigned long valid);
-extern int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_hv_emulate_mmio(struct kvm_vcpu *vcpu,
 			unsigned long gpa, gva_t ea, int is_store);
 
 extern void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte);
@@ -174,8 +173,7 @@ extern void kvmppc_mmu_hpte_sysexit(void);
 extern int kvmppc_mmu_hv_init(void);
 extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc);
 
-extern int kvmppc_book3s_radix_page_fault(struct kvm_run *run,
-			struct kvm_vcpu *vcpu,
+extern int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 			unsigned long ea, unsigned long dsisr);
 extern unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 					gva_t eaddr, void *to, void *from,
@@ -234,7 +232,7 @@ extern void kvmppc_trigger_fac_interrupt(struct kvm_vcpu *vcpu, ulong fac);
 extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
 			   bool upper, u32 val);
 extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
-extern int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu);
 extern kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa,
 			bool writing, bool *writable);
 extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
@@ -300,12 +298,12 @@ void kvmhv_set_ptbl_entry(unsigned int lpid, u64 dw0, u64 dw1);
 void kvmhv_release_all_nested(struct kvm *kvm);
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu);
 long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu);
-int kvmhv_run_single_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu,
+int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu,
 			  u64 time_limit, unsigned long lpcr);
 void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr);
 void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
 				   struct hv_guest_state *hr);
-long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu);
+long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu);
 
 void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 94f5a32acaf1..ccf66b3a4c1d 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -58,28 +58,28 @@ enum xlate_readwrite {
 	XLATE_WRITE		/* check for write permissions */
 };
 
-extern int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
-extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
+extern int kvmppc_vcpu_run(struct kvm_vcpu *vcpu);
+extern int __kvmppc_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
 extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
-extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_load(struct kvm_vcpu *vcpu,
                               unsigned int rt, unsigned int bytes,
 			      int is_default_endian);
-extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_loads(struct kvm_vcpu *vcpu,
                                unsigned int rt, unsigned int bytes,
 			       int is_default_endian);
-extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu,
 				unsigned int rt, unsigned int bytes,
 			int is_default_endian, int mmio_sign_extend);
-extern int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
 		unsigned int rt, unsigned int bytes, int is_default_endian);
-extern int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
 		unsigned int rs, unsigned int bytes, int is_default_endian);
-extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_store(struct kvm_vcpu *vcpu,
 			       u64 val, unsigned int bytes,
 			       int is_default_endian);
-extern int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu,
 				int rs, unsigned int bytes,
 				int is_default_endian);
 
@@ -90,10 +90,9 @@ extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
 		     bool data);
 extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
 		     bool data);
-extern int kvmppc_emulate_instruction(struct kvm_run *run,
-                                      struct kvm_vcpu *vcpu);
+extern int kvmppc_emulate_instruction(struct kvm_vcpu *vcpu);
 extern int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu);
-extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu);
 extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
 extern u32 kvmppc_get_dec(struct kvm_vcpu *vcpu, u64 tb);
 extern void kvmppc_decrementer_func(struct kvm_vcpu *vcpu);
@@ -267,7 +266,7 @@ struct kvmppc_ops {
 	void (*vcpu_put)(struct kvm_vcpu *vcpu);
 	void (*inject_interrupt)(struct kvm_vcpu *vcpu, int vec, u64 srr1_flags);
 	void (*set_msr)(struct kvm_vcpu *vcpu, u64 msr);
-	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	int (*vcpu_run)(struct kvm_vcpu *vcpu);
 	int (*vcpu_create)(struct kvm_vcpu *vcpu);
 	void (*vcpu_free)(struct kvm_vcpu *vcpu);
 	int (*check_requests)(struct kvm_vcpu *vcpu);
@@ -291,7 +290,7 @@ struct kvmppc_ops {
 	int (*init_vm)(struct kvm *kvm);
 	void (*destroy_vm)(struct kvm *kvm);
 	int (*get_smmu_info)(struct kvm *kvm, struct kvm_ppc_smmu_info *info);
-	int (*emulate_op)(struct kvm_run *run, struct kvm_vcpu *vcpu,
+	int (*emulate_op)(struct kvm_vcpu *vcpu,
 			  unsigned int inst, int *advance);
 	int (*emulate_mtspr)(struct kvm_vcpu *vcpu, int sprn, ulong spr_val);
 	int (*emulate_mfspr)(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 5690a1f9b976..345d22de213b 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -758,9 +758,9 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 }
 EXPORT_SYMBOL_GPL(kvmppc_set_msr);
 
-int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
+int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
 {
-	return vcpu->kvm->arch.kvm_ops->vcpu_run(kvm_run, vcpu);
+	return vcpu->kvm->arch.kvm_ops->vcpu_run(vcpu);
 }
 
 int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
index eae259ee49af..9b6323ec8e60 100644
--- a/arch/powerpc/kvm/book3s.h
+++ b/arch/powerpc/kvm/book3s.h
@@ -18,7 +18,7 @@ extern void kvm_set_spte_hva_hv(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 extern int kvmppc_mmu_init_pr(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_destroy_pr(struct kvm_vcpu *vcpu);
-extern int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
+extern int kvmppc_core_emulate_op_pr(struct kvm_vcpu *vcpu,
 				     unsigned int inst, int *advance);
 extern int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu,
 					int sprn, ulong spr_val);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 2b35f9bcf892..36a07656ebbb 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -413,7 +413,7 @@ static int instruction_is_store(unsigned int instr)
 	return (instr & mask) != 0;
 }
 
-int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_hv_emulate_mmio(struct kvm_vcpu *vcpu,
 			   unsigned long gpa, gva_t ea, int is_store)
 {
 	u32 last_inst;
@@ -473,10 +473,10 @@ int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	vcpu->arch.paddr_accessed = gpa;
 	vcpu->arch.vaddr_accessed = ea;
-	return kvmppc_emulate_mmio(run, vcpu);
+	return kvmppc_emulate_mmio(vcpu);
 }
 
-int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
 				unsigned long ea, unsigned long dsisr)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -499,7 +499,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	pte_t pte, *ptep;
 
 	if (kvm_is_radix(kvm))
-		return kvmppc_book3s_radix_page_fault(run, vcpu, ea, dsisr);
+		return kvmppc_book3s_radix_page_fault(vcpu, ea, dsisr);
 
 	/*
 	 * Real-mode code has already searched the HPT and found the
@@ -519,7 +519,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			gpa_base = r & HPTE_R_RPN & ~(psize - 1);
 			gfn_base = gpa_base >> PAGE_SHIFT;
 			gpa = gpa_base | (ea & (psize - 1));
-			return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
+			return kvmppc_hv_emulate_mmio(vcpu, gpa, ea,
 						dsisr & DSISR_ISSTORE);
 		}
 	}
@@ -555,7 +555,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	/* No memslot means it's an emulated MMIO region */
 	if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
-		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
+		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea,
 					      dsisr & DSISR_ISSTORE);
 
 	/*
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index aa12cd4078b3..16c947bd5e87 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -887,7 +887,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
-int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
 				   unsigned long ea, unsigned long dsisr)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -933,7 +933,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			kvmppc_core_queue_data_storage(vcpu, ea, dsisr);
 			return RESUME_GUEST;
 		}
-		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
+		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea, writing);
 	}
 
 	if (memslot->flags & KVM_MEM_READONLY) {
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index dad71d276b91..0effd48c8f4d 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -235,7 +235,7 @@ void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
 
 #endif
 
-int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_core_emulate_op_pr(struct kvm_vcpu *vcpu,
 			      unsigned int inst, int *advance)
 {
 	int emulated = EMULATE_DONE;
@@ -371,13 +371,13 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			if (kvmppc_h_pr(vcpu, cmd) == EMULATE_DONE)
 				break;
 
-			run->papr_hcall.nr = cmd;
+			vcpu->run->papr_hcall.nr = cmd;
 			for (i = 0; i < 9; ++i) {
 				ulong gpr = kvmppc_get_gpr(vcpu, 4 + i);
-				run->papr_hcall.args[i] = gpr;
+				vcpu->run->papr_hcall.args[i] = gpr;
 			}
 
-			run->exit_reason = KVM_EXIT_PAPR_HCALL;
+			vcpu->run->exit_reason = KVM_EXIT_PAPR_HCALL;
 			vcpu->arch.hcall_needed = 1;
 			emulated = EMULATE_EXIT_USER;
 			break;
@@ -629,7 +629,7 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	if (emulated == EMULATE_FAIL)
-		emulated = kvmppc_emulate_paired_single(run, vcpu);
+		emulated = kvmppc_emulate_paired_single(vcpu);
 
 	return emulated;
 }
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 413ea2dcb10c..296bc6fb4eb1 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1156,8 +1156,7 @@ static int kvmppc_hcall_impl_hv(unsigned long cmd)
 	return kvmppc_hcall_impl_hv_realmode(cmd);
 }
 
-static int kvmppc_emulate_debug_inst(struct kvm_run *run,
-					struct kvm_vcpu *vcpu)
+static int kvmppc_emulate_debug_inst(struct kvm_vcpu *vcpu)
 {
 	u32 last_inst;
 
@@ -1171,8 +1170,8 @@ static int kvmppc_emulate_debug_inst(struct kvm_run *run,
 	}
 
 	if (last_inst == KVMPPC_INST_SW_BREAKPOINT) {
-		run->exit_reason = KVM_EXIT_DEBUG;
-		run->debug.arch.address = kvmppc_get_pc(vcpu);
+		vcpu->run->exit_reason = KVM_EXIT_DEBUG;
+		vcpu->run->debug.arch.address = kvmppc_get_pc(vcpu);
 		return RESUME_HOST;
 	} else {
 		kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
@@ -1273,9 +1272,10 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu)
 	return RESUME_GUEST;
 }
 
-static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
 				 struct task_struct *tsk)
 {
+	struct kvm_run *run = vcpu->run;
 	int r = RESUME_HOST;
 
 	vcpu->stat.sum_exits++;
@@ -1410,7 +1410,7 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 				swab32(vcpu->arch.emul_inst) :
 				vcpu->arch.emul_inst;
 		if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) {
-			r = kvmppc_emulate_debug_inst(run, vcpu);
+			r = kvmppc_emulate_debug_inst(vcpu);
 		} else {
 			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
 			r = RESUME_GUEST;
@@ -1462,7 +1462,7 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 {
 	int r;
 	int srcu_idx;
@@ -1520,7 +1520,7 @@ static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	 */
 	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
 		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-		r = kvmhv_nested_page_fault(run, vcpu);
+		r = kvmhv_nested_page_fault(vcpu);
 		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
 		break;
 	case BOOK3S_INTERRUPT_H_INST_STORAGE:
@@ -1530,7 +1530,7 @@ static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		if (vcpu->arch.shregs.msr & HSRR1_HISI_WRITE)
 			vcpu->arch.fault_dsisr |= DSISR_ISSTORE;
 		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-		r = kvmhv_nested_page_fault(run, vcpu);
+		r = kvmhv_nested_page_fault(vcpu);
 		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
 		break;
 
@@ -2934,7 +2934,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
 
 		ret = RESUME_GUEST;
 		if (vcpu->arch.trap)
-			ret = kvmppc_handle_exit_hv(vcpu->run, vcpu,
+			ret = kvmppc_handle_exit_hv(vcpu,
 						    vcpu->arch.run_task);
 
 		vcpu->arch.ret = ret;
@@ -3900,15 +3900,16 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
 	return r;
 }
 
-static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
+static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	int n_ceded, i, r;
 	struct kvmppc_vcore *vc;
 	struct kvm_vcpu *v;
 
 	trace_kvmppc_run_vcpu_enter(vcpu);
 
-	kvm_run->exit_reason = 0;
+	run->exit_reason = 0;
 	vcpu->arch.ret = RESUME_GUEST;
 	vcpu->arch.trap = 0;
 	kvmppc_update_vpas(vcpu);
@@ -3952,8 +3953,8 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 			r = kvmhv_setup_mmu(vcpu);
 			spin_lock(&vc->lock);
 			if (r) {
-				kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
-				kvm_run->fail_entry.
+				run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+				run->fail_entry.
 					hardware_entry_failure_reason = 0;
 				vcpu->arch.ret = r;
 				break;
@@ -4013,7 +4014,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) {
 		kvmppc_remove_runnable(vc, vcpu);
 		vcpu->stat.signal_exits++;
-		kvm_run->exit_reason = KVM_EXIT_INTR;
+		run->exit_reason = KVM_EXIT_INTR;
 		vcpu->arch.ret = -EINTR;
 	}
 
@@ -4024,15 +4025,15 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		wake_up(&v->arch.cpu_run);
 	}
 
-	trace_kvmppc_run_vcpu_exit(vcpu, kvm_run);
+	trace_kvmppc_run_vcpu_exit(vcpu);
 	spin_unlock(&vc->lock);
 	return vcpu->arch.ret;
 }
 
-int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
-			  struct kvm_vcpu *vcpu, u64 time_limit,
+int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
 			  unsigned long lpcr)
 {
+	struct kvm_run *run = vcpu->run;
 	int trap, r, pcpu;
 	int srcu_idx, lpid;
 	struct kvmppc_vcore *vc;
@@ -4041,7 +4042,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 
 	trace_kvmppc_run_vcpu_enter(vcpu);
 
-	kvm_run->exit_reason = 0;
+	run->exit_reason = 0;
 	vcpu->arch.ret = RESUME_GUEST;
 	vcpu->arch.trap = 0;
 
@@ -4165,9 +4166,9 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 	r = RESUME_GUEST;
 	if (trap) {
 		if (!nested)
-			r = kvmppc_handle_exit_hv(kvm_run, vcpu, current);
+			r = kvmppc_handle_exit_hv(vcpu, current);
 		else
-			r = kvmppc_handle_nested_exit(kvm_run, vcpu);
+			r = kvmppc_handle_nested_exit(vcpu);
 	}
 	vcpu->arch.ret = r;
 
@@ -4177,7 +4178,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 		while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) {
 			if (signal_pending(current)) {
 				vcpu->stat.signal_exits++;
-				kvm_run->exit_reason = KVM_EXIT_INTR;
+				run->exit_reason = KVM_EXIT_INTR;
 				vcpu->arch.ret = -EINTR;
 				break;
 			}
@@ -4193,13 +4194,13 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 
  done:
 	kvmppc_remove_runnable(vc, vcpu);
-	trace_kvmppc_run_vcpu_exit(vcpu, kvm_run);
+	trace_kvmppc_run_vcpu_exit(vcpu);
 
 	return vcpu->arch.ret;
 
  sigpend:
 	vcpu->stat.signal_exits++;
-	kvm_run->exit_reason = KVM_EXIT_INTR;
+	run->exit_reason = KVM_EXIT_INTR;
 	vcpu->arch.ret = -EINTR;
  out:
 	local_irq_enable();
@@ -4207,8 +4208,9 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 	goto done;
 }
 
-static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	int r;
 	int srcu_idx;
 	unsigned long ebb_regs[3] = {};	/* shut up GCC */
@@ -4292,10 +4294,10 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		 */
 		if (kvm->arch.threads_indep && kvm_is_radix(kvm) &&
 		    !no_mixing_hpt_and_radix)
-			r = kvmhv_run_single_vcpu(run, vcpu, ~(u64)0,
+			r = kvmhv_run_single_vcpu(vcpu, ~(u64)0,
 						  vcpu->arch.vcore->lpcr);
 		else
-			r = kvmppc_run_vcpu(run, vcpu);
+			r = kvmppc_run_vcpu(vcpu);
 
 		if (run->exit_reason == KVM_EXIT_PAPR_HCALL &&
 		    !(vcpu->arch.shregs.msr & MSR_PR)) {
@@ -4305,7 +4307,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			kvmppc_core_prepare_to_enter(vcpu);
 		} else if (r == RESUME_PAGE_FAULT) {
 			srcu_idx = srcu_read_lock(&kvm->srcu);
-			r = kvmppc_book3s_hv_page_fault(run, vcpu,
+			r = kvmppc_book3s_hv_page_fault(vcpu,
 				vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
 			srcu_read_unlock(&kvm->srcu, srcu_idx);
 		} else if (r == RESUME_PASSTHROUGH) {
@@ -4979,7 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 }
 
 /* We don't need to emulate any privileged instructions or dcbz */
-static int kvmppc_core_emulate_op_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_core_emulate_op_hv(struct kvm_vcpu *vcpu,
 				     unsigned int inst, int *advance)
 {
 	return EMULATE_FAIL;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 5a3987f3ebf3..fe4c535882e6 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -290,7 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 			r = RESUME_HOST;
 			break;
 		}
-		r = kvmhv_run_single_vcpu(vcpu->run, vcpu, hdec_exp, lpcr);
+		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
 	} while (is_kvmppc_resume_guest(r));
 
 	/* save L2 state for return */
@@ -1256,8 +1256,7 @@ static inline int kvmppc_radix_shift_to_level(int shift)
 }
 
 /* called with gp->tlb_lock held */
-static long int __kvmhv_nested_page_fault(struct kvm_run *run,
-					  struct kvm_vcpu *vcpu,
+static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 					  struct kvm_nested_guest *gp)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -1340,7 +1339,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
 		}
 
 		/* passthrough of emulated MMIO case */
-		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
+		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea, writing);
 	}
 	if (memslot->flags & KVM_MEM_READONLY) {
 		if (writing) {
@@ -1427,13 +1426,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
 	return RESUME_GUEST;
 }
 
-long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu)
+long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu)
 {
 	struct kvm_nested_guest *gp = vcpu->arch.nested;
 	long int ret;
 
 	mutex_lock(&gp->tlb_lock);
-	ret = __kvmhv_nested_page_fault(run, vcpu, gp);
+	ret = __kvmhv_nested_page_fault(vcpu, gp);
 	mutex_unlock(&gp->tlb_lock);
 	return ret;
 }
diff --git a/arch/powerpc/kvm/book3s_paired_singles.c b/arch/powerpc/kvm/book3s_paired_singles.c
index bf0282775e37..a11436720a8c 100644
--- a/arch/powerpc/kvm/book3s_paired_singles.c
+++ b/arch/powerpc/kvm/book3s_paired_singles.c
@@ -169,7 +169,7 @@ static void kvmppc_inject_pf(struct kvm_vcpu *vcpu, ulong eaddr, bool is_store)
 	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE);
 }
 
-static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_emulate_fpr_load(struct kvm_vcpu *vcpu,
 				   int rs, ulong addr, int ls_type)
 {
 	int emulated = EMULATE_FAIL;
@@ -188,7 +188,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		kvmppc_inject_pf(vcpu, addr, false);
 		goto done_load;
 	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
+		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FPR | rs,
 					      len, 1);
 		goto done_load;
 	}
@@ -213,7 +213,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_emulate_fpr_store(struct kvm_vcpu *vcpu,
 				    int rs, ulong addr, int ls_type)
 {
 	int emulated = EMULATE_FAIL;
@@ -248,7 +248,7 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	if (r < 0) {
 		kvmppc_inject_pf(vcpu, addr, true);
 	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_store(run, vcpu, val, len, 1);
+		emulated = kvmppc_handle_store(vcpu, val, len, 1);
 	} else {
 		emulated = EMULATE_DONE;
 	}
@@ -259,7 +259,7 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_emulate_psq_load(struct kvm_vcpu *vcpu,
 				   int rs, ulong addr, bool w, int i)
 {
 	int emulated = EMULATE_FAIL;
@@ -279,12 +279,12 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		kvmppc_inject_pf(vcpu, addr, false);
 		goto done_load;
 	} else if ((r == EMULATE_DO_MMIO) && w) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
+		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FPR | rs,
 					      4, 1);
 		vcpu->arch.qpr[rs] = tmp[1];
 		goto done_load;
 	} else if (r == EMULATE_DO_MMIO) {
-		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FQPR | rs,
+		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FQPR | rs,
 					      8, 1);
 		goto done_load;
 	}
@@ -302,7 +302,7 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_emulate_psq_store(struct kvm_vcpu *vcpu,
 				    int rs, ulong addr, bool w, int i)
 {
 	int emulated = EMULATE_FAIL;
@@ -318,10 +318,10 @@ static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	if (r < 0) {
 		kvmppc_inject_pf(vcpu, addr, true);
 	} else if ((r == EMULATE_DO_MMIO) && w) {
-		emulated = kvmppc_handle_store(run, vcpu, tmp[0], 4, 1);
+		emulated = kvmppc_handle_store(vcpu, tmp[0], 4, 1);
 	} else if (r == EMULATE_DO_MMIO) {
 		u64 val = ((u64)tmp[0] << 32) | tmp[1];
-		emulated = kvmppc_handle_store(run, vcpu, val, 8, 1);
+		emulated = kvmppc_handle_store(vcpu, val, 8, 1);
 	} else {
 		emulated = EMULATE_DONE;
 	}
@@ -618,7 +618,7 @@ static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc,
 	return EMULATE_DONE;
 }
 
-int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu)
 {
 	u32 inst;
 	enum emulation_result emulated = EMULATE_DONE;
@@ -680,7 +680,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		int i = inst_get_field(inst, 17, 19);
 
 		addr += get_d_signext(inst);
-		emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
+		emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
 		break;
 	}
 	case OP_PSQ_LU:
@@ -690,7 +690,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		int i = inst_get_field(inst, 17, 19);
 
 		addr += get_d_signext(inst);
-		emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
+		emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
 
 		if (emulated == EMULATE_DONE)
 			kvmppc_set_gpr(vcpu, ax_ra, addr);
@@ -703,7 +703,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		int i = inst_get_field(inst, 17, 19);
 
 		addr += get_d_signext(inst);
-		emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
+		emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
 		break;
 	}
 	case OP_PSQ_STU:
@@ -713,7 +713,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		int i = inst_get_field(inst, 17, 19);
 
 		addr += get_d_signext(inst);
-		emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
+		emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
 
 		if (emulated == EMULATE_DONE)
 			kvmppc_set_gpr(vcpu, ax_ra, addr);
@@ -733,7 +733,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			int i = inst_get_field(inst, 22, 24);
 
 			addr += kvmppc_get_gpr(vcpu, ax_rb);
-			emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
+			emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
 			break;
 		}
 		case OP_4X_PS_CMPO0:
@@ -747,7 +747,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			int i = inst_get_field(inst, 22, 24);
 
 			addr += kvmppc_get_gpr(vcpu, ax_rb);
-			emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
+			emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
 
 			if (emulated == EMULATE_DONE)
 				kvmppc_set_gpr(vcpu, ax_ra, addr);
@@ -824,7 +824,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			int i = inst_get_field(inst, 22, 24);
 
 			addr += kvmppc_get_gpr(vcpu, ax_rb);
-			emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
+			emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
 			break;
 		}
 		case OP_4XW_PSQ_STUX:
@@ -834,7 +834,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			int i = inst_get_field(inst, 22, 24);
 
 			addr += kvmppc_get_gpr(vcpu, ax_rb);
-			emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
+			emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
 
 			if (emulated == EMULATE_DONE)
 				kvmppc_set_gpr(vcpu, ax_ra, addr);
@@ -922,7 +922,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
 
-		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
 						   FPU_LS_SINGLE);
 		break;
 	}
@@ -930,7 +930,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
 
-		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
 						   FPU_LS_SINGLE);
 
 		if (emulated == EMULATE_DONE)
@@ -941,7 +941,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
 
-		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
 						   FPU_LS_DOUBLE);
 		break;
 	}
@@ -949,7 +949,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
 
-		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
 						   FPU_LS_DOUBLE);
 
 		if (emulated == EMULATE_DONE)
@@ -960,7 +960,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
 
-		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
 						    FPU_LS_SINGLE);
 		break;
 	}
@@ -968,7 +968,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
 
-		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
 						    FPU_LS_SINGLE);
 
 		if (emulated == EMULATE_DONE)
@@ -979,7 +979,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
 
-		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
 						    FPU_LS_DOUBLE);
 		break;
 	}
@@ -987,7 +987,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	{
 		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
 
-		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
+		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
 						    FPU_LS_DOUBLE);
 
 		if (emulated == EMULATE_DONE)
@@ -1001,7 +1001,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0;
 
 			addr += kvmppc_get_gpr(vcpu, ax_rb);
-			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
 							   addr, FPU_LS_SINGLE);
 			break;
 		}
@@ -1010,7 +1010,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
 							   addr, FPU_LS_SINGLE);
 
 			if (emulated == EMULATE_DONE)
@@ -1022,7 +1022,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
 							   addr, FPU_LS_DOUBLE);
 			break;
 		}
@@ -1031,7 +1031,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
 							   addr, FPU_LS_DOUBLE);
 
 			if (emulated == EMULATE_DONE)
@@ -1043,7 +1043,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
 							    addr, FPU_LS_SINGLE);
 			break;
 		}
@@ -1052,7 +1052,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
 							    addr, FPU_LS_SINGLE);
 
 			if (emulated == EMULATE_DONE)
@@ -1064,7 +1064,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
 							    addr, FPU_LS_DOUBLE);
 			break;
 		}
@@ -1073,7 +1073,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
 							    addr, FPU_LS_DOUBLE);
 
 			if (emulated == EMULATE_DONE)
@@ -1085,7 +1085,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
 				     kvmppc_get_gpr(vcpu, ax_rb);
 
-			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
+			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
 							    addr,
 							    FPU_LS_SINGLE_LOW);
 			break;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index a0f6813f4560..ef54f917bdaf 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -700,7 +700,7 @@ static bool kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
 	return kvm_is_visible_gfn(vcpu->kvm, gpa >> PAGE_SHIFT);
 }
 
-int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_handle_pagefault(struct kvm_vcpu *vcpu,
 			    ulong eaddr, int vec)
 {
 	bool data = (vec == BOOK3S_INTERRUPT_DATA_STORAGE);
@@ -795,7 +795,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		/* The guest's PTE is not mapped yet. Map on the host */
 		if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
 			/* Exit KVM if mapping failed */
-			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			return RESUME_HOST;
 		}
 		if (data)
@@ -808,7 +808,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		vcpu->stat.mmio_exits++;
 		vcpu->arch.paddr_accessed = pte.raddr;
 		vcpu->arch.vaddr_accessed = pte.eaddr;
-		r = kvmppc_emulate_mmio(run, vcpu);
+		r = kvmppc_emulate_mmio(vcpu);
 		if ( r == RESUME_HOST_NV )
 			r = RESUME_HOST;
 	}
@@ -992,7 +992,7 @@ static void kvmppc_emulate_fac(struct kvm_vcpu *vcpu, ulong fac)
 	enum emulation_result er = EMULATE_FAIL;
 
 	if (!(kvmppc_get_msr(vcpu) & MSR_PR))
-		er = kvmppc_emulate_instruction(vcpu->run, vcpu);
+		er = kvmppc_emulate_instruction(vcpu);
 
 	if ((er != EMULATE_DONE) && (er != EMULATE_AGAIN)) {
 		/* Couldn't emulate, trigger interrupt in guest */
@@ -1089,8 +1089,7 @@ static void kvmppc_clear_debug(struct kvm_vcpu *vcpu)
 	}
 }
 
-static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
-				  unsigned int exit_nr)
+static int kvmppc_exit_pr_progint(struct kvm_vcpu *vcpu, unsigned int exit_nr)
 {
 	enum emulation_result er;
 	ulong flags;
@@ -1124,7 +1123,7 @@ static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	vcpu->stat.emulated_inst_exits++;
-	er = kvmppc_emulate_instruction(run, vcpu);
+	er = kvmppc_emulate_instruction(vcpu);
 	switch (er) {
 	case EMULATE_DONE:
 		r = RESUME_GUEST_NV;
@@ -1139,7 +1138,7 @@ static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		r = RESUME_GUEST;
 		break;
 	case EMULATE_DO_MMIO:
-		run->exit_reason = KVM_EXIT_MMIO;
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
 		r = RESUME_HOST_NV;
 		break;
 	case EMULATE_EXIT_USER:
@@ -1198,7 +1197,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		/* only care about PTEG not found errors, but leave NX alone */
 		if (shadow_srr1 & 0x40000000) {
 			int idx = srcu_read_lock(&vcpu->kvm->srcu);
-			r = kvmppc_handle_pagefault(run, vcpu, kvmppc_get_pc(vcpu), exit_nr);
+			r = kvmppc_handle_pagefault(vcpu, kvmppc_get_pc(vcpu), exit_nr);
 			srcu_read_unlock(&vcpu->kvm->srcu, idx);
 			vcpu->stat.sp_instruc++;
 		} else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
@@ -1248,7 +1247,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		 */
 		if (fault_dsisr & (DSISR_NOHPTE | DSISR_PROTFAULT)) {
 			int idx = srcu_read_lock(&vcpu->kvm->srcu);
-			r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr);
+			r = kvmppc_handle_pagefault(vcpu, dar, exit_nr);
 			srcu_read_unlock(&vcpu->kvm->srcu, idx);
 		} else {
 			kvmppc_core_queue_data_storage(vcpu, dar, fault_dsisr);
@@ -1292,7 +1291,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		break;
 	case BOOK3S_INTERRUPT_PROGRAM:
 	case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
-		r = kvmppc_exit_pr_progint(run, vcpu, exit_nr);
+		r = kvmppc_exit_pr_progint(vcpu, exit_nr);
 		break;
 	case BOOK3S_INTERRUPT_SYSCALL:
 	{
@@ -1370,7 +1369,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			emul = kvmppc_get_last_inst(vcpu, INST_GENERIC,
 						    &last_inst);
 			if (emul == EMULATE_DONE)
-				r = kvmppc_exit_pr_progint(run, vcpu, exit_nr);
+				r = kvmppc_exit_pr_progint(vcpu, exit_nr);
 			else
 				r = RESUME_GUEST;
 
@@ -1825,8 +1824,9 @@ static void kvmppc_core_vcpu_free_pr(struct kvm_vcpu *vcpu)
 	vfree(vcpu_book3s);
 }
 
-static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
+static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	int ret;
 #ifdef CONFIG_ALTIVEC
 	unsigned long uninitialized_var(vrsave);
@@ -1834,7 +1834,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	/* Check if we can run the vcpu at all */
 	if (!vcpu->arch.sane) {
-		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = -EINVAL;
 		goto out;
 	}
@@ -1861,7 +1861,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	kvmppc_fix_ee_before_entry();
 
-	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
+	ret = __kvmppc_vcpu_run(run, vcpu);
 
 	kvmppc_clear_debug(vcpu);
 
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 6c18ea88fd25..26b3f5900b72 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -730,13 +730,14 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu)
 	return r;
 }
 
-int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
+int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	int ret, s;
 	struct debug_reg debug;
 
 	if (!vcpu->arch.sane) {
-		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return -EINVAL;
 	}
 
@@ -778,7 +779,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	vcpu->arch.pgdir = vcpu->kvm->mm->pgd;
 	kvmppc_fix_ee_before_entry();
 
-	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
+	ret = __kvmppc_vcpu_run(run, vcpu);
 
 	/* No need for guest_exit. It's done in handle_exit.
 	   We also get here with interrupts enabled. */
@@ -800,11 +801,11 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int emulation_exit(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 
-	er = kvmppc_emulate_instruction(run, vcpu);
+	er = kvmppc_emulate_instruction(vcpu);
 	switch (er) {
 	case EMULATE_DONE:
 		/* don't overwrite subtypes, just account kvm_stats */
@@ -821,8 +822,8 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		       __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
-		run->hw.hardware_exit_reason = ~0ULL << 32;
-		run->hw.hardware_exit_reason |= vcpu->arch.last_inst;
+		vcpu->run->hw.hardware_exit_reason = ~0ULL << 32;
+		vcpu->run->hw.hardware_exit_reason |= vcpu->arch.last_inst;
 		kvmppc_core_queue_program(vcpu, ESR_PIL);
 		return RESUME_HOST;
 
@@ -834,8 +835,9 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	}
 }
 
-static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvmppc_handle_debug(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	struct debug_reg *dbg_reg = &(vcpu->arch.dbg_reg);
 	u32 dbsr = vcpu->arch.dbsr;
 
@@ -954,7 +956,7 @@ static void kvmppc_restart_interrupt(struct kvm_vcpu *vcpu,
 	}
 }
 
-static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_resume_inst_load(struct kvm_vcpu *vcpu,
 				  enum emulation_result emulated, u32 last_inst)
 {
 	switch (emulated) {
@@ -966,8 +968,8 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		       __func__, vcpu->arch.regs.nip);
 		/* For debugging, encode the failing instruction and
 		 * report it to userspace. */
-		run->hw.hardware_exit_reason = ~0ULL << 32;
-		run->hw.hardware_exit_reason |= last_inst;
+		vcpu->run->hw.hardware_exit_reason = ~0ULL << 32;
+		vcpu->run->hw.hardware_exit_reason |= last_inst;
 		kvmppc_core_queue_program(vcpu, ESR_PIL);
 		return RESUME_HOST;
 
@@ -1024,7 +1026,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	run->ready_for_interrupt_injection = 1;
 
 	if (emulated != EMULATE_DONE) {
-		r = kvmppc_resume_inst_load(run, vcpu, emulated, last_inst);
+		r = kvmppc_resume_inst_load(vcpu, emulated, last_inst);
 		goto out;
 	}
 
@@ -1084,7 +1086,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		break;
 
 	case BOOKE_INTERRUPT_HV_PRIV:
-		r = emulation_exit(run, vcpu);
+		r = emulation_exit(vcpu);
 		break;
 
 	case BOOKE_INTERRUPT_PROGRAM:
@@ -1094,7 +1096,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 * We are here because of an SW breakpoint instr,
 			 * so lets return to host to handle.
 			 */
-			r = kvmppc_handle_debug(run, vcpu);
+			r = kvmppc_handle_debug(vcpu);
 			run->exit_reason = KVM_EXIT_DEBUG;
 			kvmppc_account_exit(vcpu, DEBUG_EXITS);
 			break;
@@ -1115,7 +1117,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			break;
 		}
 
-		r = emulation_exit(run, vcpu);
+		r = emulation_exit(vcpu);
 		break;
 
 	case BOOKE_INTERRUPT_FP_UNAVAIL:
@@ -1282,7 +1284,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 * actually RAM. */
 			vcpu->arch.paddr_accessed = gpaddr;
 			vcpu->arch.vaddr_accessed = eaddr;
-			r = kvmppc_emulate_mmio(run, vcpu);
+			r = kvmppc_emulate_mmio(vcpu);
 			kvmppc_account_exit(vcpu, MMIO_EXITS);
 		}
 
@@ -1333,7 +1335,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	case BOOKE_INTERRUPT_DEBUG: {
-		r = kvmppc_handle_debug(run, vcpu);
+		r = kvmppc_handle_debug(vcpu);
 		if (r == RESUME_HOST)
 			run->exit_reason = KVM_EXIT_DEBUG;
 		kvmppc_account_exit(vcpu, DEBUG_EXITS);
diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
index 65b4d337d337..be9da96d9f06 100644
--- a/arch/powerpc/kvm/booke.h
+++ b/arch/powerpc/kvm/booke.h
@@ -70,7 +70,7 @@ void kvmppc_set_tcr(struct kvm_vcpu *vcpu, u32 new_tcr);
 void kvmppc_set_tsr_bits(struct kvm_vcpu *vcpu, u32 tsr_bits);
 void kvmppc_clr_tsr_bits(struct kvm_vcpu *vcpu, u32 tsr_bits);
 
-int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_booke_emulate_op(struct kvm_vcpu *vcpu,
                             unsigned int inst, int *advance);
 int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val);
 int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val);
@@ -94,16 +94,12 @@ enum int_class {
 
 void kvmppc_set_pending_interrupt(struct kvm_vcpu *vcpu, enum int_class type);
 
-extern int kvmppc_core_emulate_op_e500(struct kvm_run *run,
-				       struct kvm_vcpu *vcpu,
+extern int kvmppc_core_emulate_op_e500(struct kvm_vcpu *vcpu,
 				       unsigned int inst, int *advance);
 extern int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn,
 					  ulong spr_val);
 extern int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn,
 					  ulong *spr_val);
-extern int kvmppc_core_emulate_op_e500(struct kvm_run *run,
-				       struct kvm_vcpu *vcpu,
-				       unsigned int inst, int *advance);
 extern int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn,
 					  ulong spr_val);
 extern int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn,
diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
index 689ff5f90e9e..d8d38aca71bd 100644
--- a/arch/powerpc/kvm/booke_emulate.c
+++ b/arch/powerpc/kvm/booke_emulate.c
@@ -39,7 +39,7 @@ static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
 	kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
 }
 
-int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_booke_emulate_op(struct kvm_vcpu *vcpu,
                             unsigned int inst, int *advance)
 {
 	int emulated = EMULATE_DONE;
diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
index 3d0d3ec5be96..64eb833e9f02 100644
--- a/arch/powerpc/kvm/e500_emulate.c
+++ b/arch/powerpc/kvm/e500_emulate.c
@@ -83,16 +83,16 @@ static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb)
 }
 #endif
 
-static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int kvmppc_e500_emul_ehpriv(struct kvm_vcpu *vcpu,
 				   unsigned int inst, int *advance)
 {
 	int emulated = EMULATE_DONE;
 
 	switch (get_oc(inst)) {
 	case EHPRIV_OC_DEBUG:
-		run->exit_reason = KVM_EXIT_DEBUG;
-		run->debug.arch.address = vcpu->arch.regs.nip;
-		run->debug.arch.status = 0;
+		vcpu->run->exit_reason = KVM_EXIT_DEBUG;
+		vcpu->run->debug.arch.address = vcpu->arch.regs.nip;
+		vcpu->run->debug.arch.status = 0;
 		kvmppc_account_exit(vcpu, DEBUG_EXITS);
 		emulated = EMULATE_EXIT_USER;
 		*advance = 0;
@@ -125,7 +125,7 @@ static int kvmppc_e500_emul_mftmr(struct kvm_vcpu *vcpu, unsigned int inst,
 	return EMULATE_FAIL;
 }
 
-int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_core_emulate_op_e500(struct kvm_vcpu *vcpu,
 				unsigned int inst, int *advance)
 {
 	int emulated = EMULATE_DONE;
@@ -182,8 +182,7 @@ int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			break;
 
 		case XOP_EHPRIV:
-			emulated = kvmppc_e500_emul_ehpriv(run, vcpu, inst,
-							   advance);
+			emulated = kvmppc_e500_emul_ehpriv(vcpu, inst, advance);
 			break;
 
 		default:
@@ -197,7 +196,7 @@ int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	}
 
 	if (emulated == EMULATE_FAIL)
-		emulated = kvmppc_booke_emulate_op(run, vcpu, inst, advance);
+		emulated = kvmppc_booke_emulate_op(vcpu, inst, advance);
 
 	return emulated;
 }
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 6fca38ca791f..ee1147c98cd8 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -191,7 +191,7 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 
 /* XXX Should probably auto-generate instruction decoding for a particular core
  * from opcode tables in the future. */
-int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvmppc_emulate_instruction(struct kvm_vcpu *vcpu)
 {
 	u32 inst;
 	int rs, rt, sprn;
@@ -270,9 +270,9 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		 * these are illegal instructions.
 		 */
 		if (inst == KVMPPC_INST_SW_BREAKPOINT) {
-			run->exit_reason = KVM_EXIT_DEBUG;
-			run->debug.arch.status = 0;
-			run->debug.arch.address = kvmppc_get_pc(vcpu);
+			vcpu->run->exit_reason = KVM_EXIT_DEBUG;
+			vcpu->run->debug.arch.status = 0;
+			vcpu->run->debug.arch.address = kvmppc_get_pc(vcpu);
 			emulated = EMULATE_EXIT_USER;
 			advance = 0;
 		} else
@@ -285,7 +285,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	}
 
 	if (emulated == EMULATE_FAIL) {
-		emulated = vcpu->kvm->arch.kvm_ops->emulate_op(run, vcpu, inst,
+		emulated = vcpu->kvm->arch.kvm_ops->emulate_op(vcpu, inst,
 							       &advance);
 		if (emulated == EMULATE_AGAIN) {
 			advance = 0;
diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
index 1139bc56e004..e8a47c84d77d 100644
--- a/arch/powerpc/kvm/emulate_loadstore.c
+++ b/arch/powerpc/kvm/emulate_loadstore.c
@@ -71,7 +71,6 @@ static bool kvmppc_check_altivec_disabled(struct kvm_vcpu *vcpu)
  */
 int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 inst;
 	enum emulation_result emulated = EMULATE_FAIL;
 	int advance = 1;
@@ -104,10 +103,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			int instr_byte_swap = op.type & BYTEREV;
 
 			if (op.type & SIGNEXT)
-				emulated = kvmppc_handle_loads(run, vcpu,
+				emulated = kvmppc_handle_loads(vcpu,
 						op.reg, size, !instr_byte_swap);
 			else
-				emulated = kvmppc_handle_load(run, vcpu,
+				emulated = kvmppc_handle_load(vcpu,
 						op.reg, size, !instr_byte_swap);
 
 			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
@@ -124,10 +123,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 				vcpu->arch.mmio_sp64_extend = 1;
 
 			if (op.type & SIGNEXT)
-				emulated = kvmppc_handle_loads(run, vcpu,
+				emulated = kvmppc_handle_loads(vcpu,
 					     KVM_MMIO_REG_FPR|op.reg, size, 1);
 			else
-				emulated = kvmppc_handle_load(run, vcpu,
+				emulated = kvmppc_handle_load(vcpu,
 					     KVM_MMIO_REG_FPR|op.reg, size, 1);
 
 			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
@@ -164,12 +163,12 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 
 			if (size == 16) {
 				vcpu->arch.mmio_vmx_copy_nums = 2;
-				emulated = kvmppc_handle_vmx_load(run,
-						vcpu, KVM_MMIO_REG_VMX|op.reg,
+				emulated = kvmppc_handle_vmx_load(vcpu,
+						KVM_MMIO_REG_VMX|op.reg,
 						8, 1);
 			} else {
 				vcpu->arch.mmio_vmx_copy_nums = 1;
-				emulated = kvmppc_handle_vmx_load(run, vcpu,
+				emulated = kvmppc_handle_vmx_load(vcpu,
 						KVM_MMIO_REG_VMX|op.reg,
 						size, 1);
 			}
@@ -217,7 +216,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 				io_size_each = op.element_size;
 			}
 
-			emulated = kvmppc_handle_vsx_load(run, vcpu,
+			emulated = kvmppc_handle_vsx_load(vcpu,
 					KVM_MMIO_REG_VSX|op.reg, io_size_each,
 					1, op.type & SIGNEXT);
 			break;
@@ -227,8 +226,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			/* if need byte reverse, op.val has been reversed by
 			 * analyse_instr().
 			 */
-			emulated = kvmppc_handle_store(run, vcpu, op.val,
-					size, 1);
+			emulated = kvmppc_handle_store(vcpu, op.val, size, 1);
 
 			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
 				kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
@@ -250,7 +248,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 			if (op.type & FPCONV)
 				vcpu->arch.mmio_sp64_extend = 1;
 
-			emulated = kvmppc_handle_store(run, vcpu,
+			emulated = kvmppc_handle_store(vcpu,
 					VCPU_FPR(vcpu, op.reg), size, 1);
 
 			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
@@ -290,12 +288,12 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 
 			if (size == 16) {
 				vcpu->arch.mmio_vmx_copy_nums = 2;
-				emulated = kvmppc_handle_vmx_store(run,
-						vcpu, op.reg, 8, 1);
+				emulated = kvmppc_handle_vmx_store(vcpu,
+						op.reg, 8, 1);
 			} else {
 				vcpu->arch.mmio_vmx_copy_nums = 1;
-				emulated = kvmppc_handle_vmx_store(run,
-						vcpu, op.reg, size, 1);
+				emulated = kvmppc_handle_vmx_store(vcpu,
+						op.reg, size, 1);
 			}
 
 			break;
@@ -338,7 +336,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
 				io_size_each = op.element_size;
 			}
 
-			emulated = kvmppc_handle_vsx_store(run, vcpu,
+			emulated = kvmppc_handle_vsx_store(vcpu,
 					op.reg, io_size_each, 1);
 			break;
 		}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 7e24691e138a..de4c317ad5f1 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -279,7 +279,7 @@ int kvmppc_sanity_check(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvmppc_sanity_check);
 
-int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 	int r;
@@ -295,7 +295,7 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		r = RESUME_GUEST;
 		break;
 	case EMULATE_DO_MMIO:
-		run->exit_reason = KVM_EXIT_MMIO;
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
 		/* We must reload nonvolatiles because "update" load/store
 		 * instructions modify register state. */
 		/* Future optimization: only reload non-volatiles if they were
@@ -1106,9 +1106,9 @@ static inline u32 dp_to_sp(u64 fprd)
 #define dp_to_sp(x)	(x)
 #endif /* CONFIG_PPC_FPU */
 
-static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
-                                      struct kvm_run *run)
+static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	u64 uninitialized_var(gpr);
 
 	if (run->mmio.len > sizeof(gpr)) {
@@ -1218,10 +1218,11 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 	}
 }
 
-static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+static int __kvmppc_handle_load(struct kvm_vcpu *vcpu,
 				unsigned int rt, unsigned int bytes,
 				int is_default_endian, int sign_extend)
 {
+	struct kvm_run *run = vcpu->run;
 	int idx, ret;
 	bool host_swabbed;
 
@@ -1255,7 +1256,7 @@ static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
 	if (!ret) {
-		kvmppc_complete_mmio_load(vcpu, run);
+		kvmppc_complete_mmio_load(vcpu);
 		vcpu->mmio_needed = 0;
 		return EMULATE_DONE;
 	}
@@ -1263,24 +1264,24 @@ static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return EMULATE_DO_MMIO;
 }
 
-int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_load(struct kvm_vcpu *vcpu,
 		       unsigned int rt, unsigned int bytes,
 		       int is_default_endian)
 {
-	return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 0);
+	return __kvmppc_handle_load(vcpu, rt, bytes, is_default_endian, 0);
 }
 EXPORT_SYMBOL_GPL(kvmppc_handle_load);
 
 /* Same as above, but sign extends */
-int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_loads(struct kvm_vcpu *vcpu,
 			unsigned int rt, unsigned int bytes,
 			int is_default_endian)
 {
-	return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 1);
+	return __kvmppc_handle_load(vcpu, rt, bytes, is_default_endian, 1);
 }
 
 #ifdef CONFIG_VSX
-int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu,
 			unsigned int rt, unsigned int bytes,
 			int is_default_endian, int mmio_sign_extend)
 {
@@ -1291,13 +1292,13 @@ int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		return EMULATE_FAIL;
 
 	while (vcpu->arch.mmio_vsx_copy_nums) {
-		emulated = __kvmppc_handle_load(run, vcpu, rt, bytes,
+		emulated = __kvmppc_handle_load(vcpu, rt, bytes,
 			is_default_endian, mmio_sign_extend);
 
 		if (emulated != EMULATE_DONE)
 			break;
 
-		vcpu->arch.paddr_accessed += run->mmio.len;
+		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
 
 		vcpu->arch.mmio_vsx_copy_nums--;
 		vcpu->arch.mmio_vsx_offset++;
@@ -1306,9 +1307,10 @@ int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 }
 #endif /* CONFIG_VSX */
 
-int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_store(struct kvm_vcpu *vcpu,
 			u64 val, unsigned int bytes, int is_default_endian)
 {
+	struct kvm_run *run = vcpu->run;
 	void *data = run->mmio.data;
 	int idx, ret;
 	bool host_swabbed;
@@ -1422,7 +1424,7 @@ static inline int kvmppc_get_vsr_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
 	return result;
 }
 
-int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu,
 			int rs, unsigned int bytes, int is_default_endian)
 {
 	u64 val;
@@ -1438,13 +1440,13 @@ int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		if (kvmppc_get_vsr_data(vcpu, rs, &val) == -1)
 			return EMULATE_FAIL;
 
-		emulated = kvmppc_handle_store(run, vcpu,
+		emulated = kvmppc_handle_store(vcpu,
 			 val, bytes, is_default_endian);
 
 		if (emulated != EMULATE_DONE)
 			break;
 
-		vcpu->arch.paddr_accessed += run->mmio.len;
+		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
 
 		vcpu->arch.mmio_vsx_copy_nums--;
 		vcpu->arch.mmio_vsx_offset++;
@@ -1453,19 +1455,19 @@ int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu,
-			struct kvm_run *run)
+static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	enum emulation_result emulated = EMULATE_FAIL;
 	int r;
 
 	vcpu->arch.paddr_accessed += run->mmio.len;
 
 	if (!vcpu->mmio_is_write) {
-		emulated = kvmppc_handle_vsx_load(run, vcpu, vcpu->arch.io_gpr,
+		emulated = kvmppc_handle_vsx_load(vcpu, vcpu->arch.io_gpr,
 			 run->mmio.len, 1, vcpu->arch.mmio_sign_extend);
 	} else {
-		emulated = kvmppc_handle_vsx_store(run, vcpu,
+		emulated = kvmppc_handle_vsx_store(vcpu,
 			 vcpu->arch.io_gpr, run->mmio.len, 1);
 	}
 
@@ -1489,7 +1491,7 @@ static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu,
 #endif /* CONFIG_VSX */
 
 #ifdef CONFIG_ALTIVEC
-int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
 		unsigned int rt, unsigned int bytes, int is_default_endian)
 {
 	enum emulation_result emulated = EMULATE_DONE;
@@ -1498,13 +1500,13 @@ int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		return EMULATE_FAIL;
 
 	while (vcpu->arch.mmio_vmx_copy_nums) {
-		emulated = __kvmppc_handle_load(run, vcpu, rt, bytes,
+		emulated = __kvmppc_handle_load(vcpu, rt, bytes,
 				is_default_endian, 0);
 
 		if (emulated != EMULATE_DONE)
 			break;
 
-		vcpu->arch.paddr_accessed += run->mmio.len;
+		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
 		vcpu->arch.mmio_vmx_copy_nums--;
 		vcpu->arch.mmio_vmx_offset++;
 	}
@@ -1584,7 +1586,7 @@ int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val)
 	return result;
 }
 
-int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
+int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
 		unsigned int rs, unsigned int bytes, int is_default_endian)
 {
 	u64 val = 0;
@@ -1619,12 +1621,12 @@ int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			return EMULATE_FAIL;
 		}
 
-		emulated = kvmppc_handle_store(run, vcpu, val, bytes,
+		emulated = kvmppc_handle_store(vcpu, val, bytes,
 				is_default_endian);
 		if (emulated != EMULATE_DONE)
 			break;
 
-		vcpu->arch.paddr_accessed += run->mmio.len;
+		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
 		vcpu->arch.mmio_vmx_copy_nums--;
 		vcpu->arch.mmio_vmx_offset++;
 	}
@@ -1632,19 +1634,19 @@ int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return emulated;
 }
 
-static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu,
-		struct kvm_run *run)
+static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	enum emulation_result emulated = EMULATE_FAIL;
 	int r;
 
 	vcpu->arch.paddr_accessed += run->mmio.len;
 
 	if (!vcpu->mmio_is_write) {
-		emulated = kvmppc_handle_vmx_load(run, vcpu,
+		emulated = kvmppc_handle_vmx_load(vcpu,
 				vcpu->arch.io_gpr, run->mmio.len, 1);
 	} else {
-		emulated = kvmppc_handle_vmx_store(run, vcpu,
+		emulated = kvmppc_handle_vmx_store(vcpu,
 				vcpu->arch.io_gpr, run->mmio.len, 1);
 	}
 
@@ -1774,7 +1776,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	if (vcpu->mmio_needed) {
 		vcpu->mmio_needed = 0;
 		if (!vcpu->mmio_is_write)
-			kvmppc_complete_mmio_load(vcpu, run);
+			kvmppc_complete_mmio_load(vcpu);
 #ifdef CONFIG_VSX
 		if (vcpu->arch.mmio_vsx_copy_nums > 0) {
 			vcpu->arch.mmio_vsx_copy_nums--;
@@ -1782,7 +1784,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		}
 
 		if (vcpu->arch.mmio_vsx_copy_nums > 0) {
-			r = kvmppc_emulate_mmio_vsx_loadstore(vcpu, run);
+			r = kvmppc_emulate_mmio_vsx_loadstore(vcpu);
 			if (r == RESUME_HOST) {
 				vcpu->mmio_needed = 1;
 				goto out;
@@ -1796,7 +1798,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		}
 
 		if (vcpu->arch.mmio_vmx_copy_nums > 0) {
-			r = kvmppc_emulate_mmio_vmx_loadstore(vcpu, run);
+			r = kvmppc_emulate_mmio_vmx_loadstore(vcpu);
 			if (r == RESUME_HOST) {
 				vcpu->mmio_needed = 1;
 				goto out;
@@ -1829,7 +1831,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	if (run->immediate_exit)
 		r = -EINTR;
 	else
-		r = kvmppc_vcpu_run(run, vcpu);
+		r = kvmppc_vcpu_run(vcpu);
 
 	kvm_sigset_deactivate(vcpu);
 
diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
index 8a1e3b0047f1..4a61a971c34e 100644
--- a/arch/powerpc/kvm/trace_hv.h
+++ b/arch/powerpc/kvm/trace_hv.h
@@ -472,9 +472,9 @@ TRACE_EVENT(kvmppc_run_vcpu_enter,
 );
 
 TRACE_EVENT(kvmppc_run_vcpu_exit,
-	TP_PROTO(struct kvm_vcpu *vcpu, struct kvm_run *run),
+	TP_PROTO(struct kvm_vcpu *vcpu),
 
-	TP_ARGS(vcpu, run),
+	TP_ARGS(vcpu),
 
 	TP_STRUCT__entry(
 		__field(int,		vcpu_id)
@@ -484,7 +484,7 @@ TRACE_EVENT(kvmppc_run_vcpu_exit,
 
 	TP_fast_assign(
 		__entry->vcpu_id  = vcpu->vcpu_id;
-		__entry->exit     = run->exit_reason;
+		__entry->exit     = vcpu->run->exit_reason;
 		__entry->ret      = vcpu->arch.ret;
 	),
 
-- 
2.17.1



* [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly
  2020-04-27  4:35 [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (3 preceding siblings ...)
  2020-04-27  4:35 ` [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters Tianjia Zhang
@ 2020-04-27  4:35 ` Tianjia Zhang
  2020-05-26  5:59   ` Paul Mackerras
  2020-04-27  4:35 ` [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: Tianjia Zhang @ 2020-04-27  4:35 UTC (permalink / raw)
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current kvm code, the 'kvm_vcpu' structure already carries a
pointer to its 'kvm_run' ('vcpu->run'). For historical reasons, many
kvm-related functions still take both 'kvm_run' and 'kvm_vcpu'
parameters, even though the former is reachable from the latter. This
patch uniformly removes the remaining redundant 'kvm_run' parameters.
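
For readers following the assembly side, the C-level shape of the new
calling convention is sketched below. This is a toy model with
stand-in types and names, not kernel code; only the single-pointer
convention mirrors the patch (on ppc, the vcpu pointer now arrives
alone in r3 instead of the old r3=run/r4=vcpu pair):

#include <stdio.h>

struct kvm_run  { int exit_reason; };
struct kvm_vcpu { struct kvm_run *run; };

/* Only the vcpu pointer crosses the call boundary; kvm_run is
 * recovered from vcpu->run whenever it is needed. */
static int toy_vcpu_run(struct kvm_vcpu *vcpu)
{
	return vcpu->run->exit_reason;
}

int main(void)
{
	struct kvm_run run = { .exit_reason = 0 };
	struct kvm_vcpu vcpu = { .run = &run };

	return toy_vcpu_run(&vcpu);
}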

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/powerpc/include/asm/kvm_ppc.h    |  2 +-
 arch/powerpc/kvm/book3s_interrupts.S  | 17 ++++++++---------
 arch/powerpc/kvm/book3s_pr.c          |  9 ++++-----
 arch/powerpc/kvm/booke.c              |  9 ++++-----
 arch/powerpc/kvm/booke_interrupts.S   |  9 ++++-----
 arch/powerpc/kvm/bookehv_interrupts.S | 10 +++++-----
 6 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ccf66b3a4c1d..0a056c64c317 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -59,7 +59,7 @@ enum xlate_readwrite {
 };
 
 extern int kvmppc_vcpu_run(struct kvm_vcpu *vcpu);
-extern int __kvmppc_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern int __kvmppc_vcpu_run(struct kvm_vcpu *vcpu);
 extern void kvmppc_handler_highmem(void);
 
 extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index f7ad99d972ce..0eff749d8027 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -55,8 +55,7 @@
  ****************************************************************************/
 
 /* Registers:
- *  r3: kvm_run pointer
- *  r4: vcpu pointer
+ *  r3: vcpu pointer
  */
 _GLOBAL(__kvmppc_vcpu_run)
 
@@ -68,8 +67,8 @@ kvm_start_entry:
 	/* Save host state to the stack */
 	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
 
-	/* Save r3 (kvm_run) and r4 (vcpu) */
-	SAVE_2GPRS(3, r1)
+	/* Save r3 (vcpu) */
+	SAVE_GPR(3, r1)
 
 	/* Save non-volatile registers (r14 - r31) */
 	SAVE_NVGPRS(r1)
@@ -82,11 +81,11 @@ kvm_start_entry:
 	PPC_STL	r0, _LINK(r1)
 
 	/* Load non-volatile guest state from the vcpu */
-	VCPU_LOAD_NVGPRS(r4)
+	VCPU_LOAD_NVGPRS(r3)
 
 kvm_start_lightweight:
 	/* Copy registers into shadow vcpu so we can access them in real mode */
-	mr	r3, r4
+	mr	r4, r3
 	bl	FUNC(kvmppc_copy_to_svcpu)
 	nop
 	REST_GPR(4, r1)
@@ -191,10 +190,10 @@ after_sprg3_load:
 	PPC_STL	r31, VCPU_GPR(R31)(r7)
 
 	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
-	lwz	r5, VCPU_TRAP(r7)
+	lwz	r4, VCPU_TRAP(r7)
 
-	/* Restore r3 (kvm_run) and r4 (vcpu) */
-	REST_2GPRS(3, r1)
+	/* Restore r3 (vcpu) */
+	REST_GPR(3, r1)
 	bl	FUNC(kvmppc_handle_exit_pr)
 
 	/* If RESUME_GUEST, get back in the loop */
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index ef54f917bdaf..01c8fe5abe0d 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1151,9 +1151,9 @@ static int kvmppc_exit_pr_progint(struct kvm_vcpu *vcpu, unsigned int exit_nr)
 	return r;
 }
 
-int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
-			  unsigned int exit_nr)
+int kvmppc_handle_exit_pr(struct kvm_vcpu *vcpu, unsigned int exit_nr)
 {
+	struct kvm_run *run = vcpu->run;
 	int r = RESUME_HOST;
 	int s;
 
@@ -1826,7 +1826,6 @@ static void kvmppc_core_vcpu_free_pr(struct kvm_vcpu *vcpu)
 
 static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	int ret;
 #ifdef CONFIG_ALTIVEC
 	unsigned long uninitialized_var(vrsave);
@@ -1834,7 +1833,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu)
 
 	/* Check if we can run the vcpu at all */
 	if (!vcpu->arch.sane) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = -EINVAL;
 		goto out;
 	}
@@ -1861,7 +1860,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu)
 
 	kvmppc_fix_ee_before_entry();
 
-	ret = __kvmppc_vcpu_run(run, vcpu);
+	ret = __kvmppc_vcpu_run(vcpu);
 
 	kvmppc_clear_debug(vcpu);
 
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 26b3f5900b72..942039aae598 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -732,12 +732,11 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu)
 
 int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	int ret, s;
 	struct debug_reg debug;
 
 	if (!vcpu->arch.sane) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return -EINVAL;
 	}
 
@@ -779,7 +778,7 @@ int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
 	vcpu->arch.pgdir = vcpu->kvm->mm->pgd;
 	kvmppc_fix_ee_before_entry();
 
-	ret = __kvmppc_vcpu_run(run, vcpu);
+	ret = __kvmppc_vcpu_run(vcpu);
 
 	/* No need for guest_exit. It's done in handle_exit.
 	   We also get here with interrupts enabled. */
@@ -983,9 +982,9 @@ static int kvmppc_resume_inst_load(struct kvm_vcpu *vcpu,
  *
  * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
  */
-int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                       unsigned int exit_nr)
+int kvmppc_handle_exit(struct kvm_vcpu *vcpu, unsigned int exit_nr)
 {
+	struct kvm_run *run = vcpu->run;
 	int r = RESUME_HOST;
 	int s;
 	int idx;
diff --git a/arch/powerpc/kvm/booke_interrupts.S b/arch/powerpc/kvm/booke_interrupts.S
index 2e56ab5a5f55..6fa82efe833b 100644
--- a/arch/powerpc/kvm/booke_interrupts.S
+++ b/arch/powerpc/kvm/booke_interrupts.S
@@ -237,7 +237,7 @@ _GLOBAL(kvmppc_resume_host)
 	/* Switch to kernel stack and jump to handler. */
 	LOAD_REG_ADDR(r3, kvmppc_handle_exit)
 	mtctr	r3
-	lwz	r3, HOST_RUN(r1)
+	mr	r3, r4
 	lwz	r2, HOST_R2(r1)
 	mr	r14, r4 /* Save vcpu pointer. */
 
@@ -337,15 +337,14 @@ heavyweight_exit:
 
 
 /* Registers:
- *  r3: kvm_run pointer
- *  r4: vcpu pointer
+ *  r3: vcpu pointer
  */
 _GLOBAL(__kvmppc_vcpu_run)
 	stwu	r1, -HOST_STACK_SIZE(r1)
-	stw	r1, VCPU_HOST_STACK(r4)	/* Save stack pointer to vcpu. */
+	stw	r1, VCPU_HOST_STACK(r3)	/* Save stack pointer to vcpu. */
 
 	/* Save host state to stack. */
-	stw	r3, HOST_RUN(r1)
+	mr	r4, r3
 	mflr	r3
 	stw	r3, HOST_STACK_LR(r1)
 	mfcr	r5
diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S
index c577ba4b3169..8262c14fc9e6 100644
--- a/arch/powerpc/kvm/bookehv_interrupts.S
+++ b/arch/powerpc/kvm/bookehv_interrupts.S
@@ -434,9 +434,10 @@ _GLOBAL(kvmppc_resume_host)
 #endif
 
 	/* Switch to kernel stack and jump to handler. */
-	PPC_LL	r3, HOST_RUN(r1)
+	mr	r3, r4
 	mr	r5, r14 /* intno */
 	mr	r14, r4 /* Save vcpu pointer. */
+	mr	r4, r5
 	bl	kvmppc_handle_exit
 
 	/* Restore vcpu pointer and the nonvolatiles we used. */
@@ -525,15 +526,14 @@ heavyweight_exit:
 	blr
 
 /* Registers:
- *  r3: kvm_run pointer
- *  r4: vcpu pointer
+ *  r3: vcpu pointer
  */
 _GLOBAL(__kvmppc_vcpu_run)
 	stwu	r1, -HOST_STACK_SIZE(r1)
-	PPC_STL	r1, VCPU_HOST_STACK(r4)	/* Save stack pointer to vcpu. */
+	PPC_STL	r1, VCPU_HOST_STACK(r3)	/* Save stack pointer to vcpu. */
 
 	/* Save host state to stack. */
-	PPC_STL	r3, HOST_RUN(r1)
+	mr	r4, r3
 	mflr	r3
 	mfcr	r5
 	PPC_STL	r3, HOST_STACK_LR(r1)
-- 
2.17.1



* [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (4 preceding siblings ...)
  2020-04-27  4:35 ` [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly Tianjia Zhang
@ 2020-04-27  4:35 ` Tianjia Zhang
  2020-04-27  5:40   ` Huacai Chen
  2020-04-27  4:35 ` [PATCH v4 7/7] KVM: MIPS: clean up redundant kvm_run parameters in assembly Tianjia Zhang
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 29+ messages in thread
From: Tianjia Zhang @ 2020-04-27  4:35 UTC (permalink / raw)
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current kvm code, the 'kvm_vcpu' structure already carries a
pointer to its 'kvm_run' ('vcpu->run'). For historical reasons, many
kvm-related functions still take both 'kvm_run' and 'kvm_vcpu'
parameters, even though the former is reachable from the latter. This
patch uniformly removes the remaining redundant 'kvm_run' parameters.
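
The pattern applied throughout the MIPS code is sketched in plain C
below. This is a toy model with stand-in types and names, not the
MIPS KVM code: helpers that previously received 'run' now derive it
from the vcpu at the top of the function, and every call site drops
one argument:

#include <stdio.h>

struct kvm_run  { int exit_reason; };
struct kvm_vcpu { struct kvm_run *run; };

/* Old shape: emulate(cause, run, vcpu); new shape: emulate(cause, vcpu),
 * with run derived locally, once, where it is actually used. */
static void toy_emulate(unsigned int cause, struct kvm_vcpu *vcpu)
{
	struct kvm_run *run = vcpu->run;

	run->exit_reason = (int)cause;	/* stand-in for real emulation */
}

int main(void)
{
	struct kvm_run run = { 0 };
	struct kvm_vcpu vcpu = { .run = &run };

	toy_emulate(13u, &vcpu);
	printf("exit_reason=%d\n", run.exit_reason);
	return 0;
}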

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/mips/include/asm/kvm_host.h |  28 +-------
 arch/mips/kvm/emulate.c          |  59 ++++++----------
 arch/mips/kvm/mips.c             |  11 ++-
 arch/mips/kvm/trap_emul.c        | 114 ++++++++++++++-----------------
 arch/mips/kvm/vz.c               |  26 +++----
 5 files changed, 87 insertions(+), 151 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 2c343c346b79..971439297cea 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -812,8 +812,8 @@ struct kvm_mips_callbacks {
 			   const struct kvm_one_reg *reg, s64 v);
 	int (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
 	int (*vcpu_put)(struct kvm_vcpu *vcpu, int cpu);
-	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
-	void (*vcpu_reenter)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	int (*vcpu_run)(struct kvm_vcpu *vcpu);
+	void (*vcpu_reenter)(struct kvm_vcpu *vcpu);
 };
 extern struct kvm_mips_callbacks *kvm_mips_callbacks;
 int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
@@ -868,7 +868,6 @@ extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 
 extern enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
 						     u32 *opc,
-						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu,
 						     bool write_fault);
 
@@ -975,83 +974,67 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
 
 extern enum emulation_result kvm_mips_emulate_inst(u32 cause,
 						   u32 *opc,
-						   struct kvm_run *run,
 						   struct kvm_vcpu *vcpu);
 
 long kvm_mips_guest_exception_base(struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_syscall(u32 cause,
 						      u32 *opc,
-						      struct kvm_run *run,
 						      struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
 							 u32 *opc,
-							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
 							u32 *opc,
-							struct kvm_run *run,
 							struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
 							 u32 *opc,
-							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
 							u32 *opc,
-							struct kvm_run *run,
 							struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
 						     u32 *opc,
-						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
 						      u32 *opc,
-						      struct kvm_run *run,
 						      struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_handle_ri(u32 cause,
 						u32 *opc,
-						struct kvm_run *run,
 						struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
 						     u32 *opc,
-						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
 						     u32 *opc,
-						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
 						       u32 *opc,
-						       struct kvm_run *run,
 						       struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
 							 u32 *opc,
-							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
 						      u32 *opc,
-						      struct kvm_run *run,
 						      struct kvm_vcpu *vcpu);
 
 extern enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
 							 u32 *opc,
-							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
-							 struct kvm_run *run);
+extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu);
 
 u32 kvm_mips_read_count(struct kvm_vcpu *vcpu);
 void kvm_mips_write_count(struct kvm_vcpu *vcpu, u32 count);
@@ -1080,26 +1063,21 @@ static inline void kvm_vz_lose_htimer(struct kvm_vcpu *vcpu) {}
 
 enum emulation_result kvm_mips_check_privilege(u32 cause,
 					       u32 *opc,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu);
 
 enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
 					     u32 *opc,
 					     u32 cause,
-					     struct kvm_run *run,
 					     struct kvm_vcpu *vcpu);
 enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
 					   u32 *opc,
 					   u32 cause,
-					   struct kvm_run *run,
 					   struct kvm_vcpu *vcpu);
 enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
 					     u32 cause,
-					     struct kvm_run *run,
 					     struct kvm_vcpu *vcpu);
 enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
 					    u32 cause,
-					    struct kvm_run *run,
 					    struct kvm_vcpu *vcpu);
 
 /* COP0 */
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index 754094b40a75..36718b8dce21 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -1262,7 +1262,6 @@ unsigned int kvm_mips_config5_wrmask(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
 					   u32 *opc, u32 cause,
-					   struct kvm_run *run,
 					   struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -1597,11 +1596,11 @@ enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
 
 enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
 					     u32 cause,
-					     struct kvm_run *run,
 					     struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 	u32 rt;
+	struct kvm_run *run = vcpu->run;
 	void *data = run->mmio.data;
 	unsigned long curr_pc;
 
@@ -1678,9 +1677,9 @@ enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
 }
 
 enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
-					    u32 cause, struct kvm_run *run,
-					    struct kvm_vcpu *vcpu)
+					    u32 cause, struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	enum emulation_result er;
 	unsigned long curr_pc;
 	u32 op, rt;
@@ -1752,7 +1751,6 @@ enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
 static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
 						     unsigned long curr_pc,
 						     unsigned long addr,
-						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu,
 						     u32 cause)
 {
@@ -1780,13 +1778,13 @@ static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
 			/* no matching guest TLB */
 			vcpu->arch.host_cp0_badvaddr = addr;
 			vcpu->arch.pc = curr_pc;
-			kvm_mips_emulate_tlbmiss_ld(cause, NULL, run, vcpu);
+			kvm_mips_emulate_tlbmiss_ld(cause, NULL, vcpu);
 			return EMULATE_EXCEPT;
 		case KVM_MIPS_TLBINV:
 			/* invalid matching guest TLB */
 			vcpu->arch.host_cp0_badvaddr = addr;
 			vcpu->arch.pc = curr_pc;
-			kvm_mips_emulate_tlbinv_ld(cause, NULL, run, vcpu);
+			kvm_mips_emulate_tlbinv_ld(cause, NULL, vcpu);
 			return EMULATE_EXCEPT;
 		default:
 			break;
@@ -1796,7 +1794,6 @@ static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
 
 enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
 					     u32 *opc, u32 cause,
-					     struct kvm_run *run,
 					     struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -1886,7 +1883,7 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
 		 * guest's behalf.
 		 */
 		er = kvm_mips_guest_cache_op(protected_writeback_dcache_line,
-					     curr_pc, va, run, vcpu, cause);
+					     curr_pc, va, vcpu, cause);
 		if (er != EMULATE_DONE)
 			goto done;
 #ifdef CONFIG_KVM_MIPS_DYN_TRANS
@@ -1899,11 +1896,11 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
 	} else if (op_inst == Hit_Invalidate_I) {
 		/* Perform the icache synchronisation on the guest's behalf */
 		er = kvm_mips_guest_cache_op(protected_writeback_dcache_line,
-					     curr_pc, va, run, vcpu, cause);
+					     curr_pc, va, vcpu, cause);
 		if (er != EMULATE_DONE)
 			goto done;
 		er = kvm_mips_guest_cache_op(protected_flush_icache_line,
-					     curr_pc, va, run, vcpu, cause);
+					     curr_pc, va, vcpu, cause);
 		if (er != EMULATE_DONE)
 			goto done;
 
@@ -1929,7 +1926,6 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
 }
 
 enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
-					    struct kvm_run *run,
 					    struct kvm_vcpu *vcpu)
 {
 	union mips_instruction inst;
@@ -1945,14 +1941,14 @@ enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
 
 	switch (inst.r_format.opcode) {
 	case cop0_op:
-		er = kvm_mips_emulate_CP0(inst, opc, cause, run, vcpu);
+		er = kvm_mips_emulate_CP0(inst, opc, cause, vcpu);
 		break;
 
 #ifndef CONFIG_CPU_MIPSR6
 	case cache_op:
 		++vcpu->stat.cache_exits;
 		trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
-		er = kvm_mips_emulate_cache(inst, opc, cause, run, vcpu);
+		er = kvm_mips_emulate_cache(inst, opc, cause, vcpu);
 		break;
 #else
 	case spec3_op:
@@ -1960,7 +1956,7 @@ enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
 		case cache6_op:
 			++vcpu->stat.cache_exits;
 			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
-			er = kvm_mips_emulate_cache(inst, opc, cause, run,
+			er = kvm_mips_emulate_cache(inst, opc, cause,
 						    vcpu);
 			break;
 		default:
@@ -2000,7 +1996,6 @@ long kvm_mips_guest_exception_base(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emulate_syscall(u32 cause,
 					       u32 *opc,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2035,7 +2030,6 @@ enum emulation_result kvm_mips_emulate_syscall(u32 cause,
 
 enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
 						  u32 *opc,
-						  struct kvm_run *run,
 						  struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2079,7 +2073,6 @@ enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
 
 enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
 						 u32 *opc,
-						 struct kvm_run *run,
 						 struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2121,7 +2114,6 @@ enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
 
 enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
 						  u32 *opc,
-						  struct kvm_run *run,
 						  struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2163,7 +2155,6 @@ enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
 
 enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
 						 u32 *opc,
-						 struct kvm_run *run,
 						 struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2204,7 +2195,6 @@ enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
 
 enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
 					      u32 *opc,
-					      struct kvm_run *run,
 					      struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2244,7 +2234,6 @@ enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
 
 enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
 					       u32 *opc,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2273,7 +2262,6 @@ enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
 					      u32 *opc,
-					      struct kvm_run *run,
 					      struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2308,7 +2296,6 @@ enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
 					      u32 *opc,
-					      struct kvm_run *run,
 					      struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2343,7 +2330,6 @@ enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
 						u32 *opc,
-						struct kvm_run *run,
 						struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2378,7 +2364,6 @@ enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
 						  u32 *opc,
-						  struct kvm_run *run,
 						  struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2413,7 +2398,6 @@ enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
 					       u32 *opc,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2448,7 +2432,6 @@ enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
 
 enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
 						  u32 *opc,
-						  struct kvm_run *run,
 						  struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2482,7 +2465,6 @@ enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
 }
 
 enum emulation_result kvm_mips_handle_ri(u32 cause, u32 *opc,
-					 struct kvm_run *run,
 					 struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -2571,12 +2553,12 @@ enum emulation_result kvm_mips_handle_ri(u32 cause, u32 *opc,
 	 * branch target), and pass the RI exception to the guest OS.
 	 */
 	vcpu->arch.pc = curr_pc;
-	return kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
+	return kvm_mips_emulate_ri_exc(cause, opc, vcpu);
 }
 
-enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
-						  struct kvm_run *run)
+enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
 	enum emulation_result er = EMULATE_DONE;
 
@@ -2622,7 +2604,6 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
 
 static enum emulation_result kvm_mips_emulate_exc(u32 cause,
 						  u32 *opc,
-						  struct kvm_run *run,
 						  struct kvm_vcpu *vcpu)
 {
 	u32 exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
@@ -2660,7 +2641,6 @@ static enum emulation_result kvm_mips_emulate_exc(u32 cause,
 
 enum emulation_result kvm_mips_check_privilege(u32 cause,
 					       u32 *opc,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -2742,7 +2722,7 @@ enum emulation_result kvm_mips_check_privilege(u32 cause,
 	}
 
 	if (er == EMULATE_PRIV_FAIL)
-		kvm_mips_emulate_exc(cause, opc, run, vcpu);
+		kvm_mips_emulate_exc(cause, opc, vcpu);
 
 	return er;
 }
@@ -2756,7 +2736,6 @@ enum emulation_result kvm_mips_check_privilege(u32 cause,
  */
 enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
 					      u32 *opc,
-					      struct kvm_run *run,
 					      struct kvm_vcpu *vcpu,
 					      bool write_fault)
 {
@@ -2780,9 +2759,9 @@ enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
 		       KVM_ENTRYHI_ASID));
 	if (index < 0) {
 		if (exccode == EXCCODE_TLBL) {
-			er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
+			er = kvm_mips_emulate_tlbmiss_ld(cause, opc, vcpu);
 		} else if (exccode == EXCCODE_TLBS) {
-			er = kvm_mips_emulate_tlbmiss_st(cause, opc, run, vcpu);
+			er = kvm_mips_emulate_tlbmiss_st(cause, opc, vcpu);
 		} else {
 			kvm_err("%s: invalid exc code: %d\n", __func__,
 				exccode);
@@ -2797,10 +2776,10 @@ enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
 		 */
 		if (!TLB_IS_VALID(*tlb, va)) {
 			if (exccode == EXCCODE_TLBL) {
-				er = kvm_mips_emulate_tlbinv_ld(cause, opc, run,
+				er = kvm_mips_emulate_tlbinv_ld(cause, opc,
 								vcpu);
 			} else if (exccode == EXCCODE_TLBS) {
-				er = kvm_mips_emulate_tlbinv_st(cause, opc, run,
+				er = kvm_mips_emulate_tlbinv_st(cause, opc,
 								vcpu);
 			} else {
 				kvm_err("%s: invalid exc code: %d\n", __func__,
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index ec24adf4857e..9710477a9827 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -441,7 +441,6 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	int r = -EINTR;
 
 	vcpu_load(vcpu);
@@ -450,11 +449,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 	if (vcpu->mmio_needed) {
 		if (!vcpu->mmio_is_write)
-			kvm_mips_complete_mmio_load(vcpu, run);
+			kvm_mips_complete_mmio_load(vcpu);
 		vcpu->mmio_needed = 0;
 	}
 
-	if (run->immediate_exit)
+	if (vcpu->run->immediate_exit)
 		goto out;
 
 	lose_fpu(1);
@@ -471,7 +470,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 */
 	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-	r = kvm_mips_callbacks->vcpu_run(run, vcpu);
+	r = kvm_mips_callbacks->vcpu_run(vcpu);
 
 	trace_kvm_out(vcpu);
 	guest_exit_irqoff();
@@ -1225,7 +1224,7 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		 * end up causing an exception to be delivered to the Guest
 		 * Kernel
 		 */
-		er = kvm_mips_check_privilege(cause, opc, run, vcpu);
+		er = kvm_mips_check_privilege(cause, opc, vcpu);
 		if (er == EMULATE_PRIV_FAIL) {
 			goto skip_emul;
 		} else if (er == EMULATE_FAIL) {
@@ -1374,7 +1373,7 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-		kvm_mips_callbacks->vcpu_reenter(run, vcpu);
+		kvm_mips_callbacks->vcpu_reenter(vcpu);
 
 		/*
 		 * If FPU / MSA are enabled (i.e. the guest's FPU / MSA context
diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
index 5a11e83dffe6..d822f3aee3dc 100644
--- a/arch/mips/kvm/trap_emul.c
+++ b/arch/mips/kvm/trap_emul.c
@@ -67,7 +67,6 @@ static int kvm_trap_emul_no_handler(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -81,14 +80,14 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 			 * Unusable/no FPU in guest:
 			 * deliver guest COP1 Unusable Exception
 			 */
-			er = kvm_mips_emulate_fpu_exc(cause, opc, run, vcpu);
+			er = kvm_mips_emulate_fpu_exc(cause, opc, vcpu);
 		} else {
 			/* Restore FPU state */
 			kvm_own_fpu(vcpu);
 			er = EMULATE_DONE;
 		}
 	} else {
-		er = kvm_mips_emulate_inst(cause, opc, run, vcpu);
+		er = kvm_mips_emulate_inst(cause, opc, vcpu);
 	}
 
 	switch (er) {
@@ -97,12 +96,12 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 		break;
 
 	case EMULATE_FAIL:
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 		break;
 
 	case EMULATE_WAIT:
-		run->exit_reason = KVM_EXIT_INTR;
+		vcpu->run->exit_reason = KVM_EXIT_INTR;
 		ret = RESUME_HOST;
 		break;
 
@@ -116,8 +115,7 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
-static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
-			     struct kvm_vcpu *vcpu)
+static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 	union mips_instruction inst;
@@ -125,7 +123,7 @@ static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
 
 	/* A code fetch fault doesn't count as an MMIO */
 	if (kvm_is_ifetch_fault(&vcpu->arch)) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return RESUME_HOST;
 	}
 
@@ -134,23 +132,22 @@ static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
 		opc += 1;
 	err = kvm_get_badinstr(opc, vcpu, &inst.word);
 	if (err) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return RESUME_HOST;
 	}
 
 	/* Emulate the load */
-	er = kvm_mips_emulate_load(inst, cause, run, vcpu);
+	er = kvm_mips_emulate_load(inst, cause, vcpu);
 	if (er == EMULATE_FAIL) {
 		kvm_err("Emulate load from MMIO space failed\n");
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 	} else {
-		run->exit_reason = KVM_EXIT_MMIO;
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
 	}
 	return RESUME_HOST;
 }
 
-static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_run *run,
-			      struct kvm_vcpu *vcpu)
+static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er;
 	union mips_instruction inst;
@@ -161,34 +158,33 @@ static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_run *run,
 		opc += 1;
 	err = kvm_get_badinstr(opc, vcpu, &inst.word);
 	if (err) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return RESUME_HOST;
 	}
 
 	/* Emulate the store */
-	er = kvm_mips_emulate_store(inst, cause, run, vcpu);
+	er = kvm_mips_emulate_store(inst, cause, vcpu);
 	if (er == EMULATE_FAIL) {
 		kvm_err("Emulate store to MMIO space failed\n");
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 	} else {
-		run->exit_reason = KVM_EXIT_MMIO;
+		vcpu->run->exit_reason = KVM_EXIT_MMIO;
 	}
 	return RESUME_HOST;
 }
 
-static int kvm_mips_bad_access(u32 cause, u32 *opc, struct kvm_run *run,
+static int kvm_mips_bad_access(u32 cause, u32 *opc,
 			       struct kvm_vcpu *vcpu, bool store)
 {
 	if (store)
-		return kvm_mips_bad_store(cause, opc, run, vcpu);
+		return kvm_mips_bad_store(cause, opc, vcpu);
 	else
-		return kvm_mips_bad_load(cause, opc, run, vcpu);
+		return kvm_mips_bad_load(cause, opc, vcpu);
 }
 
 static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	u32 cause = vcpu->arch.host_cp0_cause;
@@ -212,12 +208,12 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 		 * They would indicate stale host TLB entries.
 		 */
 		if (unlikely(index < 0)) {
-			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			return RESUME_HOST;
 		}
 		tlb = vcpu->arch.guest_tlb + index;
 		if (unlikely(!TLB_IS_VALID(*tlb, badvaddr))) {
-			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			return RESUME_HOST;
 		}
 
@@ -226,23 +222,23 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 		 * exception. Relay that on to the guest so it can handle it.
 		 */
 		if (!TLB_IS_DIRTY(*tlb, badvaddr)) {
-			kvm_mips_emulate_tlbmod(cause, opc, run, vcpu);
+			kvm_mips_emulate_tlbmod(cause, opc, vcpu);
 			return RESUME_GUEST;
 		}
 
 		if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb, badvaddr,
 							 true))
 			/* Not writable, needs handling as MMIO */
-			return kvm_mips_bad_store(cause, opc, run, vcpu);
+			return kvm_mips_bad_store(cause, opc, vcpu);
 		return RESUME_GUEST;
 	} else if (KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG0) {
 		if (kvm_mips_handle_kseg0_tlb_fault(badvaddr, vcpu, true) < 0)
 			/* Not writable, needs handling as MMIO */
-			return kvm_mips_bad_store(cause, opc, run, vcpu);
+			return kvm_mips_bad_store(cause, opc, vcpu);
 		return RESUME_GUEST;
 	} else {
 		/* host kernel addresses are all handled as MMIO */
-		return kvm_mips_bad_store(cause, opc, run, vcpu);
+		return kvm_mips_bad_store(cause, opc, vcpu);
 	}
 }
 
@@ -276,7 +272,7 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
 		 *     into the shadow host TLB
 		 */
 
-		er = kvm_mips_handle_tlbmiss(cause, opc, run, vcpu, store);
+		er = kvm_mips_handle_tlbmiss(cause, opc, vcpu, store);
 		if (er == EMULATE_DONE)
 			ret = RESUME_GUEST;
 		else {
@@ -289,14 +285,14 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
 		 * not expect to ever get them
 		 */
 		if (kvm_mips_handle_kseg0_tlb_fault(badvaddr, vcpu, store) < 0)
-			ret = kvm_mips_bad_access(cause, opc, run, vcpu, store);
+			ret = kvm_mips_bad_access(cause, opc, vcpu, store);
 	} else if (KVM_GUEST_KERNEL_MODE(vcpu)
 		   && (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1)) {
 		/*
 		 * With EVA we may get a TLB exception instead of an address
 		 * error when the guest performs MMIO to KSeg1 addresses.
 		 */
-		ret = kvm_mips_bad_access(cause, opc, run, vcpu, store);
+		ret = kvm_mips_bad_access(cause, opc, vcpu, store);
 	} else {
 		kvm_err("Illegal TLB %s fault address , cause %#x, PC: %p, BadVaddr: %#lx\n",
 			store ? "ST" : "LD", cause, opc, badvaddr);
@@ -320,7 +316,6 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	u32 cause = vcpu->arch.host_cp0_cause;
@@ -328,11 +323,11 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 
 	if (KVM_GUEST_KERNEL_MODE(vcpu)
 	    && (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1)) {
-		ret = kvm_mips_bad_store(cause, opc, run, vcpu);
+		ret = kvm_mips_bad_store(cause, opc, vcpu);
 	} else {
 		kvm_err("Address Error (STORE): cause %#x, PC: %p, BadVaddr: %#lx\n",
 			cause, opc, badvaddr);
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -340,18 +335,17 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	int ret = RESUME_GUEST;
 
 	if (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1) {
-		ret = kvm_mips_bad_load(cause, opc, run, vcpu);
+		ret = kvm_mips_bad_load(cause, opc, vcpu);
 	} else {
 		kvm_err("Address Error (LOAD): cause %#x, PC: %p, BadVaddr: %#lx\n",
 			cause, opc, badvaddr);
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -359,17 +353,16 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_emulate_syscall(cause, opc, run, vcpu);
+	er = kvm_mips_emulate_syscall(cause, opc, vcpu);
 	if (er == EMULATE_DONE)
 		ret = RESUME_GUEST;
 	else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -377,17 +370,16 @@ static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_handle_ri(cause, opc, run, vcpu);
+	er = kvm_mips_handle_ri(cause, opc, vcpu);
 	if (er == EMULATE_DONE)
 		ret = RESUME_GUEST;
 	else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -395,17 +387,16 @@ static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_emulate_bp_exc(cause, opc, run, vcpu);
+	er = kvm_mips_emulate_bp_exc(cause, opc, vcpu);
 	if (er == EMULATE_DONE)
 		ret = RESUME_GUEST;
 	else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -413,17 +404,16 @@ static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_trap(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *)vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_emulate_trap_exc(cause, opc, run, vcpu);
+	er = kvm_mips_emulate_trap_exc(cause, opc, vcpu);
 	if (er == EMULATE_DONE) {
 		ret = RESUME_GUEST;
 	} else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -431,17 +421,16 @@ static int kvm_trap_emul_handle_trap(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_msa_fpe(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *)vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_emulate_msafpe_exc(cause, opc, run, vcpu);
+	er = kvm_mips_emulate_msafpe_exc(cause, opc, vcpu);
 	if (er == EMULATE_DONE) {
 		ret = RESUME_GUEST;
 	} else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -449,17 +438,16 @@ static int kvm_trap_emul_handle_msa_fpe(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_fpe(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *)vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
-	er = kvm_mips_emulate_fpe_exc(cause, opc, run, vcpu);
+	er = kvm_mips_emulate_fpe_exc(cause, opc, vcpu);
 	if (er == EMULATE_DONE) {
 		ret = RESUME_GUEST;
 	} else {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
 	return ret;
@@ -474,7 +462,6 @@ static int kvm_trap_emul_handle_fpe(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_run *run = vcpu->run;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -486,10 +473,10 @@ static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
 		 * No MSA in guest, or FPU enabled and not in FR=1 mode,
 		 * guest reserved instruction exception
 		 */
-		er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
+		er = kvm_mips_emulate_ri_exc(cause, opc, vcpu);
 	} else if (!(kvm_read_c0_guest_config5(cop0) & MIPS_CONF5_MSAEN)) {
 		/* MSA disabled by guest, guest MSA disabled exception */
-		er = kvm_mips_emulate_msadis_exc(cause, opc, run, vcpu);
+		er = kvm_mips_emulate_msadis_exc(cause, opc, vcpu);
 	} else {
 		/* Restore MSA/FPU state */
 		kvm_own_msa(vcpu);
@@ -502,7 +489,7 @@ static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
 		break;
 
 	case EMULATE_FAIL:
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 		break;
 
@@ -1181,8 +1168,7 @@ void kvm_trap_emul_gva_lockless_end(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 }
 
-static void kvm_trap_emul_vcpu_reenter(struct kvm_run *run,
-				       struct kvm_vcpu *vcpu)
+static void kvm_trap_emul_vcpu_reenter(struct kvm_vcpu *vcpu)
 {
 	struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
 	struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
@@ -1225,7 +1211,7 @@ static void kvm_trap_emul_vcpu_reenter(struct kvm_run *run,
 	check_mmu_context(mm);
 }
 
-static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvm_trap_emul_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	int cpu = smp_processor_id();
 	int r;
@@ -1234,7 +1220,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	kvm_mips_deliver_interrupts(vcpu,
 				    kvm_read_c0_guest_cause(vcpu->arch.cop0));
 
-	kvm_trap_emul_vcpu_reenter(run, vcpu);
+	kvm_trap_emul_vcpu_reenter(vcpu);
 
 	/*
 	 * We use user accessors to access guest memory, but we don't want to
@@ -1252,7 +1238,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	 */
 	kvm_mips_suspend_mm(cpu);
 
-	r = vcpu->arch.vcpu_run(run, vcpu);
+	r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
 
 	/* We may have migrated while handling guest exits */
 	cpu = smp_processor_id();
diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
index dde20887a70d..94f1d23828e3 100644
--- a/arch/mips/kvm/vz.c
+++ b/arch/mips/kvm/vz.c
@@ -899,7 +899,6 @@ static void kvm_write_maari(struct kvm_vcpu *vcpu, unsigned long val)
 
 static enum emulation_result kvm_vz_gpsi_cop0(union mips_instruction inst,
 					      u32 *opc, u32 cause,
-					      struct kvm_run *run,
 					      struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
@@ -1062,7 +1061,6 @@ static enum emulation_result kvm_vz_gpsi_cop0(union mips_instruction inst,
 
 static enum emulation_result kvm_vz_gpsi_cache(union mips_instruction inst,
 					       u32 *opc, u32 cause,
-					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -1134,7 +1132,6 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
 {
 	enum emulation_result er = EMULATE_DONE;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
-	struct kvm_run *run = vcpu->run;
 	union mips_instruction inst;
 	int rd, rt, sel;
 	int err;
@@ -1150,12 +1147,12 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
 
 	switch (inst.r_format.opcode) {
 	case cop0_op:
-		er = kvm_vz_gpsi_cop0(inst, opc, cause, run, vcpu);
+		er = kvm_vz_gpsi_cop0(inst, opc, cause, vcpu);
 		break;
 #ifndef CONFIG_CPU_MIPSR6
 	case cache_op:
 		trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
-		er = kvm_vz_gpsi_cache(inst, opc, cause, run, vcpu);
+		er = kvm_vz_gpsi_cache(inst, opc, cause, vcpu);
 		break;
 #endif
 	case spec3_op:
@@ -1163,7 +1160,7 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
 #ifdef CONFIG_CPU_MIPSR6
 		case cache6_op:
 			trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
-			er = kvm_vz_gpsi_cache(inst, opc, cause, run, vcpu);
+			er = kvm_vz_gpsi_cache(inst, opc, cause, vcpu);
 			break;
 #endif
 		case rdhwr_op:
@@ -1465,7 +1462,6 @@ static int kvm_trap_vz_handle_guest_exit(struct kvm_vcpu *vcpu)
  */
 static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_FAIL;
 	int ret = RESUME_GUEST;
@@ -1493,7 +1489,7 @@ static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
 		break;
 
 	case EMULATE_FAIL:
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 		break;
 
@@ -1512,8 +1508,6 @@ static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
  */
 static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
 {
-	struct kvm_run *run = vcpu->run;
-
 	/*
 	 * If MSA not present or not exposed to guest or FR=0, the MSA operation
 	 * should have been treated as a reserved instruction!
@@ -1524,7 +1518,7 @@ static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
 	    (read_gc0_status() & (ST0_CU1 | ST0_FR)) == ST0_CU1 ||
 	    !(read_gc0_config5() & MIPS_CONF5_MSAEN) ||
 	    vcpu->arch.aux_inuse & KVM_MIPS_AUX_MSA) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return RESUME_HOST;
 	}
 
@@ -1560,7 +1554,7 @@ static int kvm_trap_vz_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 		}
 
 		/* Treat as MMIO */
-		er = kvm_mips_emulate_load(inst, cause, run, vcpu);
+		er = kvm_mips_emulate_load(inst, cause, vcpu);
 		if (er == EMULATE_FAIL) {
 			kvm_err("Guest Emulate Load from MMIO space failed: PC: %p, BadVaddr: %#lx\n",
 				opc, badvaddr);
@@ -1607,7 +1601,7 @@ static int kvm_trap_vz_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 		}
 
 		/* Treat as MMIO */
-		er = kvm_mips_emulate_store(inst, cause, run, vcpu);
+		er = kvm_mips_emulate_store(inst, cause, vcpu);
 		if (er == EMULATE_FAIL) {
 			kvm_err("Guest Emulate Store to MMIO space failed: PC: %p, BadVaddr: %#lx\n",
 				opc, badvaddr);
@@ -3129,7 +3123,7 @@ static void kvm_vz_flush_shadow_memslot(struct kvm *kvm,
 	kvm_vz_flush_shadow_all(kvm);
 }
 
-static void kvm_vz_vcpu_reenter(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static void kvm_vz_vcpu_reenter(struct kvm_vcpu *vcpu)
 {
 	int cpu = smp_processor_id();
 	int preserve_guest_tlb;
@@ -3145,7 +3139,7 @@ static void kvm_vz_vcpu_reenter(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		kvm_vz_vcpu_load_wired(vcpu);
 }
 
-static int kvm_vz_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
+static int kvm_vz_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	int cpu = smp_processor_id();
 	int r;
@@ -3158,7 +3152,7 @@ static int kvm_vz_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	kvm_vz_vcpu_load_tlb(vcpu, cpu);
 	kvm_vz_vcpu_load_wired(vcpu);
 
-	r = vcpu->arch.vcpu_run(run, vcpu);
+	r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
 
 	kvm_vz_vcpu_save_wired(vcpu);
 
-- 
2.17.1


* [PATCH v4 7/7] KVM: MIPS: clean up redundant kvm_run parameters in assembly
  2020-04-27  4:35 [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (5 preceding siblings ...)
  2020-04-27  4:35 ` [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters Tianjia Zhang
@ 2020-04-27  4:35 ` Tianjia Zhang
  2020-04-27  5:36   ` Huacai Chen
  2020-05-05  4:15 ` [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
  2020-06-23  9:42 ` Paolo Bonzini
  8 siblings, 1 reply; 29+ messages in thread
From: Tianjia Zhang @ 2020-04-27  4:35 UTC (permalink / raw)
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang

In the current kvm version, 'kvm_run' is already included in the 'kvm_vcpu'
structure. For historical reasons, many kvm-related functions still take
both 'kvm_run' and 'kvm_vcpu' as parameters. This patch does a unified
cleanup of these remaining redundant parameters.
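
For illustration, a minimal before/after sketch of the calling
convention this cleanup converges on (distilled from the hunks below,
not itself part of the patch):

	/* Before: callers passed both pointers, although run == vcpu->run */
	r = vcpu->arch.vcpu_run(vcpu->run, vcpu);

	/* After: only the vcpu is passed ... */
	r = vcpu->arch.vcpu_run(vcpu);

	/* ... and a callee that needs the run structure derives it locally */
	int kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;
		/* ... */
	}

In the generated entry code this means the vcpu pointer now arrives in
A0 (previously A1, with A0 carrying run), so a single callee-saved
register (s0) is enough to hold it across guest exits.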

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/mips/include/asm/kvm_host.h |  4 ++--
 arch/mips/kvm/entry.c            | 21 ++++++++-------------
 arch/mips/kvm/mips.c             |  3 ++-
 arch/mips/kvm/trap_emul.c        |  2 +-
 arch/mips/kvm/vz.c               |  2 +-
 5 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 971439297cea..db915c55166d 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -310,7 +310,7 @@ struct kvm_mmu_memory_cache {
 #define KVM_MIPS_GUEST_TLB_SIZE	64
 struct kvm_vcpu_arch {
 	void *guest_ebase;
-	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	int (*vcpu_run)(struct kvm_vcpu *vcpu);
 
 	/* Host registers preserved across guest mode execution */
 	unsigned long host_stack;
@@ -821,7 +821,7 @@ int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
 /* Debug: dump vcpu state */
 int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
 
-extern int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern int kvm_mips_handle_exit(struct kvm_vcpu *vcpu);
 
 /* Building of entry/exception code */
 int kvm_mips_entry_setup(void);
diff --git a/arch/mips/kvm/entry.c b/arch/mips/kvm/entry.c
index 16e1c93b484f..1083f35361ea 100644
--- a/arch/mips/kvm/entry.c
+++ b/arch/mips/kvm/entry.c
@@ -204,7 +204,7 @@ static inline void build_set_exc_base(u32 **p, unsigned int reg)
  * Assemble the start of the vcpu_run function to run a guest VCPU. The function
  * conforms to the following prototype:
  *
- * int vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
+ * int vcpu_run(struct kvm_vcpu *vcpu);
  *
  * The exit from the guest and return to the caller is handled by the code
  * generated by kvm_mips_build_ret_to_host().
@@ -217,8 +217,7 @@ void *kvm_mips_build_vcpu_run(void *addr)
 	unsigned int i;
 
 	/*
-	 * A0: run
-	 * A1: vcpu
+	 * A0: vcpu
 	 */
 
 	/* k0/k1 not being used in host kernel context */
@@ -237,10 +236,10 @@ void *kvm_mips_build_vcpu_run(void *addr)
 	kvm_mips_build_save_scratch(&p, V1, K1);
 
 	/* VCPU scratch register has pointer to vcpu */
-	UASM_i_MTC0(&p, A1, scratch_vcpu[0], scratch_vcpu[1]);
+	UASM_i_MTC0(&p, A0, scratch_vcpu[0], scratch_vcpu[1]);
 
 	/* Offset into vcpu->arch */
-	UASM_i_ADDIU(&p, K1, A1, offsetof(struct kvm_vcpu, arch));
+	UASM_i_ADDIU(&p, K1, A0, offsetof(struct kvm_vcpu, arch));
 
 	/*
 	 * Save the host stack to VCPU, used for exception processing
@@ -628,10 +627,7 @@ void *kvm_mips_build_exit(void *addr)
 	/* Now that context has been saved, we can use other registers */
 
 	/* Restore vcpu */
-	UASM_i_MFC0(&p, S1, scratch_vcpu[0], scratch_vcpu[1]);
-
-	/* Restore run (vcpu->run) */
-	UASM_i_LW(&p, S0, offsetof(struct kvm_vcpu, run), S1);
+	UASM_i_MFC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
 
 	/*
 	 * Save Host level EPC, BadVaddr and Cause to VCPU, useful to process
@@ -793,7 +789,6 @@ void *kvm_mips_build_exit(void *addr)
 	 * with this in the kernel
 	 */
 	uasm_i_move(&p, A0, S0);
-	uasm_i_move(&p, A1, S1);
 	UASM_i_LA(&p, T9, (unsigned long)kvm_mips_handle_exit);
 	uasm_i_jalr(&p, RA, T9);
 	 UASM_i_ADDIU(&p, SP, SP, -CALLFRAME_SIZ);
@@ -835,7 +830,7 @@ static void *kvm_mips_build_ret_from_exit(void *addr)
 	 * guest, reload k1
 	 */
 
-	uasm_i_move(&p, K1, S1);
+	uasm_i_move(&p, K1, S0);
 	UASM_i_ADDIU(&p, K1, K1, offsetof(struct kvm_vcpu, arch));
 
 	/*
@@ -869,8 +864,8 @@ static void *kvm_mips_build_ret_to_guest(void *addr)
 {
 	u32 *p = addr;
 
-	/* Put the saved pointer to vcpu (s1) back into the scratch register */
-	UASM_i_MTC0(&p, S1, scratch_vcpu[0], scratch_vcpu[1]);
+	/* Put the saved pointer to vcpu (s0) back into the scratch register */
+	UASM_i_MTC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
 
 	/* Load up the Guest EBASE to minimize the window where BEV is set */
 	UASM_i_LW(&p, T0, offsetof(struct kvm_vcpu_arch, guest_ebase), K1);
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 9710477a9827..32850470c037 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1186,8 +1186,9 @@ static void kvm_mips_set_c0_status(void)
 /*
  * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
  */
-int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+int kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
 {
+	struct kvm_run *run = vcpu->run;
 	u32 cause = vcpu->arch.host_cp0_cause;
 	u32 exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
 	u32 __user *opc = (u32 __user *) vcpu->arch.pc;
diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
index d822f3aee3dc..04c864cc356a 100644
--- a/arch/mips/kvm/trap_emul.c
+++ b/arch/mips/kvm/trap_emul.c
@@ -1238,7 +1238,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_vcpu *vcpu)
 	 */
 	kvm_mips_suspend_mm(cpu);
 
-	r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
+	r = vcpu->arch.vcpu_run(vcpu);
 
 	/* We may have migrated while handling guest exits */
 	cpu = smp_processor_id();
diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
index 94f1d23828e3..c5878fa0636d 100644
--- a/arch/mips/kvm/vz.c
+++ b/arch/mips/kvm/vz.c
@@ -3152,7 +3152,7 @@ static int kvm_vz_vcpu_run(struct kvm_vcpu *vcpu)
 	kvm_vz_vcpu_load_tlb(vcpu, cpu);
 	kvm_vz_vcpu_load_wired(vcpu);
 
-	r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
+	r = vcpu->arch.vcpu_run(vcpu);
 
 	kvm_vz_vcpu_save_wired(vcpu);
 
-- 
2.17.1


* Re: [PATCH v4 7/7] KVM: MIPS: clean up redundant kvm_run parameters in assembly
  2020-04-27  4:35 ` [PATCH v4 7/7] KVM: MIPS: clean up redundant kvm_run parameters in assembly Tianjia Zhang
@ 2020-04-27  5:36   ` Huacai Chen
  0 siblings, 0 replies; 29+ messages in thread
From: Huacai Chen @ 2020-04-27  5:36 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Paolo Bonzini, Thomas Bogendoerfer, paulus, mpe,
	Benjamin Herrenschmidt, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, vkuznets, wanpengli,
	jmattson, joro, Thomas Gleixner, mingo, Borislav Petkov, x86,
	hpa, Marc Zyngier, james.morse, julien.thierry.kdev,
	suzuki.poulose, christoffer.dall, Peter Xu, thuth, kvm,
	linux-arm-kernel, kvmarm, open list:MIPS, kvm-ppc, linuxppc-dev,
	linux-s390, LKML

Reviewed-by: Huacai Chen <chenhc@lemote.com>

On Mon, Apr 27, 2020 at 12:35 PM Tianjia Zhang
<tianjia.zhang@linux.alibaba.com> wrote:
>
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/mips/include/asm/kvm_host.h |  4 ++--
>  arch/mips/kvm/entry.c            | 21 ++++++++-------------
>  arch/mips/kvm/mips.c             |  3 ++-
>  arch/mips/kvm/trap_emul.c        |  2 +-
>  arch/mips/kvm/vz.c               |  2 +-
>  5 files changed, 14 insertions(+), 18 deletions(-)
>
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 971439297cea..db915c55166d 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -310,7 +310,7 @@ struct kvm_mmu_memory_cache {
>  #define KVM_MIPS_GUEST_TLB_SIZE        64
>  struct kvm_vcpu_arch {
>         void *guest_ebase;
> -       int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +       int (*vcpu_run)(struct kvm_vcpu *vcpu);
>
>         /* Host registers preserved across guest mode execution */
>         unsigned long host_stack;
> @@ -821,7 +821,7 @@ int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
>  /* Debug: dump vcpu state */
>  int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
>
> -extern int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +extern int kvm_mips_handle_exit(struct kvm_vcpu *vcpu);
>
>  /* Building of entry/exception code */
>  int kvm_mips_entry_setup(void);
> diff --git a/arch/mips/kvm/entry.c b/arch/mips/kvm/entry.c
> index 16e1c93b484f..1083f35361ea 100644
> --- a/arch/mips/kvm/entry.c
> +++ b/arch/mips/kvm/entry.c
> @@ -204,7 +204,7 @@ static inline void build_set_exc_base(u32 **p, unsigned int reg)
>   * Assemble the start of the vcpu_run function to run a guest VCPU. The function
>   * conforms to the following prototype:
>   *
> - * int vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
> + * int vcpu_run(struct kvm_vcpu *vcpu);
>   *
>   * The exit from the guest and return to the caller is handled by the code
>   * generated by kvm_mips_build_ret_to_host().
> @@ -217,8 +217,7 @@ void *kvm_mips_build_vcpu_run(void *addr)
>         unsigned int i;
>
>         /*
> -        * A0: run
> -        * A1: vcpu
> +        * A0: vcpu
>          */
>
>         /* k0/k1 not being used in host kernel context */
> @@ -237,10 +236,10 @@ void *kvm_mips_build_vcpu_run(void *addr)
>         kvm_mips_build_save_scratch(&p, V1, K1);
>
>         /* VCPU scratch register has pointer to vcpu */
> -       UASM_i_MTC0(&p, A1, scratch_vcpu[0], scratch_vcpu[1]);
> +       UASM_i_MTC0(&p, A0, scratch_vcpu[0], scratch_vcpu[1]);
>
>         /* Offset into vcpu->arch */
> -       UASM_i_ADDIU(&p, K1, A1, offsetof(struct kvm_vcpu, arch));
> +       UASM_i_ADDIU(&p, K1, A0, offsetof(struct kvm_vcpu, arch));
>
>         /*
>          * Save the host stack to VCPU, used for exception processing
> @@ -628,10 +627,7 @@ void *kvm_mips_build_exit(void *addr)
>         /* Now that context has been saved, we can use other registers */
>
>         /* Restore vcpu */
> -       UASM_i_MFC0(&p, S1, scratch_vcpu[0], scratch_vcpu[1]);
> -
> -       /* Restore run (vcpu->run) */
> -       UASM_i_LW(&p, S0, offsetof(struct kvm_vcpu, run), S1);
> +       UASM_i_MFC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
>
>         /*
>          * Save Host level EPC, BadVaddr and Cause to VCPU, useful to process
> @@ -793,7 +789,6 @@ void *kvm_mips_build_exit(void *addr)
>          * with this in the kernel
>          */
>         uasm_i_move(&p, A0, S0);
> -       uasm_i_move(&p, A1, S1);
>         UASM_i_LA(&p, T9, (unsigned long)kvm_mips_handle_exit);
>         uasm_i_jalr(&p, RA, T9);
>          UASM_i_ADDIU(&p, SP, SP, -CALLFRAME_SIZ);
> @@ -835,7 +830,7 @@ static void *kvm_mips_build_ret_from_exit(void *addr)
>          * guest, reload k1
>          */
>
> -       uasm_i_move(&p, K1, S1);
> +       uasm_i_move(&p, K1, S0);
>         UASM_i_ADDIU(&p, K1, K1, offsetof(struct kvm_vcpu, arch));
>
>         /*
> @@ -869,8 +864,8 @@ static void *kvm_mips_build_ret_to_guest(void *addr)
>  {
>         u32 *p = addr;
>
> -       /* Put the saved pointer to vcpu (s1) back into the scratch register */
> -       UASM_i_MTC0(&p, S1, scratch_vcpu[0], scratch_vcpu[1]);
> +       /* Put the saved pointer to vcpu (s0) back into the scratch register */
> +       UASM_i_MTC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
>
>         /* Load up the Guest EBASE to minimize the window where BEV is set */
>         UASM_i_LW(&p, T0, offsetof(struct kvm_vcpu_arch, guest_ebase), K1);
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 9710477a9827..32850470c037 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1186,8 +1186,9 @@ static void kvm_mips_set_c0_status(void)
>  /*
>   * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
>   */
> -int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +int kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
>  {
> +       struct kvm_run *run = vcpu->run;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         u32 exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
> diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
> index d822f3aee3dc..04c864cc356a 100644
> --- a/arch/mips/kvm/trap_emul.c
> +++ b/arch/mips/kvm/trap_emul.c
> @@ -1238,7 +1238,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_vcpu *vcpu)
>          */
>         kvm_mips_suspend_mm(cpu);
>
> -       r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
> +       r = vcpu->arch.vcpu_run(vcpu);
>
>         /* We may have migrated while handling guest exits */
>         cpu = smp_processor_id();
> diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
> index 94f1d23828e3..c5878fa0636d 100644
> --- a/arch/mips/kvm/vz.c
> +++ b/arch/mips/kvm/vz.c
> @@ -3152,7 +3152,7 @@ static int kvm_vz_vcpu_run(struct kvm_vcpu *vcpu)
>         kvm_vz_vcpu_load_tlb(vcpu, cpu);
>         kvm_vz_vcpu_load_wired(vcpu);
>
> -       r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
> +       r = vcpu->arch.vcpu_run(vcpu);
>
>         kvm_vz_vcpu_save_wired(vcpu);
>
> --
> 2.17.1
>

* Re: [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters Tianjia Zhang
@ 2020-04-27  5:40   ` Huacai Chen
  2020-05-27  6:24     ` Tianjia Zhang
  0 siblings, 1 reply; 29+ messages in thread
From: Huacai Chen @ 2020-04-27  5:40 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Paolo Bonzini, Thomas Bogendoerfer, paulus, mpe,
	Benjamin Herrenschmidt, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, vkuznets, wanpengli,
	jmattson, joro, Thomas Gleixner, mingo, Borislav Petkov, x86,
	hpa, Marc Zyngier, james.morse, julien.thierry.kdev,
	suzuki.poulose, christoffer.dall, Peter Xu, thuth, kvm,
	linux-arm-kernel, kvmarm, open list:MIPS, kvm-ppc, linuxppc-dev,
	linux-s390, LKML

Reviewed-by: Huacai Chen <chenhc@lemote.com>

On Mon, Apr 27, 2020 at 12:35 PM Tianjia Zhang
<tianjia.zhang@linux.alibaba.com> wrote:
>
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/mips/include/asm/kvm_host.h |  28 +-------
>  arch/mips/kvm/emulate.c          |  59 ++++++----------
>  arch/mips/kvm/mips.c             |  11 ++-
>  arch/mips/kvm/trap_emul.c        | 114 ++++++++++++++-----------------
>  arch/mips/kvm/vz.c               |  26 +++----
>  5 files changed, 87 insertions(+), 151 deletions(-)
>
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 2c343c346b79..971439297cea 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -812,8 +812,8 @@ struct kvm_mips_callbacks {
>                            const struct kvm_one_reg *reg, s64 v);
>         int (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
>         int (*vcpu_put)(struct kvm_vcpu *vcpu, int cpu);
> -       int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> -       void (*vcpu_reenter)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +       int (*vcpu_run)(struct kvm_vcpu *vcpu);
> +       void (*vcpu_reenter)(struct kvm_vcpu *vcpu);
>  };
>  extern struct kvm_mips_callbacks *kvm_mips_callbacks;
>  int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
> @@ -868,7 +868,6 @@ extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
>
>  extern enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
>                                                      u32 *opc,
> -                                                    struct kvm_run *run,
>                                                      struct kvm_vcpu *vcpu,
>                                                      bool write_fault);
>
> @@ -975,83 +974,67 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *vcpu)
>
>  extern enum emulation_result kvm_mips_emulate_inst(u32 cause,
>                                                    u32 *opc,
> -                                                  struct kvm_run *run,
>                                                    struct kvm_vcpu *vcpu);
>
>  long kvm_mips_guest_exception_base(struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_syscall(u32 cause,
>                                                       u32 *opc,
> -                                                     struct kvm_run *run,
>                                                       struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
>                                                          u32 *opc,
> -                                                        struct kvm_run *run,
>                                                          struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
>                                                         u32 *opc,
> -                                                       struct kvm_run *run,
>                                                         struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
>                                                          u32 *opc,
> -                                                        struct kvm_run *run,
>                                                          struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
>                                                         u32 *opc,
> -                                                       struct kvm_run *run,
>                                                         struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
>                                                      u32 *opc,
> -                                                    struct kvm_run *run,
>                                                      struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
>                                                       u32 *opc,
> -                                                     struct kvm_run *run,
>                                                       struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_handle_ri(u32 cause,
>                                                 u32 *opc,
> -                                               struct kvm_run *run,
>                                                 struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
>                                                      u32 *opc,
> -                                                    struct kvm_run *run,
>                                                      struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
>                                                      u32 *opc,
> -                                                    struct kvm_run *run,
>                                                      struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
>                                                        u32 *opc,
> -                                                      struct kvm_run *run,
>                                                        struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
>                                                          u32 *opc,
> -                                                        struct kvm_run *run,
>                                                          struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
>                                                       u32 *opc,
> -                                                     struct kvm_run *run,
>                                                       struct kvm_vcpu *vcpu);
>
>  extern enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
>                                                          u32 *opc,
> -                                                        struct kvm_run *run,
>                                                          struct kvm_vcpu *vcpu);
>
> -extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
> -                                                        struct kvm_run *run);
> +extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu);
>
>  u32 kvm_mips_read_count(struct kvm_vcpu *vcpu);
>  void kvm_mips_write_count(struct kvm_vcpu *vcpu, u32 count);
> @@ -1080,26 +1063,21 @@ static inline void kvm_vz_lose_htimer(struct kvm_vcpu *vcpu) {}
>
>  enum emulation_result kvm_mips_check_privilege(u32 cause,
>                                                u32 *opc,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu);
>
>  enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
>                                              u32 *opc,
>                                              u32 cause,
> -                                            struct kvm_run *run,
>                                              struct kvm_vcpu *vcpu);
>  enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
>                                            u32 *opc,
>                                            u32 cause,
> -                                          struct kvm_run *run,
>                                            struct kvm_vcpu *vcpu);
>  enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
>                                              u32 cause,
> -                                            struct kvm_run *run,
>                                              struct kvm_vcpu *vcpu);
>  enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
>                                             u32 cause,
> -                                           struct kvm_run *run,
>                                             struct kvm_vcpu *vcpu);
>
>  /* COP0 */
> diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
> index 754094b40a75..36718b8dce21 100644
> --- a/arch/mips/kvm/emulate.c
> +++ b/arch/mips/kvm/emulate.c
> @@ -1262,7 +1262,6 @@ unsigned int kvm_mips_config5_wrmask(struct kvm_vcpu *vcpu)
>
>  enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
>                                            u32 *opc, u32 cause,
> -                                          struct kvm_run *run,
>                                            struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -1597,11 +1596,11 @@ enum emulation_result kvm_mips_emulate_CP0(union mips_instruction inst,
>
>  enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
>                                              u32 cause,
> -                                            struct kvm_run *run,
>                                              struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er;
>         u32 rt;
> +       struct kvm_run *run = vcpu->run;
>         void *data = run->mmio.data;
>         unsigned long curr_pc;
>
> @@ -1678,9 +1677,9 @@ enum emulation_result kvm_mips_emulate_store(union mips_instruction inst,
>  }
>
>  enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
> -                                           u32 cause, struct kvm_run *run,
> -                                           struct kvm_vcpu *vcpu)
> +                                           u32 cause, struct kvm_vcpu *vcpu)
>  {
> +       struct kvm_run *run = vcpu->run;
>         enum emulation_result er;
>         unsigned long curr_pc;
>         u32 op, rt;
> @@ -1752,7 +1751,6 @@ enum emulation_result kvm_mips_emulate_load(union mips_instruction inst,
>  static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
>                                                      unsigned long curr_pc,
>                                                      unsigned long addr,
> -                                                    struct kvm_run *run,
>                                                      struct kvm_vcpu *vcpu,
>                                                      u32 cause)
>  {
> @@ -1780,13 +1778,13 @@ static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
>                         /* no matching guest TLB */
>                         vcpu->arch.host_cp0_badvaddr = addr;
>                         vcpu->arch.pc = curr_pc;
> -                       kvm_mips_emulate_tlbmiss_ld(cause, NULL, run, vcpu);
> +                       kvm_mips_emulate_tlbmiss_ld(cause, NULL, vcpu);
>                         return EMULATE_EXCEPT;
>                 case KVM_MIPS_TLBINV:
>                         /* invalid matching guest TLB */
>                         vcpu->arch.host_cp0_badvaddr = addr;
>                         vcpu->arch.pc = curr_pc;
> -                       kvm_mips_emulate_tlbinv_ld(cause, NULL, run, vcpu);
> +                       kvm_mips_emulate_tlbinv_ld(cause, NULL, vcpu);
>                         return EMULATE_EXCEPT;
>                 default:
>                         break;
> @@ -1796,7 +1794,6 @@ static enum emulation_result kvm_mips_guest_cache_op(int (*fn)(unsigned long),
>
>  enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
>                                              u32 *opc, u32 cause,
> -                                            struct kvm_run *run,
>                                              struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er = EMULATE_DONE;
> @@ -1886,7 +1883,7 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
>                  * guest's behalf.
>                  */
>                 er = kvm_mips_guest_cache_op(protected_writeback_dcache_line,
> -                                            curr_pc, va, run, vcpu, cause);
> +                                            curr_pc, va, vcpu, cause);
>                 if (er != EMULATE_DONE)
>                         goto done;
>  #ifdef CONFIG_KVM_MIPS_DYN_TRANS
> @@ -1899,11 +1896,11 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
>         } else if (op_inst == Hit_Invalidate_I) {
>                 /* Perform the icache synchronisation on the guest's behalf */
>                 er = kvm_mips_guest_cache_op(protected_writeback_dcache_line,
> -                                            curr_pc, va, run, vcpu, cause);
> +                                            curr_pc, va, vcpu, cause);
>                 if (er != EMULATE_DONE)
>                         goto done;
>                 er = kvm_mips_guest_cache_op(protected_flush_icache_line,
> -                                            curr_pc, va, run, vcpu, cause);
> +                                            curr_pc, va, vcpu, cause);
>                 if (er != EMULATE_DONE)
>                         goto done;
>
> @@ -1929,7 +1926,6 @@ enum emulation_result kvm_mips_emulate_cache(union mips_instruction inst,
>  }
>
>  enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
> -                                           struct kvm_run *run,
>                                             struct kvm_vcpu *vcpu)
>  {
>         union mips_instruction inst;
> @@ -1945,14 +1941,14 @@ enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
>
>         switch (inst.r_format.opcode) {
>         case cop0_op:
> -               er = kvm_mips_emulate_CP0(inst, opc, cause, run, vcpu);
> +               er = kvm_mips_emulate_CP0(inst, opc, cause, vcpu);
>                 break;
>
>  #ifndef CONFIG_CPU_MIPSR6
>         case cache_op:
>                 ++vcpu->stat.cache_exits;
>                 trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
> -               er = kvm_mips_emulate_cache(inst, opc, cause, run, vcpu);
> +               er = kvm_mips_emulate_cache(inst, opc, cause, vcpu);
>                 break;
>  #else
>         case spec3_op:
> @@ -1960,7 +1956,7 @@ enum emulation_result kvm_mips_emulate_inst(u32 cause, u32 *opc,
>                 case cache6_op:
>                         ++vcpu->stat.cache_exits;
>                         trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
> -                       er = kvm_mips_emulate_cache(inst, opc, cause, run,
> +                       er = kvm_mips_emulate_cache(inst, opc, cause,
>                                                     vcpu);
>                         break;
>                 default:
> @@ -2000,7 +1996,6 @@ long kvm_mips_guest_exception_base(struct kvm_vcpu *vcpu)
>
>  enum emulation_result kvm_mips_emulate_syscall(u32 cause,
>                                                u32 *opc,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2035,7 +2030,6 @@ enum emulation_result kvm_mips_emulate_syscall(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
>                                                   u32 *opc,
> -                                                 struct kvm_run *run,
>                                                   struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2079,7 +2073,6 @@ enum emulation_result kvm_mips_emulate_tlbmiss_ld(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
>                                                  u32 *opc,
> -                                                struct kvm_run *run,
>                                                  struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2121,7 +2114,6 @@ enum emulation_result kvm_mips_emulate_tlbinv_ld(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
>                                                   u32 *opc,
> -                                                 struct kvm_run *run,
>                                                   struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2163,7 +2155,6 @@ enum emulation_result kvm_mips_emulate_tlbmiss_st(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
>                                                  u32 *opc,
> -                                                struct kvm_run *run,
>                                                  struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2204,7 +2195,6 @@ enum emulation_result kvm_mips_emulate_tlbinv_st(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
>                                               u32 *opc,
> -                                             struct kvm_run *run,
>                                               struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2244,7 +2234,6 @@ enum emulation_result kvm_mips_emulate_tlbmod(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
>                                                u32 *opc,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2273,7 +2262,6 @@ enum emulation_result kvm_mips_emulate_fpu_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
>                                               u32 *opc,
> -                                             struct kvm_run *run,
>                                               struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2308,7 +2296,6 @@ enum emulation_result kvm_mips_emulate_ri_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
>                                               u32 *opc,
> -                                             struct kvm_run *run,
>                                               struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2343,7 +2330,6 @@ enum emulation_result kvm_mips_emulate_bp_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
>                                                 u32 *opc,
> -                                               struct kvm_run *run,
>                                                 struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2378,7 +2364,6 @@ enum emulation_result kvm_mips_emulate_trap_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
>                                                   u32 *opc,
> -                                                 struct kvm_run *run,
>                                                   struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2413,7 +2398,6 @@ enum emulation_result kvm_mips_emulate_msafpe_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
>                                                u32 *opc,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2448,7 +2432,6 @@ enum emulation_result kvm_mips_emulate_fpe_exc(u32 cause,
>
>  enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
>                                                   u32 *opc,
> -                                                 struct kvm_run *run,
>                                                   struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2482,7 +2465,6 @@ enum emulation_result kvm_mips_emulate_msadis_exc(u32 cause,
>  }
>
>  enum emulation_result kvm_mips_handle_ri(u32 cause, u32 *opc,
> -                                        struct kvm_run *run,
>                                          struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -2571,12 +2553,12 @@ enum emulation_result kvm_mips_handle_ri(u32 cause, u32 *opc,
>          * branch target), and pass the RI exception to the guest OS.
>          */
>         vcpu->arch.pc = curr_pc;
> -       return kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
> +       return kvm_mips_emulate_ri_exc(cause, opc, vcpu);
>  }
>
> -enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
> -                                                 struct kvm_run *run)
> +enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu)
>  {
> +       struct kvm_run *run = vcpu->run;
>         unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
>         enum emulation_result er = EMULATE_DONE;
>
> @@ -2622,7 +2604,6 @@ enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
>
>  static enum emulation_result kvm_mips_emulate_exc(u32 cause,
>                                                   u32 *opc,
> -                                                 struct kvm_run *run,
>                                                   struct kvm_vcpu *vcpu)
>  {
>         u32 exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
> @@ -2660,7 +2641,6 @@ static enum emulation_result kvm_mips_emulate_exc(u32 cause,
>
>  enum emulation_result kvm_mips_check_privilege(u32 cause,
>                                                u32 *opc,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er = EMULATE_DONE;
> @@ -2742,7 +2722,7 @@ enum emulation_result kvm_mips_check_privilege(u32 cause,
>         }
>
>         if (er == EMULATE_PRIV_FAIL)
> -               kvm_mips_emulate_exc(cause, opc, run, vcpu);
> +               kvm_mips_emulate_exc(cause, opc, vcpu);
>
>         return er;
>  }
> @@ -2756,7 +2736,6 @@ enum emulation_result kvm_mips_check_privilege(u32 cause,
>   */
>  enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
>                                               u32 *opc,
> -                                             struct kvm_run *run,
>                                               struct kvm_vcpu *vcpu,
>                                               bool write_fault)
>  {
> @@ -2780,9 +2759,9 @@ enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
>                        KVM_ENTRYHI_ASID));
>         if (index < 0) {
>                 if (exccode == EXCCODE_TLBL) {
> -                       er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
> +                       er = kvm_mips_emulate_tlbmiss_ld(cause, opc, vcpu);
>                 } else if (exccode == EXCCODE_TLBS) {
> -                       er = kvm_mips_emulate_tlbmiss_st(cause, opc, run, vcpu);
> +                       er = kvm_mips_emulate_tlbmiss_st(cause, opc, vcpu);
>                 } else {
>                         kvm_err("%s: invalid exc code: %d\n", __func__,
>                                 exccode);
> @@ -2797,10 +2776,10 @@ enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
>                  */
>                 if (!TLB_IS_VALID(*tlb, va)) {
>                         if (exccode == EXCCODE_TLBL) {
> -                               er = kvm_mips_emulate_tlbinv_ld(cause, opc, run,
> +                               er = kvm_mips_emulate_tlbinv_ld(cause, opc,
>                                                                 vcpu);
>                         } else if (exccode == EXCCODE_TLBS) {
> -                               er = kvm_mips_emulate_tlbinv_st(cause, opc, run,
> +                               er = kvm_mips_emulate_tlbinv_st(cause, opc,
>                                                                 vcpu);
>                         } else {
>                                 kvm_err("%s: invalid exc code: %d\n", __func__,
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index ec24adf4857e..9710477a9827 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -441,7 +441,6 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
>
>  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         int r = -EINTR;
>
>         vcpu_load(vcpu);
> @@ -450,11 +449,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>
>         if (vcpu->mmio_needed) {
>                 if (!vcpu->mmio_is_write)
> -                       kvm_mips_complete_mmio_load(vcpu, run);
> +                       kvm_mips_complete_mmio_load(vcpu);
>                 vcpu->mmio_needed = 0;
>         }
>
> -       if (run->immediate_exit)
> +       if (vcpu->run->immediate_exit)
>                 goto out;
>
>         lose_fpu(1);
> @@ -471,7 +470,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>          */
>         smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>
> -       r = kvm_mips_callbacks->vcpu_run(run, vcpu);
> +       r = kvm_mips_callbacks->vcpu_run(vcpu);
>
>         trace_kvm_out(vcpu);
>         guest_exit_irqoff();
> @@ -1225,7 +1224,7 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>                  * end up causing an exception to be delivered to the Guest
>                  * Kernel
>                  */
> -               er = kvm_mips_check_privilege(cause, opc, run, vcpu);
> +               er = kvm_mips_check_privilege(cause, opc, vcpu);
>                 if (er == EMULATE_PRIV_FAIL) {
>                         goto skip_emul;
>                 } else if (er == EMULATE_FAIL) {
> @@ -1374,7 +1373,7 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>                  */
>                 smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>
> -               kvm_mips_callbacks->vcpu_reenter(run, vcpu);
> +               kvm_mips_callbacks->vcpu_reenter(vcpu);
>
>                 /*
>                  * If FPU / MSA are enabled (i.e. the guest's FPU / MSA context
> diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
> index 5a11e83dffe6..d822f3aee3dc 100644
> --- a/arch/mips/kvm/trap_emul.c
> +++ b/arch/mips/kvm/trap_emul.c
> @@ -67,7 +67,6 @@ static int kvm_trap_emul_no_handler(struct kvm_vcpu *vcpu)
>  static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
> @@ -81,14 +80,14 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
>                          * Unusable/no FPU in guest:
>                          * deliver guest COP1 Unusable Exception
>                          */
> -                       er = kvm_mips_emulate_fpu_exc(cause, opc, run, vcpu);
> +                       er = kvm_mips_emulate_fpu_exc(cause, opc, vcpu);
>                 } else {
>                         /* Restore FPU state */
>                         kvm_own_fpu(vcpu);
>                         er = EMULATE_DONE;
>                 }
>         } else {
> -               er = kvm_mips_emulate_inst(cause, opc, run, vcpu);
> +               er = kvm_mips_emulate_inst(cause, opc, vcpu);
>         }
>
>         switch (er) {
> @@ -97,12 +96,12 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
>                 break;
>
>         case EMULATE_FAIL:
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>                 break;
>
>         case EMULATE_WAIT:
> -               run->exit_reason = KVM_EXIT_INTR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTR;
>                 ret = RESUME_HOST;
>                 break;
>
> @@ -116,8 +115,7 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
>         return ret;
>  }
>
> -static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
> -                            struct kvm_vcpu *vcpu)
> +static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er;
>         union mips_instruction inst;
> @@ -125,7 +123,7 @@ static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
>
>         /* A code fetch fault doesn't count as an MMIO */
>         if (kvm_is_ifetch_fault(&vcpu->arch)) {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 return RESUME_HOST;
>         }
>
> @@ -134,23 +132,22 @@ static int kvm_mips_bad_load(u32 cause, u32 *opc, struct kvm_run *run,
>                 opc += 1;
>         err = kvm_get_badinstr(opc, vcpu, &inst.word);
>         if (err) {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 return RESUME_HOST;
>         }
>
>         /* Emulate the load */
> -       er = kvm_mips_emulate_load(inst, cause, run, vcpu);
> +       er = kvm_mips_emulate_load(inst, cause, vcpu);
>         if (er == EMULATE_FAIL) {
>                 kvm_err("Emulate load from MMIO space failed\n");
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>         } else {
> -               run->exit_reason = KVM_EXIT_MMIO;
> +               vcpu->run->exit_reason = KVM_EXIT_MMIO;
>         }
>         return RESUME_HOST;
>  }
>
> -static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_run *run,
> -                             struct kvm_vcpu *vcpu)
> +static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er;
>         union mips_instruction inst;
> @@ -161,34 +158,33 @@ static int kvm_mips_bad_store(u32 cause, u32 *opc, struct kvm_run *run,
>                 opc += 1;
>         err = kvm_get_badinstr(opc, vcpu, &inst.word);
>         if (err) {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 return RESUME_HOST;
>         }
>
>         /* Emulate the store */
> -       er = kvm_mips_emulate_store(inst, cause, run, vcpu);
> +       er = kvm_mips_emulate_store(inst, cause, vcpu);
>         if (er == EMULATE_FAIL) {
>                 kvm_err("Emulate store to MMIO space failed\n");
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>         } else {
> -               run->exit_reason = KVM_EXIT_MMIO;
> +               vcpu->run->exit_reason = KVM_EXIT_MMIO;
>         }
>         return RESUME_HOST;
>  }
>
> -static int kvm_mips_bad_access(u32 cause, u32 *opc, struct kvm_run *run,
> +static int kvm_mips_bad_access(u32 cause, u32 *opc,
>                                struct kvm_vcpu *vcpu, bool store)
>  {
>         if (store)
> -               return kvm_mips_bad_store(cause, opc, run, vcpu);
> +               return kvm_mips_bad_store(cause, opc, vcpu);
>         else
> -               return kvm_mips_bad_load(cause, opc, run, vcpu);
> +               return kvm_mips_bad_load(cause, opc, vcpu);
>  }
>
>  static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
>         u32 cause = vcpu->arch.host_cp0_cause;
> @@ -212,12 +208,12 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
>                  * They would indicate stale host TLB entries.
>                  */
>                 if (unlikely(index < 0)) {
> -                       run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +                       vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                         return RESUME_HOST;
>                 }
>                 tlb = vcpu->arch.guest_tlb + index;
>                 if (unlikely(!TLB_IS_VALID(*tlb, badvaddr))) {
> -                       run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +                       vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                         return RESUME_HOST;
>                 }
>
> @@ -226,23 +222,23 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
>                  * exception. Relay that on to the guest so it can handle it.
>                  */
>                 if (!TLB_IS_DIRTY(*tlb, badvaddr)) {
> -                       kvm_mips_emulate_tlbmod(cause, opc, run, vcpu);
> +                       kvm_mips_emulate_tlbmod(cause, opc, vcpu);
>                         return RESUME_GUEST;
>                 }
>
>                 if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb, badvaddr,
>                                                          true))
>                         /* Not writable, needs handling as MMIO */
> -                       return kvm_mips_bad_store(cause, opc, run, vcpu);
> +                       return kvm_mips_bad_store(cause, opc, vcpu);
>                 return RESUME_GUEST;
>         } else if (KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG0) {
>                 if (kvm_mips_handle_kseg0_tlb_fault(badvaddr, vcpu, true) < 0)
>                         /* Not writable, needs handling as MMIO */
> -                       return kvm_mips_bad_store(cause, opc, run, vcpu);
> +                       return kvm_mips_bad_store(cause, opc, vcpu);
>                 return RESUME_GUEST;
>         } else {
>                 /* host kernel addresses are all handled as MMIO */
> -               return kvm_mips_bad_store(cause, opc, run, vcpu);
> +               return kvm_mips_bad_store(cause, opc, vcpu);
>         }
>  }
>
> @@ -276,7 +272,7 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
>                  *     into the shadow host TLB
>                  */
>
> -               er = kvm_mips_handle_tlbmiss(cause, opc, run, vcpu, store);
> +               er = kvm_mips_handle_tlbmiss(cause, opc, vcpu, store);
>                 if (er == EMULATE_DONE)
>                         ret = RESUME_GUEST;
>                 else {
> @@ -289,14 +285,14 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
>                  * not expect to ever get them
>                  */
>                 if (kvm_mips_handle_kseg0_tlb_fault(badvaddr, vcpu, store) < 0)
> -                       ret = kvm_mips_bad_access(cause, opc, run, vcpu, store);
> +                       ret = kvm_mips_bad_access(cause, opc, vcpu, store);
>         } else if (KVM_GUEST_KERNEL_MODE(vcpu)
>                    && (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1)) {
>                 /*
>                  * With EVA we may get a TLB exception instead of an address
>                  * error when the guest performs MMIO to KSeg1 addresses.
>                  */
> -               ret = kvm_mips_bad_access(cause, opc, run, vcpu, store);
> +               ret = kvm_mips_bad_access(cause, opc, vcpu, store);
>         } else {
>                 kvm_err("Illegal TLB %s fault address , cause %#x, PC: %p, BadVaddr: %#lx\n",
>                         store ? "ST" : "LD", cause, opc, badvaddr);
> @@ -320,7 +316,6 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
>         u32 cause = vcpu->arch.host_cp0_cause;
> @@ -328,11 +323,11 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
>
>         if (KVM_GUEST_KERNEL_MODE(vcpu)
>             && (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1)) {
> -               ret = kvm_mips_bad_store(cause, opc, run, vcpu);
> +               ret = kvm_mips_bad_store(cause, opc, vcpu);
>         } else {
>                 kvm_err("Address Error (STORE): cause %#x, PC: %p, BadVaddr: %#lx\n",
>                         cause, opc, badvaddr);
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -340,18 +335,17 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         int ret = RESUME_GUEST;
>
>         if (KSEGX(badvaddr) == CKSEG0 || KSEGX(badvaddr) == CKSEG1) {
> -               ret = kvm_mips_bad_load(cause, opc, run, vcpu);
> +               ret = kvm_mips_bad_load(cause, opc, vcpu);
>         } else {
>                 kvm_err("Address Error (LOAD): cause %#x, PC: %p, BadVaddr: %#lx\n",
>                         cause, opc, badvaddr);
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -359,17 +353,16 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_emulate_syscall(cause, opc, run, vcpu);
> +       er = kvm_mips_emulate_syscall(cause, opc, vcpu);
>         if (er == EMULATE_DONE)
>                 ret = RESUME_GUEST;
>         else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -377,17 +370,16 @@ static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_handle_ri(cause, opc, run, vcpu);
> +       er = kvm_mips_handle_ri(cause, opc, vcpu);
>         if (er == EMULATE_DONE)
>                 ret = RESUME_GUEST;
>         else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -395,17 +387,16 @@ static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_emulate_bp_exc(cause, opc, run, vcpu);
> +       er = kvm_mips_emulate_bp_exc(cause, opc, vcpu);
>         if (er == EMULATE_DONE)
>                 ret = RESUME_GUEST;
>         else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -413,17 +404,16 @@ static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_trap(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *)vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_emulate_trap_exc(cause, opc, run, vcpu);
> +       er = kvm_mips_emulate_trap_exc(cause, opc, vcpu);
>         if (er == EMULATE_DONE) {
>                 ret = RESUME_GUEST;
>         } else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -431,17 +421,16 @@ static int kvm_trap_emul_handle_trap(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_msa_fpe(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *)vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_emulate_msafpe_exc(cause, opc, run, vcpu);
> +       er = kvm_mips_emulate_msafpe_exc(cause, opc, vcpu);
>         if (er == EMULATE_DONE) {
>                 ret = RESUME_GUEST;
>         } else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -449,17 +438,16 @@ static int kvm_trap_emul_handle_msa_fpe(struct kvm_vcpu *vcpu)
>
>  static int kvm_trap_emul_handle_fpe(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *)vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
>         int ret = RESUME_GUEST;
>
> -       er = kvm_mips_emulate_fpe_exc(cause, opc, run, vcpu);
> +       er = kvm_mips_emulate_fpe_exc(cause, opc, vcpu);
>         if (er == EMULATE_DONE) {
>                 ret = RESUME_GUEST;
>         } else {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>         }
>         return ret;
> @@ -474,7 +462,6 @@ static int kvm_trap_emul_handle_fpe(struct kvm_vcpu *vcpu)
>  static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> -       struct kvm_run *run = vcpu->run;
>         u32 __user *opc = (u32 __user *) vcpu->arch.pc;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_DONE;
> @@ -486,10 +473,10 @@ static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
>                  * No MSA in guest, or FPU enabled and not in FR=1 mode,
>                  * guest reserved instruction exception
>                  */
> -               er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
> +               er = kvm_mips_emulate_ri_exc(cause, opc, vcpu);
>         } else if (!(kvm_read_c0_guest_config5(cop0) & MIPS_CONF5_MSAEN)) {
>                 /* MSA disabled by guest, guest MSA disabled exception */
> -               er = kvm_mips_emulate_msadis_exc(cause, opc, run, vcpu);
> +               er = kvm_mips_emulate_msadis_exc(cause, opc, vcpu);
>         } else {
>                 /* Restore MSA/FPU state */
>                 kvm_own_msa(vcpu);
> @@ -502,7 +489,7 @@ static int kvm_trap_emul_handle_msa_disabled(struct kvm_vcpu *vcpu)
>                 break;
>
>         case EMULATE_FAIL:
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>                 break;
>
> @@ -1181,8 +1168,7 @@ void kvm_trap_emul_gva_lockless_end(struct kvm_vcpu *vcpu)
>         local_irq_enable();
>  }
>
> -static void kvm_trap_emul_vcpu_reenter(struct kvm_run *run,
> -                                      struct kvm_vcpu *vcpu)
> +static void kvm_trap_emul_vcpu_reenter(struct kvm_vcpu *vcpu)
>  {
>         struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
>         struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;
> @@ -1225,7 +1211,7 @@ static void kvm_trap_emul_vcpu_reenter(struct kvm_run *run,
>         check_mmu_context(mm);
>  }
>
> -static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int kvm_trap_emul_vcpu_run(struct kvm_vcpu *vcpu)
>  {
>         int cpu = smp_processor_id();
>         int r;
> @@ -1234,7 +1220,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
>         kvm_mips_deliver_interrupts(vcpu,
>                                     kvm_read_c0_guest_cause(vcpu->arch.cop0));
>
> -       kvm_trap_emul_vcpu_reenter(run, vcpu);
> +       kvm_trap_emul_vcpu_reenter(vcpu);
>
>         /*
>          * We use user accessors to access guest memory, but we don't want to
> @@ -1252,7 +1238,7 @@ static int kvm_trap_emul_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
>          */
>         kvm_mips_suspend_mm(cpu);
>
> -       r = vcpu->arch.vcpu_run(run, vcpu);
> +       r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
>
>         /* We may have migrated while handling guest exits */
>         cpu = smp_processor_id();
> diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
> index dde20887a70d..94f1d23828e3 100644
> --- a/arch/mips/kvm/vz.c
> +++ b/arch/mips/kvm/vz.c
> @@ -899,7 +899,6 @@ static void kvm_write_maari(struct kvm_vcpu *vcpu, unsigned long val)
>
>  static enum emulation_result kvm_vz_gpsi_cop0(union mips_instruction inst,
>                                               u32 *opc, u32 cause,
> -                                             struct kvm_run *run,
>                                               struct kvm_vcpu *vcpu)
>  {
>         struct mips_coproc *cop0 = vcpu->arch.cop0;
> @@ -1062,7 +1061,6 @@ static enum emulation_result kvm_vz_gpsi_cop0(union mips_instruction inst,
>
>  static enum emulation_result kvm_vz_gpsi_cache(union mips_instruction inst,
>                                                u32 *opc, u32 cause,
> -                                              struct kvm_run *run,
>                                                struct kvm_vcpu *vcpu)
>  {
>         enum emulation_result er = EMULATE_DONE;
> @@ -1134,7 +1132,6 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
>  {
>         enum emulation_result er = EMULATE_DONE;
>         struct kvm_vcpu_arch *arch = &vcpu->arch;
> -       struct kvm_run *run = vcpu->run;
>         union mips_instruction inst;
>         int rd, rt, sel;
>         int err;
> @@ -1150,12 +1147,12 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
>
>         switch (inst.r_format.opcode) {
>         case cop0_op:
> -               er = kvm_vz_gpsi_cop0(inst, opc, cause, run, vcpu);
> +               er = kvm_vz_gpsi_cop0(inst, opc, cause, vcpu);
>                 break;
>  #ifndef CONFIG_CPU_MIPSR6
>         case cache_op:
>                 trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
> -               er = kvm_vz_gpsi_cache(inst, opc, cause, run, vcpu);
> +               er = kvm_vz_gpsi_cache(inst, opc, cause, vcpu);
>                 break;
>  #endif
>         case spec3_op:
> @@ -1163,7 +1160,7 @@ static enum emulation_result kvm_trap_vz_handle_gpsi(u32 cause, u32 *opc,
>  #ifdef CONFIG_CPU_MIPSR6
>                 case cache6_op:
>                         trace_kvm_exit(vcpu, KVM_TRACE_EXIT_CACHE);
> -                       er = kvm_vz_gpsi_cache(inst, opc, cause, run, vcpu);
> +                       er = kvm_vz_gpsi_cache(inst, opc, cause, vcpu);
>                         break;
>  #endif
>                 case rdhwr_op:
> @@ -1465,7 +1462,6 @@ static int kvm_trap_vz_handle_guest_exit(struct kvm_vcpu *vcpu)
>   */
>  static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
>         u32 cause = vcpu->arch.host_cp0_cause;
>         enum emulation_result er = EMULATE_FAIL;
>         int ret = RESUME_GUEST;
> @@ -1493,7 +1489,7 @@ static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
>                 break;
>
>         case EMULATE_FAIL:
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
>                 break;
>
> @@ -1512,8 +1508,6 @@ static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
>   */
>  static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
>  {
> -       struct kvm_run *run = vcpu->run;
> -
>         /*
>          * If MSA not present or not exposed to guest or FR=0, the MSA operation
>          * should have been treated as a reserved instruction!
> @@ -1524,7 +1518,7 @@ static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
>             (read_gc0_status() & (ST0_CU1 | ST0_FR)) == ST0_CU1 ||
>             !(read_gc0_config5() & MIPS_CONF5_MSAEN) ||
>             vcpu->arch.aux_inuse & KVM_MIPS_AUX_MSA) {
> -               run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +               vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 return RESUME_HOST;
>         }
>
> @@ -1560,7 +1554,7 @@ static int kvm_trap_vz_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
>                 }
>
>                 /* Treat as MMIO */
> -               er = kvm_mips_emulate_load(inst, cause, run, vcpu);
> +               er = kvm_mips_emulate_load(inst, cause, vcpu);
>                 if (er == EMULATE_FAIL) {
>                         kvm_err("Guest Emulate Load from MMIO space failed: PC: %p, BadVaddr: %#lx\n",
>                                 opc, badvaddr);
> @@ -1607,7 +1601,7 @@ static int kvm_trap_vz_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
>                 }
>
>                 /* Treat as MMIO */
> -               er = kvm_mips_emulate_store(inst, cause, run, vcpu);
> +               er = kvm_mips_emulate_store(inst, cause, vcpu);
>                 if (er == EMULATE_FAIL) {
>                         kvm_err("Guest Emulate Store to MMIO space failed: PC: %p, BadVaddr: %#lx\n",
>                                 opc, badvaddr);
> @@ -3129,7 +3123,7 @@ static void kvm_vz_flush_shadow_memslot(struct kvm *kvm,
>         kvm_vz_flush_shadow_all(kvm);
>  }
>
> -static void kvm_vz_vcpu_reenter(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static void kvm_vz_vcpu_reenter(struct kvm_vcpu *vcpu)
>  {
>         int cpu = smp_processor_id();
>         int preserve_guest_tlb;
> @@ -3145,7 +3139,7 @@ static void kvm_vz_vcpu_reenter(struct kvm_run *run, struct kvm_vcpu *vcpu)
>                 kvm_vz_vcpu_load_wired(vcpu);
>  }
>
> -static int kvm_vz_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int kvm_vz_vcpu_run(struct kvm_vcpu *vcpu)
>  {
>         int cpu = smp_processor_id();
>         int r;
> @@ -3158,7 +3152,7 @@ static int kvm_vz_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu)
>         kvm_vz_vcpu_load_tlb(vcpu, cpu);
>         kvm_vz_vcpu_load_wired(vcpu);
>
> -       r = vcpu->arch.vcpu_run(run, vcpu);
> +       r = vcpu->arch.vcpu_run(vcpu->run, vcpu);
>
>         kvm_vz_vcpu_save_wired(vcpu);
>
> --
> 2.17.1
>
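
The shape of the conversion is the same at every call site in the series:
drop the 'struct kvm_run *' parameter and, where the body still needs it,
fetch it from 'vcpu->run' in the function prologue. A minimal sketch of the
before/after shape (the handler name and body are illustrative, not taken
from any one hunk):

    /* Before: callers must pass run even though it is always vcpu->run. */
    static int handle_example(u32 cause, u32 *opc, struct kvm_run *run,
                              struct kvm_vcpu *vcpu)
    {
            run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
            return RESUME_HOST;
    }

    /* After: one parameter; the prologue derives the run struct. */
    static int handle_example(u32 cause, u32 *opc, struct kvm_vcpu *vcpu)
    {
            struct kvm_run *run = vcpu->run;

            run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
            return RESUME_HOST;
    }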


* Re: [PATCH v4 1/7] KVM: s390: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 1/7] KVM: s390: " Tianjia Zhang
@ 2020-04-29 12:03   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 29+ messages in thread
From: Vitaly Kuznetsov @ 2020-04-29 12:03 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang, pbonzini, tsbogend,
	paulus, mpe, benh, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, wanpengli, jmattson,
	joro, tglx, mingo, bp, x86, hpa, maz, james.morse,
	julien.thierry.kdev, suzuki.poulose, christoffer.dall, peterx,
	thuth, chenhuacai

Tianjia Zhang <tianjia.zhang@linux.alibaba.com> writes:

> In the current KVM code, 'kvm_run' is already embedded in the 'kvm_vcpu'
> structure. For historical reasons, many KVM functions still take both a
> 'kvm_run' and a 'kvm_vcpu' parameter. This patch does a unified cleanup
> of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index e335a7e5ead7..c0d94eaa00d7 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -4176,8 +4176,9 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
>  	return rc;
>  }
>  
> -static void sync_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> +static void sync_regs_fmt2(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *kvm_run = vcpu->run;
>  	struct runtime_instr_cb *riccb;
>  	struct gs_cb *gscb;
>  
> @@ -4243,8 +4244,10 @@ static void sync_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  	/* SIE will load etoken directly from SDNX and therefore kvm_run */
>  }
>  
> -static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> +static void sync_regs(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *kvm_run = vcpu->run;
> +
>  	if (kvm_run->kvm_dirty_regs & KVM_SYNC_PREFIX)
>  		kvm_s390_set_prefix(vcpu, kvm_run->s.regs.prefix);
>  	if (kvm_run->kvm_dirty_regs & KVM_SYNC_CRS) {
> @@ -4273,7 +4276,7 @@ static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  
>  	/* Sync fmt2 only data */
>  	if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) {
> -		sync_regs_fmt2(vcpu, kvm_run);
> +		sync_regs_fmt2(vcpu);
>  	} else {
>  		/*
>  		 * In several places we have to modify our internal view to
> @@ -4292,8 +4295,10 @@ static void sync_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  	kvm_run->kvm_dirty_regs = 0;
>  }
>  
> -static void store_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> +static void store_regs_fmt2(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *kvm_run = vcpu->run;
> +
>  	kvm_run->s.regs.todpr = vcpu->arch.sie_block->todpr;
>  	kvm_run->s.regs.pp = vcpu->arch.sie_block->pp;
>  	kvm_run->s.regs.gbea = vcpu->arch.sie_block->gbea;
> @@ -4313,8 +4318,10 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  	/* SIE will save etoken directly into SDNX and therefore kvm_run */
>  }
>  
> -static void store_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
> +static void store_regs(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *kvm_run = vcpu->run;
> +
>  	kvm_run->psw_mask = vcpu->arch.sie_block->gpsw.mask;
>  	kvm_run->psw_addr = vcpu->arch.sie_block->gpsw.addr;
>  	kvm_run->s.regs.prefix = kvm_s390_get_prefix(vcpu);
> @@ -4333,7 +4340,7 @@ static void store_regs(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  	current->thread.fpu.fpc = vcpu->arch.host_fpregs.fpc;
>  	current->thread.fpu.regs = vcpu->arch.host_fpregs.regs;
>  	if (likely(!kvm_s390_pv_cpu_is_protected(vcpu)))
> -		store_regs_fmt2(vcpu, kvm_run);
> +		store_regs_fmt2(vcpu);
>  }
>  
>  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> @@ -4371,7 +4378,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		goto out;
>  	}
>  
> -	sync_regs(vcpu, kvm_run);
> +	sync_regs(vcpu);
>  	enable_cpu_timer_accounting(vcpu);
>  
>  	might_fault();
> @@ -4393,7 +4400,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  	}
>  
>  	disable_cpu_timer_accounting(vcpu);
> -	store_regs(vcpu, kvm_run);
> +	store_regs(vcpu);
>  
>  	kvm_sigset_deactivate(vcpu);

I haven't tried to compile this, but the change itself looks obviously
correct, so

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly
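
One detail worth noting in the s390 hunks above: the new local keeps the
original name 'kvm_run' rather than 'run', so every reference in the
function bodies compiles unchanged and the diff stays confined to the
prologues. Condensed illustration (the first body line is taken from the
patch, the rest is elided):

    static void store_regs(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *kvm_run = vcpu->run;    /* old name kept */

            kvm_run->psw_mask = vcpu->arch.sie_block->gpsw.mask;
            /* ... body otherwise identical to before the change ... */
    }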



* Re: [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 2/7] KVM: arm64: " Tianjia Zhang
@ 2020-04-29 12:07   ` Vitaly Kuznetsov
  2020-05-05  8:39   ` Marc Zyngier
  1 sibling, 0 replies; 29+ messages in thread
From: Vitaly Kuznetsov @ 2020-04-29 12:07 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang, pbonzini, tsbogend,
	paulus, mpe, benh, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, wanpengli, jmattson,
	joro, tglx, mingo, bp, x86, hpa, maz, james.morse,
	julien.thierry.kdev, suzuki.poulose, christoffer.dall, peterx,
	thuth, chenhuacai

Tianjia Zhang <tianjia.zhang@linux.alibaba.com> writes:

> In the current KVM code, 'kvm_run' is already embedded in the 'kvm_vcpu'
> structure. For historical reasons, many KVM functions still take both a
> 'kvm_run' and a 'kvm_vcpu' parameter. This patch does a unified cleanup
> of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/arm64/include/asm/kvm_coproc.h | 12 +++++-----
>  arch/arm64/include/asm/kvm_host.h   | 11 ++++-----
>  arch/arm64/include/asm/kvm_mmu.h    |  2 +-
>  arch/arm64/kvm/handle_exit.c        | 36 ++++++++++++++---------------
>  arch/arm64/kvm/sys_regs.c           | 13 +++++------
>  virt/kvm/arm/arm.c                  |  6 ++---
>  virt/kvm/arm/mmio.c                 | 11 +++++----
>  virt/kvm/arm/mmu.c                  |  5 ++--
>  8 files changed, 46 insertions(+), 50 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
> index 0185ee8b8b5e..454373704b8a 100644
> --- a/arch/arm64/include/asm/kvm_coproc.h
> +++ b/arch/arm64/include/asm/kvm_coproc.h
> @@ -27,12 +27,12 @@ struct kvm_sys_reg_target_table {
>  void kvm_register_target_sys_reg_table(unsigned int target,
>  				       struct kvm_sys_reg_target_table *table);
>  
> -int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu);
> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu);
> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu);
> +int kvm_handle_cp15_32(struct kvm_vcpu *vcpu);
> +int kvm_handle_cp15_64(struct kvm_vcpu *vcpu);
> +int kvm_handle_sys_reg(struct kvm_vcpu *vcpu);
>  
>  #define kvm_coproc_table_init kvm_sys_reg_table_init
>  void kvm_sys_reg_table_init(void);
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 32c8a675e5a4..3fab32e4948c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -481,18 +481,15 @@ u64 __kvm_call_hyp(void *hypfn, ...);
>  void force_vm_exit(const cpumask_t *mask);
>  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>  
> -int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		int exception_index);
> -void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		       int exception_index);
> +int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
> +void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index);
>  
>  /* MMIO helpers */
>  void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
>  unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
>  
> -int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		 phys_addr_t fault_ipa);
> +int kvm_handle_mmio_return(struct kvm_vcpu *vcpu);
> +int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa);
>  
>  int kvm_perf_init(void);
>  int kvm_perf_teardown(void);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 30b0e8d6b895..2ec7b9bb25d3 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -159,7 +159,7 @@ void kvm_free_stage2_pgd(struct kvm *kvm);
>  int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  			  phys_addr_t pa, unsigned long size, bool writable);
>  
> -int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
>  
>  void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index aacfc55de44c..ec3a66642ea5 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -25,7 +25,7 @@
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
>  
> -typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
>  
>  static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
>  {
> @@ -33,7 +33,7 @@ static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
>  		kvm_inject_vabt(vcpu);
>  }
>  
> -static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int handle_hvc(struct kvm_vcpu *vcpu)
>  {
>  	int ret;
>  
> @@ -50,7 +50,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return ret;
>  }
>  
> -static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int handle_smc(struct kvm_vcpu *vcpu)
>  {
>  	/*
>  	 * "If an SMC instruction executed at Non-secure EL1 is
> @@ -69,7 +69,7 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   * Guest access to FP/ASIMD registers are routed to this handler only
>   * when the system doesn't support FP/ASIMD.
>   */
> -static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
>  {
>  	kvm_inject_undefined(vcpu);
>  	return 1;
> @@ -87,7 +87,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   * world-switches and schedule other host processes until there is an
>   * incoming IRQ or FIQ to the VM.
>   */
> -static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
>  {
>  	if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
>  		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
> @@ -109,16 +109,16 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   * kvm_handle_guest_debug - handle a debug exception instruction
>   *
>   * @vcpu:	the vcpu pointer
> - * @run:	access to the kvm_run structure for results
>   *
>   * We route all debug exceptions through the same handler. If both the
>   * guest and host are using the same debug facilities it will be up to
>   * userspace to re-inject the correct exception for guest delivery.
>   *
> - * @return: 0 (while setting run->exit_reason), -1 for error
> + * @return: 0 (while setting vcpu->run->exit_reason), -1 for error
>   */
> -static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	u32 hsr = kvm_vcpu_get_hsr(vcpu);
>  	int ret = 0;
>  
> @@ -144,7 +144,7 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return ret;
>  }
>  
> -static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
>  {
>  	u32 hsr = kvm_vcpu_get_hsr(vcpu);
>  
> @@ -155,7 +155,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int handle_sve(struct kvm_vcpu *vcpu)
>  {
>  	/* Until SVE is supported for guests: */
>  	kvm_inject_undefined(vcpu);
> @@ -193,7 +193,7 @@ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>   * a NOP).
>   */
> -static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
>  {
>  	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
> @@ -238,7 +238,7 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
>   * KVM_EXIT_DEBUG, otherwise userspace needs to complete its
>   * emulation first.
>   */
> -static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int handle_trap_exceptions(struct kvm_vcpu *vcpu)
>  {
>  	int handled;
>  
> @@ -253,7 +253,7 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		exit_handle_fn exit_handler;
>  
>  		exit_handler = kvm_get_exit_handler(vcpu);
> -		handled = exit_handler(vcpu, run);
> +		handled = exit_handler(vcpu);
>  	}
>  
>  	return handled;
> @@ -263,9 +263,10 @@ static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
>   * proper exit to userspace.
>   */
> -int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		       int exception_index)
> +int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
>  {
> +	struct kvm_run *run = vcpu->run;
> +
>  	if (ARM_SERROR_PENDING(exception_index)) {
>  		u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
>  
> @@ -291,7 +292,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	case ARM_EXCEPTION_EL1_SERROR:
>  		return 1;
>  	case ARM_EXCEPTION_TRAP:
> -		return handle_trap_exceptions(vcpu, run);
> +		return handle_trap_exceptions(vcpu);
>  	case ARM_EXCEPTION_HYP_GONE:
>  		/*
>  		 * EL2 has been reset to the hyp-stub. This happens when a guest
> @@ -315,8 +316,7 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  }
>  
>  /* For exit types that need handling before we can be preempted */
> -void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		       int exception_index)
> +void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
>  {
>  	if (ARM_SERROR_PENDING(exception_index)) {
>  		if (this_cpu_has_cap(ARM64_HAS_RAS_EXTN)) {
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 51db934702b6..e5a0d0d676c8 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2116,7 +2116,7 @@ static const struct sys_reg_desc *find_reg(const struct sys_reg_params *params,
>  	return bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
>  }
>  
> -int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
>  {
>  	kvm_inject_undefined(vcpu);
>  	return 1;
> @@ -2295,7 +2295,7 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
>  	return 1;
>  }
>  
> -int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp15_64(struct kvm_vcpu *vcpu)
>  {
>  	const struct sys_reg_desc *target_specific;
>  	size_t num;
> @@ -2306,7 +2306,7 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  				target_specific, num);
>  }
>  
> -int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
>  {
>  	const struct sys_reg_desc *target_specific;
>  	size_t num;
> @@ -2317,14 +2317,14 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  				target_specific, num);
>  }
>  
> -int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu)
>  {
>  	return kvm_handle_cp_64(vcpu,
>  				cp14_64_regs, ARRAY_SIZE(cp14_64_regs),
>  				NULL, 0);
>  }
>  
> -int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
>  {
>  	return kvm_handle_cp_32(vcpu,
>  				cp14_regs, ARRAY_SIZE(cp14_regs),
> @@ -2382,9 +2382,8 @@ static void reset_sys_reg_descs(struct kvm_vcpu *vcpu,
>  /**
>   * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
>   * @vcpu: The VCPU pointer
> - * @run:  The kvm_run struct
>   */
> -int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
>  {
>  	struct sys_reg_params params;
>  	unsigned long esr = kvm_vcpu_get_hsr(vcpu);
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index f5390ac2165b..dbeb20804a75 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -659,7 +659,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		return ret;
>  
>  	if (run->exit_reason == KVM_EXIT_MMIO) {
> -		ret = kvm_handle_mmio_return(vcpu, run);
> +		ret = kvm_handle_mmio_return(vcpu);
>  		if (ret)
>  			return ret;
>  	}
> @@ -811,11 +811,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>  
>  		/* Exit types that need handling before we can be preempted */
> -		handle_exit_early(vcpu, run, ret);
> +		handle_exit_early(vcpu, ret);
>  
>  		preempt_enable();
>  
> -		ret = handle_exit(vcpu, run, ret);
> +		ret = handle_exit(vcpu, ret);
>  	}
>  
>  	/* Tell userspace about in-kernel device output levels */
> diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
> index aedfcff99ac5..41ef5c5dbc62 100644
> --- a/virt/kvm/arm/mmio.c
> +++ b/virt/kvm/arm/mmio.c
> @@ -77,9 +77,8 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
>   *			     or in-kernel IO emulation
>   *
>   * @vcpu: The VCPU pointer
> - * @run:  The VCPU run struct containing the mmio data
>   */
> -int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long data;
>  	unsigned int len;
> @@ -92,6 +91,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	vcpu->mmio_needed = 0;
>  
>  	if (!kvm_vcpu_dabt_iswrite(vcpu)) {
> +		struct kvm_run *run = vcpu->run;
> +
>  		len = kvm_vcpu_dabt_get_as(vcpu);
>  		data = kvm_mmio_read_buf(run->mmio.data, len);
>  
> @@ -119,9 +120,9 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 0;
>  }
>  
> -int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
> -		 phys_addr_t fault_ipa)
> +int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	unsigned long data;
>  	unsigned long rt;
>  	int ret;
> @@ -188,7 +189,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		if (!is_write)
>  			memcpy(run->mmio.data, data_buf, len);
>  		vcpu->stat.mmio_exit_kernel++;
> -		kvm_handle_mmio_return(vcpu, run);
> +		kvm_handle_mmio_return(vcpu);
>  		return 1;
>  	}
>  
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e3b9ee268823..c5dc58226b5b 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1892,7 +1892,6 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
>  /**
>   * kvm_handle_guest_abort - handles all 2nd stage aborts
>   * @vcpu:	the VCPU pointer
> - * @run:	the kvm_run structure
>   *
>   * Any abort that gets to the host is almost guaranteed to be caused by a
>   * missing second stage translation table entry, which can mean that either the
> @@ -1901,7 +1900,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
>   * space. The distinction is based on the IPA causing the fault and whether this
>   * memory region has been registered as standard RAM by user space.
>   */
> -int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long fault_status;
>  	phys_addr_t fault_ipa;
> @@ -1980,7 +1979,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		 * of the page size.
>  		 */
>  		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> -		ret = io_mem_abort(vcpu, run, fault_ipa);
> +		ret = io_mem_abort(vcpu, fault_ipa);
>  		goto out_unlock;
>  	}

Haven't tried to compile this, but the change itself looks obviously
correct, so

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

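The transformation is the same mechanical pattern across the whole
series. A stand-alone sketch of it (stub types and a made-up
'handle_foo' name, not the kernel's actual definitions):

    struct kvm_run  { int exit_reason; };
    struct kvm_vcpu { struct kvm_run *run; };

    /* before: 'run' is threaded through as an extra parameter even
     * though it always equals vcpu->run */
    int handle_foo_old(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            return run->exit_reason;
    }

    /* after: the parameter is dropped; where the body still needs
     * the kvm_run, it is derived locally from the vcpu */
    int handle_foo_new(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *run = vcpu->run;

            return run->exit_reason;
    }
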
-- 
Vitaly



* Re: [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch
  2020-04-27  4:35 ` [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch Tianjia Zhang
@ 2020-04-29 12:23   ` Vitaly Kuznetsov
  2020-05-26  4:36   ` Paul Mackerras
  2020-05-27  4:20   ` Paul Mackerras
  2 siblings, 0 replies; 29+ messages in thread
From: Vitaly Kuznetsov @ 2020-04-29 12:23 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang, pbonzini, tsbogend,
	paulus, mpe, benh, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, wanpengli, jmattson,
	joro, tglx, mingo, bp, x86, hpa, maz, james.morse,
	julien.thierry.kdev, suzuki.poulose, christoffer.dall, peterx,
	thuth, chenhuacai

Tianjia Zhang <tianjia.zhang@linux.alibaba.com> writes:

> The 'kvm_vcpu' structure already carries a 'kvm_run' pointer
> ('vcpu->run'), which points to the same structure as the 'kvm_run'
> pointer kept in 'vcpu_arch', so the copy in 'vcpu_arch' is redundant
> and should be deleted.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/powerpc/include/asm/kvm_host.h | 1 -
>  arch/powerpc/kvm/book3s_hv.c        | 6 ++----
>  arch/powerpc/kvm/book3s_hv_nested.c | 3 +--
>  3 files changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 1dc63101ffe1..2745ff8faa01 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -795,7 +795,6 @@ struct kvm_vcpu_arch {
>  	struct mmio_hpte_cache_entry *pgfault_cache;
>  
>  	struct task_struct *run_task;
> -	struct kvm_run *kvm_run;
>  
>  	spinlock_t vpa_update_lock;
>  	struct kvmppc_vpa vpa;
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 93493f0cbfe8..413ea2dcb10c 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2934,7 +2934,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
>  
>  		ret = RESUME_GUEST;
>  		if (vcpu->arch.trap)
> -			ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
> +			ret = kvmppc_handle_exit_hv(vcpu->run, vcpu,
>  						    vcpu->arch.run_task);
>  
>  		vcpu->arch.ret = ret;
> @@ -3920,7 +3920,6 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  	spin_lock(&vc->lock);
>  	vcpu->arch.ceded = 0;
>  	vcpu->arch.run_task = current;
> -	vcpu->arch.kvm_run = kvm_run;
>  	vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
>  	vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
>  	vcpu->arch.busy_preempt = TB_NIL;
> @@ -3973,7 +3972,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  			if (signal_pending(v->arch.run_task)) {
>  				kvmppc_remove_runnable(vc, v);
>  				v->stat.signal_exits++;
> -				v->arch.kvm_run->exit_reason = KVM_EXIT_INTR;
> +				v->run->exit_reason = KVM_EXIT_INTR;
>  				v->arch.ret = -EINTR;
>  				wake_up(&v->arch.cpu_run);
>  			}
> @@ -4049,7 +4048,6 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  	vc = vcpu->arch.vcore;
>  	vcpu->arch.ceded = 0;
>  	vcpu->arch.run_task = current;
> -	vcpu->arch.kvm_run = kvm_run;
>  	vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
>  	vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
>  	vcpu->arch.busy_preempt = TB_NIL;
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index dc97e5be76f6..5a3987f3ebf3 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -290,8 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
>  			r = RESUME_HOST;
>  			break;
>  		}
> -		r = kvmhv_run_single_vcpu(vcpu->arch.kvm_run, vcpu, hdec_exp,
> -					  lpcr);
> +		r = kvmhv_run_single_vcpu(vcpu->run, vcpu, hdec_exp, lpcr);
>  	} while (is_kvmppc_resume_guest(r));
>  
>  	/* save L2 state for return */

FWIW,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

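Why the field was safe to drop, as a stand-alone sketch (stub types
only, not the kernel's definitions): the arch copy was only ever
assigned from the kvm_run handed to the run path, which is the vcpu's
own run area, so both pointers always aliased the same object.

    struct kvm_run       { int exit_reason; };
    struct kvm_vcpu_arch { struct kvm_run *kvm_run; };  /* the dropped field */
    struct kvm_vcpu      { struct kvm_run *run; struct kvm_vcpu_arch arch; };

    void enter_run_loop(struct kvm_vcpu *vcpu)
    {
            /* the only writers did the equivalent of this pure alias */
            vcpu->arch.kvm_run = vcpu->run;
    }

    /* so every reader can substitute vcpu->run directly, e.g.
     * vcpu->arch.kvm_run->exit_reason == vcpu->run->exit_reason */
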
-- 
Vitaly



* Re: [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters Tianjia Zhang
@ 2020-04-29 12:32   ` Vitaly Kuznetsov
  2020-05-26  5:49   ` Paul Mackerras
  1 sibling, 0 replies; 29+ messages in thread
From: Vitaly Kuznetsov @ 2020-04-29 12:32 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel, tianjia.zhang, pbonzini, tsbogend,
	paulus, mpe, benh, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, wanpengli, jmattson,
	joro, tglx, mingo, bp, x86, hpa, maz, james.morse,
	julien.thierry.kdev, suzuki.poulose, christoffer.dall, peterx,
	thuth, chenhuacai

Tianjia Zhang <tianjia.zhang@linux.alibaba.com> writes:

> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/powerpc/include/asm/kvm_book3s.h    | 16 +++---
>  arch/powerpc/include/asm/kvm_ppc.h       | 27 +++++----
>  arch/powerpc/kvm/book3s.c                |  4 +-
>  arch/powerpc/kvm/book3s.h                |  2 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c      | 12 ++--
>  arch/powerpc/kvm/book3s_64_mmu_radix.c   |  4 +-
>  arch/powerpc/kvm/book3s_emulate.c        | 10 ++--
>  arch/powerpc/kvm/book3s_hv.c             | 60 ++++++++++----------
>  arch/powerpc/kvm/book3s_hv_nested.c      | 11 ++--
>  arch/powerpc/kvm/book3s_paired_singles.c | 72 ++++++++++++------------
>  arch/powerpc/kvm/book3s_pr.c             | 30 +++++-----
>  arch/powerpc/kvm/booke.c                 | 36 ++++++------
>  arch/powerpc/kvm/booke.h                 |  8 +--
>  arch/powerpc/kvm/booke_emulate.c         |  2 +-
>  arch/powerpc/kvm/e500_emulate.c          | 15 +++--
>  arch/powerpc/kvm/emulate.c               | 10 ++--
>  arch/powerpc/kvm/emulate_loadstore.c     | 32 +++++------
>  arch/powerpc/kvm/powerpc.c               | 72 ++++++++++++------------
>  arch/powerpc/kvm/trace_hv.h              |  6 +-
>  19 files changed, 212 insertions(+), 217 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
> index 506e4df2d730..66dbb1f85d59 100644
> --- a/arch/powerpc/include/asm/kvm_book3s.h
> +++ b/arch/powerpc/include/asm/kvm_book3s.h
> @@ -155,12 +155,11 @@ extern void kvmppc_mmu_unmap_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
>  extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
>  extern void kvmppc_mmu_flush_segment(struct kvm_vcpu *vcpu, ulong eaddr, ulong seg_size);
>  extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
> -extern int kvmppc_book3s_hv_page_fault(struct kvm_run *run,
> -			struct kvm_vcpu *vcpu, unsigned long addr,
> -			unsigned long status);
> +extern int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
> +			unsigned long addr, unsigned long status);
>  extern long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr,
>  			unsigned long slb_v, unsigned long valid);
> -extern int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_hv_emulate_mmio(struct kvm_vcpu *vcpu,
>  			unsigned long gpa, gva_t ea, int is_store);
>  
>  extern void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte);
> @@ -174,8 +173,7 @@ extern void kvmppc_mmu_hpte_sysexit(void);
>  extern int kvmppc_mmu_hv_init(void);
>  extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc);
>  
> -extern int kvmppc_book3s_radix_page_fault(struct kvm_run *run,
> -			struct kvm_vcpu *vcpu,
> +extern int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
>  			unsigned long ea, unsigned long dsisr);
>  extern unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
>  					gva_t eaddr, void *to, void *from,
> @@ -234,7 +232,7 @@ extern void kvmppc_trigger_fac_interrupt(struct kvm_vcpu *vcpu, ulong fac);
>  extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
>  			   bool upper, u32 val);
>  extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
> -extern int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +extern int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu);
>  extern kvm_pfn_t kvmppc_gpa_to_pfn(struct kvm_vcpu *vcpu, gpa_t gpa,
>  			bool writing, bool *writable);
>  extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
> @@ -300,12 +298,12 @@ void kvmhv_set_ptbl_entry(unsigned int lpid, u64 dw0, u64 dw1);
>  void kvmhv_release_all_nested(struct kvm *kvm);
>  long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu);
>  long kvmhv_do_nested_tlbie(struct kvm_vcpu *vcpu);
> -int kvmhv_run_single_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu,
> +int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu,
>  			  u64 time_limit, unsigned long lpcr);
>  void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr);
>  void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
>  				   struct hv_guest_state *hr);
> -long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu);
>  
>  void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
>  
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index 94f5a32acaf1..ccf66b3a4c1d 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -58,28 +58,28 @@ enum xlate_readwrite {
>  	XLATE_WRITE		/* check for write permissions */
>  };
>  
> -extern int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
> -extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
> +extern int kvmppc_vcpu_run(struct kvm_vcpu *vcpu);
> +extern int __kvmppc_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
>  extern void kvmppc_handler_highmem(void);
>  
>  extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
> -extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_load(struct kvm_vcpu *vcpu,
>                                unsigned int rt, unsigned int bytes,
>  			      int is_default_endian);
> -extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_loads(struct kvm_vcpu *vcpu,
>                                 unsigned int rt, unsigned int bytes,
>  			       int is_default_endian);
> -extern int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu,
>  				unsigned int rt, unsigned int bytes,
>  			int is_default_endian, int mmio_sign_extend);
> -extern int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
>  		unsigned int rt, unsigned int bytes, int is_default_endian);
> -extern int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
>  		unsigned int rs, unsigned int bytes, int is_default_endian);
> -extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_store(struct kvm_vcpu *vcpu,
>  			       u64 val, unsigned int bytes,
>  			       int is_default_endian);
> -extern int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu,
>  				int rs, unsigned int bytes,
>  				int is_default_endian);
>  
> @@ -90,10 +90,9 @@ extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
>  		     bool data);
>  extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
>  		     bool data);
> -extern int kvmppc_emulate_instruction(struct kvm_run *run,
> -                                      struct kvm_vcpu *vcpu);
> +extern int kvmppc_emulate_instruction(struct kvm_vcpu *vcpu);
>  extern int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu);
> -extern int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +extern int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu);
>  extern void kvmppc_emulate_dec(struct kvm_vcpu *vcpu);
>  extern u32 kvmppc_get_dec(struct kvm_vcpu *vcpu, u64 tb);
>  extern void kvmppc_decrementer_func(struct kvm_vcpu *vcpu);
> @@ -267,7 +266,7 @@ struct kvmppc_ops {
>  	void (*vcpu_put)(struct kvm_vcpu *vcpu);
>  	void (*inject_interrupt)(struct kvm_vcpu *vcpu, int vec, u64 srr1_flags);
>  	void (*set_msr)(struct kvm_vcpu *vcpu, u64 msr);
> -	int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +	int (*vcpu_run)(struct kvm_vcpu *vcpu);
>  	int (*vcpu_create)(struct kvm_vcpu *vcpu);
>  	void (*vcpu_free)(struct kvm_vcpu *vcpu);
>  	int (*check_requests)(struct kvm_vcpu *vcpu);
> @@ -291,7 +290,7 @@ struct kvmppc_ops {
>  	int (*init_vm)(struct kvm *kvm);
>  	void (*destroy_vm)(struct kvm *kvm);
>  	int (*get_smmu_info)(struct kvm *kvm, struct kvm_ppc_smmu_info *info);
> -	int (*emulate_op)(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +	int (*emulate_op)(struct kvm_vcpu *vcpu,
>  			  unsigned int inst, int *advance);
>  	int (*emulate_mtspr)(struct kvm_vcpu *vcpu, int sprn, ulong spr_val);
>  	int (*emulate_mfspr)(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val);
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index 5690a1f9b976..345d22de213b 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -758,9 +758,9 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
>  }
>  EXPORT_SYMBOL_GPL(kvmppc_set_msr);
>  
> -int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> +int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
>  {
> -	return vcpu->kvm->arch.kvm_ops->vcpu_run(kvm_run, vcpu);
> +	return vcpu->kvm->arch.kvm_ops->vcpu_run(vcpu);
>  }
>  
>  int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
> diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
> index eae259ee49af..9b6323ec8e60 100644
> --- a/arch/powerpc/kvm/book3s.h
> +++ b/arch/powerpc/kvm/book3s.h
> @@ -18,7 +18,7 @@ extern void kvm_set_spte_hva_hv(struct kvm *kvm, unsigned long hva, pte_t pte);
>  
>  extern int kvmppc_mmu_init_pr(struct kvm_vcpu *vcpu);
>  extern void kvmppc_mmu_destroy_pr(struct kvm_vcpu *vcpu);
> -extern int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +extern int kvmppc_core_emulate_op_pr(struct kvm_vcpu *vcpu,
>  				     unsigned int inst, int *advance);
>  extern int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu,
>  					int sprn, ulong spr_val);
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 2b35f9bcf892..36a07656ebbb 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -413,7 +413,7 @@ static int instruction_is_store(unsigned int instr)
>  	return (instr & mask) != 0;
>  }
>  
> -int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_hv_emulate_mmio(struct kvm_vcpu *vcpu,
>  			   unsigned long gpa, gva_t ea, int is_store)
>  {
>  	u32 last_inst;
> @@ -473,10 +473,10 @@ int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  
>  	vcpu->arch.paddr_accessed = gpa;
>  	vcpu->arch.vaddr_accessed = ea;
> -	return kvmppc_emulate_mmio(run, vcpu);
> +	return kvmppc_emulate_mmio(vcpu);
>  }
>  
> -int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu,
>  				unsigned long ea, unsigned long dsisr)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -499,7 +499,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	pte_t pte, *ptep;
>  
>  	if (kvm_is_radix(kvm))
> -		return kvmppc_book3s_radix_page_fault(run, vcpu, ea, dsisr);
> +		return kvmppc_book3s_radix_page_fault(vcpu, ea, dsisr);
>  
>  	/*
>  	 * Real-mode code has already searched the HPT and found the
> @@ -519,7 +519,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			gpa_base = r & HPTE_R_RPN & ~(psize - 1);
>  			gfn_base = gpa_base >> PAGE_SHIFT;
>  			gpa = gpa_base | (ea & (psize - 1));
> -			return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
> +			return kvmppc_hv_emulate_mmio(vcpu, gpa, ea,
>  						dsisr & DSISR_ISSTORE);
>  		}
>  	}
> @@ -555,7 +555,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  
>  	/* No memslot means it's an emulated MMIO region */
>  	if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
> -		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
> +		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea,
>  					      dsisr & DSISR_ISSTORE);
>  
>  	/*
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index aa12cd4078b3..16c947bd5e87 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -887,7 +887,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
>  	return ret;
>  }
>  
> -int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_book3s_radix_page_fault(struct kvm_vcpu *vcpu,
>  				   unsigned long ea, unsigned long dsisr)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -933,7 +933,7 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			kvmppc_core_queue_data_storage(vcpu, ea, dsisr);
>  			return RESUME_GUEST;
>  		}
> -		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
> +		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea, writing);
>  	}
>  
>  	if (memslot->flags & KVM_MEM_READONLY) {
> diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
> index dad71d276b91..0effd48c8f4d 100644
> --- a/arch/powerpc/kvm/book3s_emulate.c
> +++ b/arch/powerpc/kvm/book3s_emulate.c
> @@ -235,7 +235,7 @@ void kvmppc_emulate_tabort(struct kvm_vcpu *vcpu, int ra_val)
>  
>  #endif
>  
> -int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_core_emulate_op_pr(struct kvm_vcpu *vcpu,
>  			      unsigned int inst, int *advance)
>  {
>  	int emulated = EMULATE_DONE;
> @@ -371,13 +371,13 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			if (kvmppc_h_pr(vcpu, cmd) == EMULATE_DONE)
>  				break;
>  
> -			run->papr_hcall.nr = cmd;
> +			vcpu->run->papr_hcall.nr = cmd;
>  			for (i = 0; i < 9; ++i) {
>  				ulong gpr = kvmppc_get_gpr(vcpu, 4 + i);
> -				run->papr_hcall.args[i] = gpr;
> +				vcpu->run->papr_hcall.args[i] = gpr;
>  			}
>  
> -			run->exit_reason = KVM_EXIT_PAPR_HCALL;
> +			vcpu->run->exit_reason = KVM_EXIT_PAPR_HCALL;
>  			vcpu->arch.hcall_needed = 1;
>  			emulated = EMULATE_EXIT_USER;
>  			break;
> @@ -629,7 +629,7 @@ int kvmppc_core_emulate_op_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	}
>  
>  	if (emulated == EMULATE_FAIL)
> -		emulated = kvmppc_emulate_paired_single(run, vcpu);
> +		emulated = kvmppc_emulate_paired_single(vcpu);
>  
>  	return emulated;
>  }
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 413ea2dcb10c..296bc6fb4eb1 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -1156,8 +1156,7 @@ static int kvmppc_hcall_impl_hv(unsigned long cmd)
>  	return kvmppc_hcall_impl_hv_realmode(cmd);
>  }
>  
> -static int kvmppc_emulate_debug_inst(struct kvm_run *run,
> -					struct kvm_vcpu *vcpu)
> +static int kvmppc_emulate_debug_inst(struct kvm_vcpu *vcpu)
>  {
>  	u32 last_inst;
>  
> @@ -1171,8 +1170,8 @@ static int kvmppc_emulate_debug_inst(struct kvm_run *run,
>  	}
>  
>  	if (last_inst == KVMPPC_INST_SW_BREAKPOINT) {
> -		run->exit_reason = KVM_EXIT_DEBUG;
> -		run->debug.arch.address = kvmppc_get_pc(vcpu);
> +		vcpu->run->exit_reason = KVM_EXIT_DEBUG;
> +		vcpu->run->debug.arch.address = kvmppc_get_pc(vcpu);
>  		return RESUME_HOST;
>  	} else {
>  		kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
> @@ -1273,9 +1272,10 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu)
>  	return RESUME_GUEST;
>  }
>  
> -static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
>  				 struct task_struct *tsk)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int r = RESUME_HOST;
>  
>  	vcpu->stat.sum_exits++;
> @@ -1410,7 +1410,7 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  				swab32(vcpu->arch.emul_inst) :
>  				vcpu->arch.emul_inst;
>  		if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) {
> -			r = kvmppc_emulate_debug_inst(run, vcpu);
> +			r = kvmppc_emulate_debug_inst(vcpu);
>  		} else {
>  			kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
>  			r = RESUME_GUEST;
> @@ -1462,7 +1462,7 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
>  {
>  	int r;
>  	int srcu_idx;
> @@ -1520,7 +1520,7 @@ static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	 */
>  	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
>  		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
> -		r = kvmhv_nested_page_fault(run, vcpu);
> +		r = kvmhv_nested_page_fault(vcpu);
>  		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
>  		break;
>  	case BOOK3S_INTERRUPT_H_INST_STORAGE:
> @@ -1530,7 +1530,7 @@ static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		if (vcpu->arch.shregs.msr & HSRR1_HISI_WRITE)
>  			vcpu->arch.fault_dsisr |= DSISR_ISSTORE;
>  		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
> -		r = kvmhv_nested_page_fault(run, vcpu);
> +		r = kvmhv_nested_page_fault(vcpu);
>  		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
>  		break;
>  
> @@ -2934,7 +2934,7 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
>  
>  		ret = RESUME_GUEST;
>  		if (vcpu->arch.trap)
> -			ret = kvmppc_handle_exit_hv(vcpu->run, vcpu,
> +			ret = kvmppc_handle_exit_hv(vcpu,
>  						    vcpu->arch.run_task);
>  
>  		vcpu->arch.ret = ret;
> @@ -3900,15 +3900,16 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
>  	return r;
>  }
>  
> -static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> +static int kvmppc_run_vcpu(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int n_ceded, i, r;
>  	struct kvmppc_vcore *vc;
>  	struct kvm_vcpu *v;
>  
>  	trace_kvmppc_run_vcpu_enter(vcpu);
>  
> -	kvm_run->exit_reason = 0;
> +	run->exit_reason = 0;
>  	vcpu->arch.ret = RESUME_GUEST;
>  	vcpu->arch.trap = 0;
>  	kvmppc_update_vpas(vcpu);
> @@ -3952,8 +3953,8 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  			r = kvmhv_setup_mmu(vcpu);
>  			spin_lock(&vc->lock);
>  			if (r) {
> -				kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
> -				kvm_run->fail_entry.
> +				run->exit_reason = KVM_EXIT_FAIL_ENTRY;
> +				run->fail_entry.
>  					hardware_entry_failure_reason = 0;
>  				vcpu->arch.ret = r;
>  				break;
> @@ -4013,7 +4014,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.state == KVMPPC_VCPU_RUNNABLE) {
>  		kvmppc_remove_runnable(vc, vcpu);
>  		vcpu->stat.signal_exits++;
> -		kvm_run->exit_reason = KVM_EXIT_INTR;
> +		run->exit_reason = KVM_EXIT_INTR;
>  		vcpu->arch.ret = -EINTR;
>  	}
>  
> @@ -4024,15 +4025,15 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  		wake_up(&v->arch.cpu_run);
>  	}
>  
> -	trace_kvmppc_run_vcpu_exit(vcpu, kvm_run);
> +	trace_kvmppc_run_vcpu_exit(vcpu);
>  	spin_unlock(&vc->lock);
>  	return vcpu->arch.ret;
>  }
>  
> -int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
> -			  struct kvm_vcpu *vcpu, u64 time_limit,
> +int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
>  			  unsigned long lpcr)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int trap, r, pcpu;
>  	int srcu_idx, lpid;
>  	struct kvmppc_vcore *vc;
> @@ -4041,7 +4042,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  
>  	trace_kvmppc_run_vcpu_enter(vcpu);
>  
> -	kvm_run->exit_reason = 0;
> +	run->exit_reason = 0;
>  	vcpu->arch.ret = RESUME_GUEST;
>  	vcpu->arch.trap = 0;
>  
> @@ -4165,9 +4166,9 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  	r = RESUME_GUEST;
>  	if (trap) {
>  		if (!nested)
> -			r = kvmppc_handle_exit_hv(kvm_run, vcpu, current);
> +			r = kvmppc_handle_exit_hv(vcpu, current);
>  		else
> -			r = kvmppc_handle_nested_exit(kvm_run, vcpu);
> +			r = kvmppc_handle_nested_exit(vcpu);
>  	}
>  	vcpu->arch.ret = r;
>  
> @@ -4177,7 +4178,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  		while (vcpu->arch.ceded && !kvmppc_vcpu_woken(vcpu)) {
>  			if (signal_pending(current)) {
>  				vcpu->stat.signal_exits++;
> -				kvm_run->exit_reason = KVM_EXIT_INTR;
> +				run->exit_reason = KVM_EXIT_INTR;
>  				vcpu->arch.ret = -EINTR;
>  				break;
>  			}
> @@ -4193,13 +4194,13 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  
>   done:
>  	kvmppc_remove_runnable(vc, vcpu);
> -	trace_kvmppc_run_vcpu_exit(vcpu, kvm_run);
> +	trace_kvmppc_run_vcpu_exit(vcpu);
>  
>  	return vcpu->arch.ret;
>  
>   sigpend:
>  	vcpu->stat.signal_exits++;
> -	kvm_run->exit_reason = KVM_EXIT_INTR;
> +	run->exit_reason = KVM_EXIT_INTR;
>  	vcpu->arch.ret = -EINTR;
>   out:
>  	local_irq_enable();
> @@ -4207,8 +4208,9 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
>  	goto done;
>  }
>  
> -static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int kvmppc_vcpu_run_hv(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int r;
>  	int srcu_idx;
>  	unsigned long ebb_regs[3] = {};	/* shut up GCC */
> @@ -4292,10 +4294,10 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		 */
>  		if (kvm->arch.threads_indep && kvm_is_radix(kvm) &&
>  		    !no_mixing_hpt_and_radix)
> -			r = kvmhv_run_single_vcpu(run, vcpu, ~(u64)0,
> +			r = kvmhv_run_single_vcpu(vcpu, ~(u64)0,
>  						  vcpu->arch.vcore->lpcr);
>  		else
> -			r = kvmppc_run_vcpu(run, vcpu);
> +			r = kvmppc_run_vcpu(vcpu);
>  
>  		if (run->exit_reason == KVM_EXIT_PAPR_HCALL &&
>  		    !(vcpu->arch.shregs.msr & MSR_PR)) {
> @@ -4305,7 +4307,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			kvmppc_core_prepare_to_enter(vcpu);
>  		} else if (r == RESUME_PAGE_FAULT) {
>  			srcu_idx = srcu_read_lock(&kvm->srcu);
> -			r = kvmppc_book3s_hv_page_fault(run, vcpu,
> +			r = kvmppc_book3s_hv_page_fault(vcpu,
>  				vcpu->arch.fault_dar, vcpu->arch.fault_dsisr);
>  			srcu_read_unlock(&kvm->srcu, srcu_idx);
>  		} else if (r == RESUME_PASSTHROUGH) {
> @@ -4979,7 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
>  }
>  
>  /* We don't need to emulate any privileged instructions or dcbz */
> -static int kvmppc_core_emulate_op_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_core_emulate_op_hv(struct kvm_vcpu *vcpu,
>  				     unsigned int inst, int *advance)
>  {
>  	return EMULATE_FAIL;
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 5a3987f3ebf3..fe4c535882e6 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -290,7 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
>  			r = RESUME_HOST;
>  			break;
>  		}
> -		r = kvmhv_run_single_vcpu(vcpu->run, vcpu, hdec_exp, lpcr);
> +		r = kvmhv_run_single_vcpu(vcpu, hdec_exp, lpcr);
>  	} while (is_kvmppc_resume_guest(r));
>  
>  	/* save L2 state for return */
> @@ -1256,8 +1256,7 @@ static inline int kvmppc_radix_shift_to_level(int shift)
>  }
>  
>  /* called with gp->tlb_lock held */
> -static long int __kvmhv_nested_page_fault(struct kvm_run *run,
> -					  struct kvm_vcpu *vcpu,
> +static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
>  					  struct kvm_nested_guest *gp)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -1340,7 +1339,7 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
>  		}
>  
>  		/* passthrough of emulated MMIO case */
> -		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
> +		return kvmppc_hv_emulate_mmio(vcpu, gpa, ea, writing);
>  	}
>  	if (memslot->flags & KVM_MEM_READONLY) {
>  		if (writing) {
> @@ -1427,13 +1426,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
>  	return RESUME_GUEST;
>  }
>  
> -long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_nested_guest *gp = vcpu->arch.nested;
>  	long int ret;
>  
>  	mutex_lock(&gp->tlb_lock);
> -	ret = __kvmhv_nested_page_fault(run, vcpu, gp);
> +	ret = __kvmhv_nested_page_fault(vcpu, gp);
>  	mutex_unlock(&gp->tlb_lock);
>  	return ret;
>  }
> diff --git a/arch/powerpc/kvm/book3s_paired_singles.c b/arch/powerpc/kvm/book3s_paired_singles.c
> index bf0282775e37..a11436720a8c 100644
> --- a/arch/powerpc/kvm/book3s_paired_singles.c
> +++ b/arch/powerpc/kvm/book3s_paired_singles.c
> @@ -169,7 +169,7 @@ static void kvmppc_inject_pf(struct kvm_vcpu *vcpu, ulong eaddr, bool is_store)
>  	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_DATA_STORAGE);
>  }
>  
> -static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_emulate_fpr_load(struct kvm_vcpu *vcpu,
>  				   int rs, ulong addr, int ls_type)
>  {
>  	int emulated = EMULATE_FAIL;
> @@ -188,7 +188,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		kvmppc_inject_pf(vcpu, addr, false);
>  		goto done_load;
>  	} else if (r == EMULATE_DO_MMIO) {
> -		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
> +		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FPR | rs,
>  					      len, 1);
>  		goto done_load;
>  	}
> @@ -213,7 +213,7 @@ static int kvmppc_emulate_fpr_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_emulate_fpr_store(struct kvm_vcpu *vcpu,
>  				    int rs, ulong addr, int ls_type)
>  {
>  	int emulated = EMULATE_FAIL;
> @@ -248,7 +248,7 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	if (r < 0) {
>  		kvmppc_inject_pf(vcpu, addr, true);
>  	} else if (r == EMULATE_DO_MMIO) {
> -		emulated = kvmppc_handle_store(run, vcpu, val, len, 1);
> +		emulated = kvmppc_handle_store(vcpu, val, len, 1);
>  	} else {
>  		emulated = EMULATE_DONE;
>  	}
> @@ -259,7 +259,7 @@ static int kvmppc_emulate_fpr_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_emulate_psq_load(struct kvm_vcpu *vcpu,
>  				   int rs, ulong addr, bool w, int i)
>  {
>  	int emulated = EMULATE_FAIL;
> @@ -279,12 +279,12 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		kvmppc_inject_pf(vcpu, addr, false);
>  		goto done_load;
>  	} else if ((r == EMULATE_DO_MMIO) && w) {
> -		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FPR | rs,
> +		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FPR | rs,
>  					      4, 1);
>  		vcpu->arch.qpr[rs] = tmp[1];
>  		goto done_load;
>  	} else if (r == EMULATE_DO_MMIO) {
> -		emulated = kvmppc_handle_load(run, vcpu, KVM_MMIO_REG_FQPR | rs,
> +		emulated = kvmppc_handle_load(vcpu, KVM_MMIO_REG_FQPR | rs,
>  					      8, 1);
>  		goto done_load;
>  	}
> @@ -302,7 +302,7 @@ static int kvmppc_emulate_psq_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_emulate_psq_store(struct kvm_vcpu *vcpu,
>  				    int rs, ulong addr, bool w, int i)
>  {
>  	int emulated = EMULATE_FAIL;
> @@ -318,10 +318,10 @@ static int kvmppc_emulate_psq_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	if (r < 0) {
>  		kvmppc_inject_pf(vcpu, addr, true);
>  	} else if ((r == EMULATE_DO_MMIO) && w) {
> -		emulated = kvmppc_handle_store(run, vcpu, tmp[0], 4, 1);
> +		emulated = kvmppc_handle_store(vcpu, tmp[0], 4, 1);
>  	} else if (r == EMULATE_DO_MMIO) {
>  		u64 val = ((u64)tmp[0] << 32) | tmp[1];
> -		emulated = kvmppc_handle_store(run, vcpu, val, 8, 1);
> +		emulated = kvmppc_handle_store(vcpu, val, 8, 1);
>  	} else {
>  		emulated = EMULATE_DONE;
>  	}
> @@ -618,7 +618,7 @@ static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc,
>  	return EMULATE_DONE;
>  }
>  
> -int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +int kvmppc_emulate_paired_single(struct kvm_vcpu *vcpu)
>  {
>  	u32 inst;
>  	enum emulation_result emulated = EMULATE_DONE;
> @@ -680,7 +680,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		int i = inst_get_field(inst, 17, 19);
>  
>  		addr += get_d_signext(inst);
> -		emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
> +		emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
>  		break;
>  	}
>  	case OP_PSQ_LU:
> @@ -690,7 +690,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		int i = inst_get_field(inst, 17, 19);
>  
>  		addr += get_d_signext(inst);
> -		emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
> +		emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
>  
>  		if (emulated == EMULATE_DONE)
>  			kvmppc_set_gpr(vcpu, ax_ra, addr);
> @@ -703,7 +703,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		int i = inst_get_field(inst, 17, 19);
>  
>  		addr += get_d_signext(inst);
> -		emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
> +		emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
>  		break;
>  	}
>  	case OP_PSQ_STU:
> @@ -713,7 +713,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		int i = inst_get_field(inst, 17, 19);
>  
>  		addr += get_d_signext(inst);
> -		emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
> +		emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
>  
>  		if (emulated == EMULATE_DONE)
>  			kvmppc_set_gpr(vcpu, ax_ra, addr);
> @@ -733,7 +733,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			int i = inst_get_field(inst, 22, 24);
>  
>  			addr += kvmppc_get_gpr(vcpu, ax_rb);
> -			emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
> +			emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
>  			break;
>  		}
>  		case OP_4X_PS_CMPO0:
> @@ -747,7 +747,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			int i = inst_get_field(inst, 22, 24);
>  
>  			addr += kvmppc_get_gpr(vcpu, ax_rb);
> -			emulated = kvmppc_emulate_psq_load(run, vcpu, ax_rd, addr, w, i);
> +			emulated = kvmppc_emulate_psq_load(vcpu, ax_rd, addr, w, i);
>  
>  			if (emulated == EMULATE_DONE)
>  				kvmppc_set_gpr(vcpu, ax_ra, addr);
> @@ -824,7 +824,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			int i = inst_get_field(inst, 22, 24);
>  
>  			addr += kvmppc_get_gpr(vcpu, ax_rb);
> -			emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
> +			emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
>  			break;
>  		}
>  		case OP_4XW_PSQ_STUX:
> @@ -834,7 +834,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			int i = inst_get_field(inst, 22, 24);
>  
>  			addr += kvmppc_get_gpr(vcpu, ax_rb);
> -			emulated = kvmppc_emulate_psq_store(run, vcpu, ax_rd, addr, w, i);
> +			emulated = kvmppc_emulate_psq_store(vcpu, ax_rd, addr, w, i);
>  
>  			if (emulated == EMULATE_DONE)
>  				kvmppc_set_gpr(vcpu, ax_ra, addr);
> @@ -922,7 +922,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
>  						   FPU_LS_SINGLE);
>  		break;
>  	}
> @@ -930,7 +930,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
>  						   FPU_LS_SINGLE);
>  
>  		if (emulated == EMULATE_DONE)
> @@ -941,7 +941,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
>  						   FPU_LS_DOUBLE);
>  		break;
>  	}
> @@ -949,7 +949,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd, addr,
>  						   FPU_LS_DOUBLE);
>  
>  		if (emulated == EMULATE_DONE)
> @@ -960,7 +960,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
>  						    FPU_LS_SINGLE);
>  		break;
>  	}
> @@ -968,7 +968,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
>  						    FPU_LS_SINGLE);
>  
>  		if (emulated == EMULATE_DONE)
> @@ -979,7 +979,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
>  						    FPU_LS_DOUBLE);
>  		break;
>  	}
> @@ -987,7 +987,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	{
>  		ulong addr = kvmppc_get_gpr(vcpu, ax_ra) + full_d;
>  
> -		emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd, addr,
> +		emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd, addr,
>  						    FPU_LS_DOUBLE);
>  
>  		if (emulated == EMULATE_DONE)
> @@ -1001,7 +1001,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0;
>  
>  			addr += kvmppc_get_gpr(vcpu, ax_rb);
> -			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
>  							   addr, FPU_LS_SINGLE);
>  			break;
>  		}
> @@ -1010,7 +1010,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
>  							   addr, FPU_LS_SINGLE);
>  
>  			if (emulated == EMULATE_DONE)
> @@ -1022,7 +1022,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
>  							   addr, FPU_LS_DOUBLE);
>  			break;
>  		}
> @@ -1031,7 +1031,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_load(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_load(vcpu, ax_rd,
>  							   addr, FPU_LS_DOUBLE);
>  
>  			if (emulated == EMULATE_DONE)
> @@ -1043,7 +1043,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
>  							    addr, FPU_LS_SINGLE);
>  			break;
>  		}
> @@ -1052,7 +1052,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
>  							    addr, FPU_LS_SINGLE);
>  
>  			if (emulated == EMULATE_DONE)
> @@ -1064,7 +1064,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
>  							    addr, FPU_LS_DOUBLE);
>  			break;
>  		}
> @@ -1073,7 +1073,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = kvmppc_get_gpr(vcpu, ax_ra) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
>  							    addr, FPU_LS_DOUBLE);
>  
>  			if (emulated == EMULATE_DONE)
> @@ -1085,7 +1085,7 @@ int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  			ulong addr = (ax_ra ? kvmppc_get_gpr(vcpu, ax_ra) : 0) +
>  				     kvmppc_get_gpr(vcpu, ax_rb);
>  
> -			emulated = kvmppc_emulate_fpr_store(run, vcpu, ax_rd,
> +			emulated = kvmppc_emulate_fpr_store(vcpu, ax_rd,
>  							    addr,
>  							    FPU_LS_SINGLE_LOW);
>  			break;
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index a0f6813f4560..ef54f917bdaf 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -700,7 +700,7 @@ static bool kvmppc_visible_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
>  	return kvm_is_visible_gfn(vcpu->kvm, gpa >> PAGE_SHIFT);
>  }
>  
> -int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_handle_pagefault(struct kvm_vcpu *vcpu,
>  			    ulong eaddr, int vec)
>  {
>  	bool data = (vec == BOOK3S_INTERRUPT_DATA_STORAGE);
> @@ -795,7 +795,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		/* The guest's PTE is not mapped yet. Map on the host */
>  		if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
>  			/* Exit KVM if mapping failed */
> -			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>  			return RESUME_HOST;
>  		}
>  		if (data)
> @@ -808,7 +808,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		vcpu->stat.mmio_exits++;
>  		vcpu->arch.paddr_accessed = pte.raddr;
>  		vcpu->arch.vaddr_accessed = pte.eaddr;
> -		r = kvmppc_emulate_mmio(run, vcpu);
> +		r = kvmppc_emulate_mmio(vcpu);
>  		if ( r == RESUME_HOST_NV )
>  			r = RESUME_HOST;
>  	}
> @@ -992,7 +992,7 @@ static void kvmppc_emulate_fac(struct kvm_vcpu *vcpu, ulong fac)
>  	enum emulation_result er = EMULATE_FAIL;
>  
>  	if (!(kvmppc_get_msr(vcpu) & MSR_PR))
> -		er = kvmppc_emulate_instruction(vcpu->run, vcpu);
> +		er = kvmppc_emulate_instruction(vcpu);
>  
>  	if ((er != EMULATE_DONE) && (er != EMULATE_AGAIN)) {
>  		/* Couldn't emulate, trigger interrupt in guest */
> @@ -1089,8 +1089,7 @@ static void kvmppc_clear_debug(struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> -static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
> -				  unsigned int exit_nr)
> +static int kvmppc_exit_pr_progint(struct kvm_vcpu *vcpu, unsigned int exit_nr)
>  {
>  	enum emulation_result er;
>  	ulong flags;
> @@ -1124,7 +1123,7 @@ static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	}
>  
>  	vcpu->stat.emulated_inst_exits++;
> -	er = kvmppc_emulate_instruction(run, vcpu);
> +	er = kvmppc_emulate_instruction(vcpu);
>  	switch (er) {
>  	case EMULATE_DONE:
>  		r = RESUME_GUEST_NV;
> @@ -1139,7 +1138,7 @@ static int kvmppc_exit_pr_progint(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		r = RESUME_GUEST;
>  		break;
>  	case EMULATE_DO_MMIO:
> -		run->exit_reason = KVM_EXIT_MMIO;
> +		vcpu->run->exit_reason = KVM_EXIT_MMIO;
>  		r = RESUME_HOST_NV;
>  		break;
>  	case EMULATE_EXIT_USER:
> @@ -1198,7 +1197,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		/* only care about PTEG not found errors, but leave NX alone */
>  		if (shadow_srr1 & 0x40000000) {
>  			int idx = srcu_read_lock(&vcpu->kvm->srcu);
> -			r = kvmppc_handle_pagefault(run, vcpu, kvmppc_get_pc(vcpu), exit_nr);
> +			r = kvmppc_handle_pagefault(vcpu, kvmppc_get_pc(vcpu), exit_nr);
>  			srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  			vcpu->stat.sp_instruc++;
>  		} else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
> @@ -1248,7 +1247,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		 */
>  		if (fault_dsisr & (DSISR_NOHPTE | DSISR_PROTFAULT)) {
>  			int idx = srcu_read_lock(&vcpu->kvm->srcu);
> -			r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr);
> +			r = kvmppc_handle_pagefault(vcpu, dar, exit_nr);
>  			srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  		} else {
>  			kvmppc_core_queue_data_storage(vcpu, dar, fault_dsisr);
> @@ -1292,7 +1291,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		break;
>  	case BOOK3S_INTERRUPT_PROGRAM:
>  	case BOOK3S_INTERRUPT_H_EMUL_ASSIST:
> -		r = kvmppc_exit_pr_progint(run, vcpu, exit_nr);
> +		r = kvmppc_exit_pr_progint(vcpu, exit_nr);
>  		break;
>  	case BOOK3S_INTERRUPT_SYSCALL:
>  	{
> @@ -1370,7 +1369,7 @@ int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			emul = kvmppc_get_last_inst(vcpu, INST_GENERIC,
>  						    &last_inst);
>  			if (emul == EMULATE_DONE)
> -				r = kvmppc_exit_pr_progint(run, vcpu, exit_nr);
> +				r = kvmppc_exit_pr_progint(vcpu, exit_nr);
>  			else
>  				r = RESUME_GUEST;
>  
> @@ -1825,8 +1824,9 @@ static void kvmppc_core_vcpu_free_pr(struct kvm_vcpu *vcpu)
>  	vfree(vcpu_book3s);
>  }
>  
> -static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> +static int kvmppc_vcpu_run_pr(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int ret;
>  #ifdef CONFIG_ALTIVEC
>  	unsigned long uninitialized_var(vrsave);
> @@ -1834,7 +1834,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  
>  	/* Check if we can run the vcpu at all */
>  	if (!vcpu->arch.sane) {
> -		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>  		ret = -EINVAL;
>  		goto out;
>  	}
> @@ -1861,7 +1861,7 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  
>  	kvmppc_fix_ee_before_entry();
>  
> -	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
> +	ret = __kvmppc_vcpu_run(run, vcpu);
>  
>  	kvmppc_clear_debug(vcpu);
>  
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index 6c18ea88fd25..26b3f5900b72 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -730,13 +730,14 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu)
>  	return r;
>  }
>  
> -int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> +int kvmppc_vcpu_run(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int ret, s;
>  	struct debug_reg debug;
>  
>  	if (!vcpu->arch.sane) {
> -		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>  		return -EINVAL;
>  	}
>  
> @@ -778,7 +779,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  	vcpu->arch.pgdir = vcpu->kvm->mm->pgd;
>  	kvmppc_fix_ee_before_entry();
>  
> -	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
> +	ret = __kvmppc_vcpu_run(run, vcpu);
>  
>  	/* No need for guest_exit. It's done in handle_exit.
>  	   We also get here with interrupts enabled. */
> @@ -800,11 +801,11 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>  	return ret;
>  }
>  
> -static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int emulation_exit(struct kvm_vcpu *vcpu)
>  {
>  	enum emulation_result er;
>  
> -	er = kvmppc_emulate_instruction(run, vcpu);
> +	er = kvmppc_emulate_instruction(vcpu);
>  	switch (er) {
>  	case EMULATE_DONE:
>  		/* don't overwrite subtypes, just account kvm_stats */
> @@ -821,8 +822,8 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		       __func__, vcpu->arch.regs.nip, vcpu->arch.last_inst);
>  		/* For debugging, encode the failing instruction and
>  		 * report it to userspace. */
> -		run->hw.hardware_exit_reason = ~0ULL << 32;
> -		run->hw.hardware_exit_reason |= vcpu->arch.last_inst;
> +		vcpu->run->hw.hardware_exit_reason = ~0ULL << 32;
> +		vcpu->run->hw.hardware_exit_reason |= vcpu->arch.last_inst;
>  		kvmppc_core_queue_program(vcpu, ESR_PIL);
>  		return RESUME_HOST;
>  
> @@ -834,8 +835,9 @@ static int emulation_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	}
>  }
>  
> -static int kvmppc_handle_debug(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +static int kvmppc_handle_debug(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	struct debug_reg *dbg_reg = &(vcpu->arch.dbg_reg);
>  	u32 dbsr = vcpu->arch.dbsr;
>  
> @@ -954,7 +956,7 @@ static void kvmppc_restart_interrupt(struct kvm_vcpu *vcpu,
>  	}
>  }
>  
> -static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_resume_inst_load(struct kvm_vcpu *vcpu,
>  				  enum emulation_result emulated, u32 last_inst)
>  {
>  	switch (emulated) {
> @@ -966,8 +968,8 @@ static int kvmppc_resume_inst_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		       __func__, vcpu->arch.regs.nip);
>  		/* For debugging, encode the failing instruction and
>  		 * report it to userspace. */
> -		run->hw.hardware_exit_reason = ~0ULL << 32;
> -		run->hw.hardware_exit_reason |= last_inst;
> +		vcpu->run->hw.hardware_exit_reason = ~0ULL << 32;
> +		vcpu->run->hw.hardware_exit_reason |= last_inst;
>  		kvmppc_core_queue_program(vcpu, ESR_PIL);
>  		return RESUME_HOST;
>  
> @@ -1024,7 +1026,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	run->ready_for_interrupt_injection = 1;
>  
>  	if (emulated != EMULATE_DONE) {
> -		r = kvmppc_resume_inst_load(run, vcpu, emulated, last_inst);
> +		r = kvmppc_resume_inst_load(vcpu, emulated, last_inst);
>  		goto out;
>  	}
>  
> @@ -1084,7 +1086,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		break;
>  
>  	case BOOKE_INTERRUPT_HV_PRIV:
> -		r = emulation_exit(run, vcpu);
> +		r = emulation_exit(vcpu);
>  		break;
>  
>  	case BOOKE_INTERRUPT_PROGRAM:
> @@ -1094,7 +1096,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			 * We are here because of an SW breakpoint instr,
>  			 * so lets return to host to handle.
>  			 */
> -			r = kvmppc_handle_debug(run, vcpu);
> +			r = kvmppc_handle_debug(vcpu);
>  			run->exit_reason = KVM_EXIT_DEBUG;
>  			kvmppc_account_exit(vcpu, DEBUG_EXITS);
>  			break;
> @@ -1115,7 +1117,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			break;
>  		}
>  
> -		r = emulation_exit(run, vcpu);
> +		r = emulation_exit(vcpu);
>  		break;
>  
>  	case BOOKE_INTERRUPT_FP_UNAVAIL:
> @@ -1282,7 +1284,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			 * actually RAM. */
>  			vcpu->arch.paddr_accessed = gpaddr;
>  			vcpu->arch.vaddr_accessed = eaddr;
> -			r = kvmppc_emulate_mmio(run, vcpu);
> +			r = kvmppc_emulate_mmio(vcpu);
>  			kvmppc_account_exit(vcpu, MMIO_EXITS);
>  		}
>  
> @@ -1333,7 +1335,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	}
>  
>  	case BOOKE_INTERRUPT_DEBUG: {
> -		r = kvmppc_handle_debug(run, vcpu);
> +		r = kvmppc_handle_debug(vcpu);
>  		if (r == RESUME_HOST)
>  			run->exit_reason = KVM_EXIT_DEBUG;
>  		kvmppc_account_exit(vcpu, DEBUG_EXITS);
> diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
> index 65b4d337d337..be9da96d9f06 100644
> --- a/arch/powerpc/kvm/booke.h
> +++ b/arch/powerpc/kvm/booke.h
> @@ -70,7 +70,7 @@ void kvmppc_set_tcr(struct kvm_vcpu *vcpu, u32 new_tcr);
>  void kvmppc_set_tsr_bits(struct kvm_vcpu *vcpu, u32 tsr_bits);
>  void kvmppc_clr_tsr_bits(struct kvm_vcpu *vcpu, u32 tsr_bits);
>  
> -int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_booke_emulate_op(struct kvm_vcpu *vcpu,
>                              unsigned int inst, int *advance);
>  int kvmppc_booke_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val);
>  int kvmppc_booke_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val);
> @@ -94,16 +94,12 @@ enum int_class {
>  
>  void kvmppc_set_pending_interrupt(struct kvm_vcpu *vcpu, enum int_class type);
>  
> -extern int kvmppc_core_emulate_op_e500(struct kvm_run *run,
> -				       struct kvm_vcpu *vcpu,
> +extern int kvmppc_core_emulate_op_e500(struct kvm_vcpu *vcpu,
>  				       unsigned int inst, int *advance);
>  extern int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn,
>  					  ulong spr_val);
>  extern int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn,
>  					  ulong *spr_val);
> -extern int kvmppc_core_emulate_op_e500(struct kvm_run *run,
> -				       struct kvm_vcpu *vcpu,
> -				       unsigned int inst, int *advance);
>  extern int kvmppc_core_emulate_mtspr_e500(struct kvm_vcpu *vcpu, int sprn,
>  					  ulong spr_val);
>  extern int kvmppc_core_emulate_mfspr_e500(struct kvm_vcpu *vcpu, int sprn,
> diff --git a/arch/powerpc/kvm/booke_emulate.c b/arch/powerpc/kvm/booke_emulate.c
> index 689ff5f90e9e..d8d38aca71bd 100644
> --- a/arch/powerpc/kvm/booke_emulate.c
> +++ b/arch/powerpc/kvm/booke_emulate.c
> @@ -39,7 +39,7 @@ static void kvmppc_emul_rfci(struct kvm_vcpu *vcpu)
>  	kvmppc_set_msr(vcpu, vcpu->arch.csrr1);
>  }
>  
> -int kvmppc_booke_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_booke_emulate_op(struct kvm_vcpu *vcpu,
>                              unsigned int inst, int *advance)
>  {
>  	int emulated = EMULATE_DONE;
> diff --git a/arch/powerpc/kvm/e500_emulate.c b/arch/powerpc/kvm/e500_emulate.c
> index 3d0d3ec5be96..64eb833e9f02 100644
> --- a/arch/powerpc/kvm/e500_emulate.c
> +++ b/arch/powerpc/kvm/e500_emulate.c
> @@ -83,16 +83,16 @@ static int kvmppc_e500_emul_msgsnd(struct kvm_vcpu *vcpu, int rb)
>  }
>  #endif
>  
> -static int kvmppc_e500_emul_ehpriv(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int kvmppc_e500_emul_ehpriv(struct kvm_vcpu *vcpu,
>  				   unsigned int inst, int *advance)
>  {
>  	int emulated = EMULATE_DONE;
>  
>  	switch (get_oc(inst)) {
>  	case EHPRIV_OC_DEBUG:
> -		run->exit_reason = KVM_EXIT_DEBUG;
> -		run->debug.arch.address = vcpu->arch.regs.nip;
> -		run->debug.arch.status = 0;
> +		vcpu->run->exit_reason = KVM_EXIT_DEBUG;
> +		vcpu->run->debug.arch.address = vcpu->arch.regs.nip;
> +		vcpu->run->debug.arch.status = 0;
>  		kvmppc_account_exit(vcpu, DEBUG_EXITS);
>  		emulated = EMULATE_EXIT_USER;
>  		*advance = 0;
> @@ -125,7 +125,7 @@ static int kvmppc_e500_emul_mftmr(struct kvm_vcpu *vcpu, unsigned int inst,
>  	return EMULATE_FAIL;
>  }
>  
> -int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_core_emulate_op_e500(struct kvm_vcpu *vcpu,
>  				unsigned int inst, int *advance)
>  {
>  	int emulated = EMULATE_DONE;
> @@ -182,8 +182,7 @@ int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			break;
>  
>  		case XOP_EHPRIV:
> -			emulated = kvmppc_e500_emul_ehpriv(run, vcpu, inst,
> -							   advance);
> +			emulated = kvmppc_e500_emul_ehpriv(vcpu, inst, advance);
>  			break;
>  
>  		default:
> @@ -197,7 +196,7 @@ int kvmppc_core_emulate_op_e500(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	}
>  
>  	if (emulated == EMULATE_FAIL)
> -		emulated = kvmppc_booke_emulate_op(run, vcpu, inst, advance);
> +		emulated = kvmppc_booke_emulate_op(vcpu, inst, advance);
>  
>  	return emulated;
>  }
> diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
> index 6fca38ca791f..ee1147c98cd8 100644
> --- a/arch/powerpc/kvm/emulate.c
> +++ b/arch/powerpc/kvm/emulate.c
> @@ -191,7 +191,7 @@ static int kvmppc_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
>  
>  /* XXX Should probably auto-generate instruction decoding for a particular core
>   * from opcode tables in the future. */
> -int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +int kvmppc_emulate_instruction(struct kvm_vcpu *vcpu)
>  {
>  	u32 inst;
>  	int rs, rt, sprn;
> @@ -270,9 +270,9 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		 * these are illegal instructions.
>  		 */
>  		if (inst == KVMPPC_INST_SW_BREAKPOINT) {
> -			run->exit_reason = KVM_EXIT_DEBUG;
> -			run->debug.arch.status = 0;
> -			run->debug.arch.address = kvmppc_get_pc(vcpu);
> +			vcpu->run->exit_reason = KVM_EXIT_DEBUG;
> +			vcpu->run->debug.arch.status = 0;
> +			vcpu->run->debug.arch.address = kvmppc_get_pc(vcpu);
>  			emulated = EMULATE_EXIT_USER;
>  			advance = 0;
>  		} else
> @@ -285,7 +285,7 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  	}
>  
>  	if (emulated == EMULATE_FAIL) {
> -		emulated = vcpu->kvm->arch.kvm_ops->emulate_op(run, vcpu, inst,
> +		emulated = vcpu->kvm->arch.kvm_ops->emulate_op(vcpu, inst,
>  							       &advance);
>  		if (emulated == EMULATE_AGAIN) {
>  			advance = 0;
> diff --git a/arch/powerpc/kvm/emulate_loadstore.c b/arch/powerpc/kvm/emulate_loadstore.c
> index 1139bc56e004..e8a47c84d77d 100644
> --- a/arch/powerpc/kvm/emulate_loadstore.c
> +++ b/arch/powerpc/kvm/emulate_loadstore.c
> @@ -71,7 +71,6 @@ static bool kvmppc_check_altivec_disabled(struct kvm_vcpu *vcpu)
>   */
>  int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  {
> -	struct kvm_run *run = vcpu->run;
>  	u32 inst;
>  	enum emulation_result emulated = EMULATE_FAIL;
>  	int advance = 1;
> @@ -104,10 +103,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  			int instr_byte_swap = op.type & BYTEREV;
>  
>  			if (op.type & SIGNEXT)
> -				emulated = kvmppc_handle_loads(run, vcpu,
> +				emulated = kvmppc_handle_loads(vcpu,
>  						op.reg, size, !instr_byte_swap);
>  			else
> -				emulated = kvmppc_handle_load(run, vcpu,
> +				emulated = kvmppc_handle_load(vcpu,
>  						op.reg, size, !instr_byte_swap);
>  
>  			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
> @@ -124,10 +123,10 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  				vcpu->arch.mmio_sp64_extend = 1;
>  
>  			if (op.type & SIGNEXT)
> -				emulated = kvmppc_handle_loads(run, vcpu,
> +				emulated = kvmppc_handle_loads(vcpu,
>  					     KVM_MMIO_REG_FPR|op.reg, size, 1);
>  			else
> -				emulated = kvmppc_handle_load(run, vcpu,
> +				emulated = kvmppc_handle_load(vcpu,
>  					     KVM_MMIO_REG_FPR|op.reg, size, 1);
>  
>  			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
> @@ -164,12 +163,12 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  
>  			if (size == 16) {
>  				vcpu->arch.mmio_vmx_copy_nums = 2;
> -				emulated = kvmppc_handle_vmx_load(run,
> -						vcpu, KVM_MMIO_REG_VMX|op.reg,
> +				emulated = kvmppc_handle_vmx_load(vcpu,
> +						KVM_MMIO_REG_VMX|op.reg,
>  						8, 1);
>  			} else {
>  				vcpu->arch.mmio_vmx_copy_nums = 1;
> -				emulated = kvmppc_handle_vmx_load(run, vcpu,
> +				emulated = kvmppc_handle_vmx_load(vcpu,
>  						KVM_MMIO_REG_VMX|op.reg,
>  						size, 1);
>  			}
> @@ -217,7 +216,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  				io_size_each = op.element_size;
>  			}
>  
> -			emulated = kvmppc_handle_vsx_load(run, vcpu,
> +			emulated = kvmppc_handle_vsx_load(vcpu,
>  					KVM_MMIO_REG_VSX|op.reg, io_size_each,
>  					1, op.type & SIGNEXT);
>  			break;
> @@ -227,8 +226,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  			/* if need byte reverse, op.val has been reversed by
>  			 * analyse_instr().
>  			 */
> -			emulated = kvmppc_handle_store(run, vcpu, op.val,
> -					size, 1);
> +			emulated = kvmppc_handle_store(vcpu, op.val, size, 1);
>  
>  			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
>  				kvmppc_set_gpr(vcpu, op.update_reg, op.ea);
> @@ -250,7 +248,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  			if (op.type & FPCONV)
>  				vcpu->arch.mmio_sp64_extend = 1;
>  
> -			emulated = kvmppc_handle_store(run, vcpu,
> +			emulated = kvmppc_handle_store(vcpu,
>  					VCPU_FPR(vcpu, op.reg), size, 1);
>  
>  			if ((op.type & UPDATE) && (emulated != EMULATE_FAIL))
> @@ -290,12 +288,12 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  
>  			if (size == 16) {
>  				vcpu->arch.mmio_vmx_copy_nums = 2;
> -				emulated = kvmppc_handle_vmx_store(run,
> -						vcpu, op.reg, 8, 1);
> +				emulated = kvmppc_handle_vmx_store(vcpu,
> +						op.reg, 8, 1);
>  			} else {
>  				vcpu->arch.mmio_vmx_copy_nums = 1;
> -				emulated = kvmppc_handle_vmx_store(run,
> -						vcpu, op.reg, size, 1);
> +				emulated = kvmppc_handle_vmx_store(vcpu,
> +						op.reg, size, 1);
>  			}
>  
>  			break;
> @@ -338,7 +336,7 @@ int kvmppc_emulate_loadstore(struct kvm_vcpu *vcpu)
>  				io_size_each = op.element_size;
>  			}
>  
> -			emulated = kvmppc_handle_vsx_store(run, vcpu,
> +			emulated = kvmppc_handle_vsx_store(vcpu,
>  					op.reg, io_size_each, 1);
>  			break;
>  		}
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 7e24691e138a..de4c317ad5f1 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -279,7 +279,7 @@ int kvmppc_sanity_check(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvmppc_sanity_check);
>  
> -int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
> +int kvmppc_emulate_mmio(struct kvm_vcpu *vcpu)
>  {
>  	enum emulation_result er;
>  	int r;
> @@ -295,7 +295,7 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
>  		r = RESUME_GUEST;
>  		break;
>  	case EMULATE_DO_MMIO:
> -		run->exit_reason = KVM_EXIT_MMIO;
> +		vcpu->run->exit_reason = KVM_EXIT_MMIO;
>  		/* We must reload nonvolatiles because "update" load/store
>  		 * instructions modify register state. */
>  		/* Future optimization: only reload non-volatiles if they were
> @@ -1106,9 +1106,9 @@ static inline u32 dp_to_sp(u64 fprd)
>  #define dp_to_sp(x)	(x)
>  #endif /* CONFIG_PPC_FPU */
>  
> -static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
> -                                      struct kvm_run *run)
> +static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	u64 uninitialized_var(gpr);
>  
>  	if (run->mmio.len > sizeof(gpr)) {
> @@ -1218,10 +1218,11 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
>  	}
>  }
>  
> -static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +static int __kvmppc_handle_load(struct kvm_vcpu *vcpu,
>  				unsigned int rt, unsigned int bytes,
>  				int is_default_endian, int sign_extend)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	int idx, ret;
>  	bool host_swabbed;
>  
> @@ -1255,7 +1256,7 @@ static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  
>  	if (!ret) {
> -		kvmppc_complete_mmio_load(vcpu, run);
> +		kvmppc_complete_mmio_load(vcpu);
>  		vcpu->mmio_needed = 0;
>  		return EMULATE_DONE;
>  	}
> @@ -1263,24 +1264,24 @@ static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return EMULATE_DO_MMIO;
>  }
>  
> -int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_load(struct kvm_vcpu *vcpu,
>  		       unsigned int rt, unsigned int bytes,
>  		       int is_default_endian)
>  {
> -	return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 0);
> +	return __kvmppc_handle_load(vcpu, rt, bytes, is_default_endian, 0);
>  }
>  EXPORT_SYMBOL_GPL(kvmppc_handle_load);
>  
>  /* Same as above, but sign extends */
> -int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_loads(struct kvm_vcpu *vcpu,
>  			unsigned int rt, unsigned int bytes,
>  			int is_default_endian)
>  {
> -	return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 1);
> +	return __kvmppc_handle_load(vcpu, rt, bytes, is_default_endian, 1);
>  }
>  
>  #ifdef CONFIG_VSX
> -int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_vsx_load(struct kvm_vcpu *vcpu,
>  			unsigned int rt, unsigned int bytes,
>  			int is_default_endian, int mmio_sign_extend)
>  {
> @@ -1291,13 +1292,13 @@ int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		return EMULATE_FAIL;
>  
>  	while (vcpu->arch.mmio_vsx_copy_nums) {
> -		emulated = __kvmppc_handle_load(run, vcpu, rt, bytes,
> +		emulated = __kvmppc_handle_load(vcpu, rt, bytes,
>  			is_default_endian, mmio_sign_extend);
>  
>  		if (emulated != EMULATE_DONE)
>  			break;
>  
> -		vcpu->arch.paddr_accessed += run->mmio.len;
> +		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
>  
>  		vcpu->arch.mmio_vsx_copy_nums--;
>  		vcpu->arch.mmio_vsx_offset++;
> @@ -1306,9 +1307,10 @@ int kvmppc_handle_vsx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  }
>  #endif /* CONFIG_VSX */
>  
> -int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_store(struct kvm_vcpu *vcpu,
>  			u64 val, unsigned int bytes, int is_default_endian)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	void *data = run->mmio.data;
>  	int idx, ret;
>  	bool host_swabbed;
> @@ -1422,7 +1424,7 @@ static inline int kvmppc_get_vsr_data(struct kvm_vcpu *vcpu, int rs, u64 *val)
>  	return result;
>  }
>  
> -int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_vsx_store(struct kvm_vcpu *vcpu,
>  			int rs, unsigned int bytes, int is_default_endian)
>  {
>  	u64 val;
> @@ -1438,13 +1440,13 @@ int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		if (kvmppc_get_vsr_data(vcpu, rs, &val) == -1)
>  			return EMULATE_FAIL;
>  
> -		emulated = kvmppc_handle_store(run, vcpu,
> +		emulated = kvmppc_handle_store(vcpu,
>  			 val, bytes, is_default_endian);
>  
>  		if (emulated != EMULATE_DONE)
>  			break;
>  
> -		vcpu->arch.paddr_accessed += run->mmio.len;
> +		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
>  
>  		vcpu->arch.mmio_vsx_copy_nums--;
>  		vcpu->arch.mmio_vsx_offset++;
> @@ -1453,19 +1455,19 @@ int kvmppc_handle_vsx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu,
> -			struct kvm_run *run)
> +static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	enum emulation_result emulated = EMULATE_FAIL;
>  	int r;
>  
>  	vcpu->arch.paddr_accessed += run->mmio.len;
>  
>  	if (!vcpu->mmio_is_write) {
> -		emulated = kvmppc_handle_vsx_load(run, vcpu, vcpu->arch.io_gpr,
> +		emulated = kvmppc_handle_vsx_load(vcpu, vcpu->arch.io_gpr,
>  			 run->mmio.len, 1, vcpu->arch.mmio_sign_extend);
>  	} else {
> -		emulated = kvmppc_handle_vsx_store(run, vcpu,
> +		emulated = kvmppc_handle_vsx_store(vcpu,
>  			 vcpu->arch.io_gpr, run->mmio.len, 1);
>  	}
>  
> @@ -1489,7 +1491,7 @@ static int kvmppc_emulate_mmio_vsx_loadstore(struct kvm_vcpu *vcpu,
>  #endif /* CONFIG_VSX */
>  
>  #ifdef CONFIG_ALTIVEC
> -int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
>  		unsigned int rt, unsigned int bytes, int is_default_endian)
>  {
>  	enum emulation_result emulated = EMULATE_DONE;
> @@ -1498,13 +1500,13 @@ int kvmppc_handle_vmx_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  		return EMULATE_FAIL;
>  
>  	while (vcpu->arch.mmio_vmx_copy_nums) {
> -		emulated = __kvmppc_handle_load(run, vcpu, rt, bytes,
> +		emulated = __kvmppc_handle_load(vcpu, rt, bytes,
>  				is_default_endian, 0);
>  
>  		if (emulated != EMULATE_DONE)
>  			break;
>  
> -		vcpu->arch.paddr_accessed += run->mmio.len;
> +		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
>  		vcpu->arch.mmio_vmx_copy_nums--;
>  		vcpu->arch.mmio_vmx_offset++;
>  	}
> @@ -1584,7 +1586,7 @@ int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val)
>  	return result;
>  }
>  
> -int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
> +int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
>  		unsigned int rs, unsigned int bytes, int is_default_endian)
>  {
>  	u64 val = 0;
> @@ -1619,12 +1621,12 @@ int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			return EMULATE_FAIL;
>  		}
>  
> -		emulated = kvmppc_handle_store(run, vcpu, val, bytes,
> +		emulated = kvmppc_handle_store(vcpu, val, bytes,
>  				is_default_endian);
>  		if (emulated != EMULATE_DONE)
>  			break;
>  
> -		vcpu->arch.paddr_accessed += run->mmio.len;
> +		vcpu->arch.paddr_accessed += vcpu->run->mmio.len;
>  		vcpu->arch.mmio_vmx_copy_nums--;
>  		vcpu->arch.mmio_vmx_offset++;
>  	}
> @@ -1632,19 +1634,19 @@ int kvmppc_handle_vmx_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  	return emulated;
>  }
>  
> -static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu,
> -		struct kvm_run *run)
> +static int kvmppc_emulate_mmio_vmx_loadstore(struct kvm_vcpu *vcpu)
>  {
> +	struct kvm_run *run = vcpu->run;
>  	enum emulation_result emulated = EMULATE_FAIL;
>  	int r;
>  
>  	vcpu->arch.paddr_accessed += run->mmio.len;
>  
>  	if (!vcpu->mmio_is_write) {
> -		emulated = kvmppc_handle_vmx_load(run, vcpu,
> +		emulated = kvmppc_handle_vmx_load(vcpu,
>  				vcpu->arch.io_gpr, run->mmio.len, 1);
>  	} else {
> -		emulated = kvmppc_handle_vmx_store(run, vcpu,
> +		emulated = kvmppc_handle_vmx_store(vcpu,
>  				vcpu->arch.io_gpr, run->mmio.len, 1);
>  	}
>  
> @@ -1774,7 +1776,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  	if (vcpu->mmio_needed) {
>  		vcpu->mmio_needed = 0;
>  		if (!vcpu->mmio_is_write)
> -			kvmppc_complete_mmio_load(vcpu, run);
> +			kvmppc_complete_mmio_load(vcpu);
>  #ifdef CONFIG_VSX
>  		if (vcpu->arch.mmio_vsx_copy_nums > 0) {
>  			vcpu->arch.mmio_vsx_copy_nums--;
> @@ -1782,7 +1784,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		}
>  
>  		if (vcpu->arch.mmio_vsx_copy_nums > 0) {
> -			r = kvmppc_emulate_mmio_vsx_loadstore(vcpu, run);
> +			r = kvmppc_emulate_mmio_vsx_loadstore(vcpu);
>  			if (r == RESUME_HOST) {
>  				vcpu->mmio_needed = 1;
>  				goto out;
> @@ -1796,7 +1798,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		}
>  
>  		if (vcpu->arch.mmio_vmx_copy_nums > 0) {
> -			r = kvmppc_emulate_mmio_vmx_loadstore(vcpu, run);
> +			r = kvmppc_emulate_mmio_vmx_loadstore(vcpu);
>  			if (r == RESUME_HOST) {
>  				vcpu->mmio_needed = 1;
>  				goto out;
> @@ -1829,7 +1831,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  	if (run->immediate_exit)
>  		r = -EINTR;
>  	else
> -		r = kvmppc_vcpu_run(run, vcpu);
> +		r = kvmppc_vcpu_run(vcpu);
>  
>  	kvm_sigset_deactivate(vcpu);
>  
> diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
> index 8a1e3b0047f1..4a61a971c34e 100644
> --- a/arch/powerpc/kvm/trace_hv.h
> +++ b/arch/powerpc/kvm/trace_hv.h
> @@ -472,9 +472,9 @@ TRACE_EVENT(kvmppc_run_vcpu_enter,
>  );
>  
>  TRACE_EVENT(kvmppc_run_vcpu_exit,
> -	TP_PROTO(struct kvm_vcpu *vcpu, struct kvm_run *run),
> +	TP_PROTO(struct kvm_vcpu *vcpu),
>  
> -	TP_ARGS(vcpu, run),
> +	TP_ARGS(vcpu),
>  
>  	TP_STRUCT__entry(
>  		__field(int,		vcpu_id)
> @@ -484,7 +484,7 @@ TRACE_EVENT(kvmppc_run_vcpu_exit,
>  
>  	TP_fast_assign(
>  		__entry->vcpu_id  = vcpu->vcpu_id;
> -		__entry->exit     = run->exit_reason;
> +		__entry->exit     = vcpu->run->exit_reason;
>  		__entry->ret      = vcpu->arch.ret;
>  	),

'git grep kvm_run arch/powerpc/kvm/' tells me the result is correct,
so in case this even compiles, feel free to add

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly
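
For readers skimming the thread, the whole series reduces to the
pattern below, distilled into a minimal sketch from the PPC diff above
(the function and its signature are taken from that diff; the body is
abbreviated):

    /* before: both pointers are passed, although run == vcpu->run */
    int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
                            u64 val, unsigned int bytes, int is_default_endian);

    /* after: take only the vcpu and derive the run pointer locally */
    int kvmppc_handle_store(struct kvm_vcpu *vcpu,
                            u64 val, unsigned int bytes, int is_default_endian)
    {
            struct kvm_run *run = vcpu->run;

            /* ... the body keeps using run->mmio.data etc. unchanged ... */
    }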



* Re: [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (6 preceding siblings ...)
  2020-04-27  4:35 ` [PATCH v4 7/7] KVM: MIPS: clean up redundant kvm_run parameters in assembly Tianjia Zhang
@ 2020-05-05  4:15 ` Tianjia Zhang
  2020-06-23  9:42 ` Paolo Bonzini
  8 siblings, 0 replies; 29+ messages in thread
From: Tianjia Zhang @ 2020-05-05  4:15 UTC (permalink / raw)
  To: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

Paolo Bonzini, any opinion on this?

Thanks and best,
Tianjia

On 2020/4/27 12:35, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
> 
> This series of patches has completely cleaned the architecture of
> arm64, mips, ppc, and s390 (no such redundant code on x86). Due to
> the large number of modified codes, a separate patch is made for each
> platform. On the ppc platform, there is also a redundant structure
> pointer of 'kvm_run' in 'vcpu_arch', which has also been cleaned
> separately.
> 
> ---
> v4 change:
>    mips: fixes two errors in entry.c.
> 
> v3 change:
>    Keep the existing `vcpu->run` in the function body unchanged.
> 
> v2 change:
>    s390 retains the original variable name and minimizes modification.
> 
> Tianjia Zhang (7):
>    KVM: s390: clean up redundant 'kvm_run' parameters
>    KVM: arm64: clean up redundant 'kvm_run' parameters
>    KVM: PPC: Remove redundant kvm_run from vcpu_arch
>    KVM: PPC: clean up redundant 'kvm_run' parameters
>    KVM: PPC: clean up redundant kvm_run parameters in assembly
>    KVM: MIPS: clean up redundant 'kvm_run' parameters
>    KVM: MIPS: clean up redundant kvm_run parameters in assembly
> 
>   arch/arm64/include/asm/kvm_coproc.h      |  12 +--
>   arch/arm64/include/asm/kvm_host.h        |  11 +--
>   arch/arm64/include/asm/kvm_mmu.h         |   2 +-
>   arch/arm64/kvm/handle_exit.c             |  36 +++----
>   arch/arm64/kvm/sys_regs.c                |  13 ++-
>   arch/mips/include/asm/kvm_host.h         |  32 +------
>   arch/mips/kvm/emulate.c                  |  59 ++++--------
>   arch/mips/kvm/entry.c                    |  21 ++---
>   arch/mips/kvm/mips.c                     |  14 +--
>   arch/mips/kvm/trap_emul.c                | 114 ++++++++++-------------
>   arch/mips/kvm/vz.c                       |  26 ++----
>   arch/powerpc/include/asm/kvm_book3s.h    |  16 ++--
>   arch/powerpc/include/asm/kvm_host.h      |   1 -
>   arch/powerpc/include/asm/kvm_ppc.h       |  27 +++---
>   arch/powerpc/kvm/book3s.c                |   4 +-
>   arch/powerpc/kvm/book3s.h                |   2 +-
>   arch/powerpc/kvm/book3s_64_mmu_hv.c      |  12 +--
>   arch/powerpc/kvm/book3s_64_mmu_radix.c   |   4 +-
>   arch/powerpc/kvm/book3s_emulate.c        |  10 +-
>   arch/powerpc/kvm/book3s_hv.c             |  64 ++++++-------
>   arch/powerpc/kvm/book3s_hv_nested.c      |  12 +--
>   arch/powerpc/kvm/book3s_interrupts.S     |  17 ++--
>   arch/powerpc/kvm/book3s_paired_singles.c |  72 +++++++-------
>   arch/powerpc/kvm/book3s_pr.c             |  33 ++++---
>   arch/powerpc/kvm/booke.c                 |  39 ++++----
>   arch/powerpc/kvm/booke.h                 |   8 +-
>   arch/powerpc/kvm/booke_emulate.c         |   2 +-
>   arch/powerpc/kvm/booke_interrupts.S      |   9 +-
>   arch/powerpc/kvm/bookehv_interrupts.S    |  10 +-
>   arch/powerpc/kvm/e500_emulate.c          |  15 ++-
>   arch/powerpc/kvm/emulate.c               |  10 +-
>   arch/powerpc/kvm/emulate_loadstore.c     |  32 +++----
>   arch/powerpc/kvm/powerpc.c               |  72 +++++++-------
>   arch/powerpc/kvm/trace_hv.h              |   6 +-
>   arch/s390/kvm/kvm-s390.c                 |  23 +++--
>   virt/kvm/arm/arm.c                       |   6 +-
>   virt/kvm/arm/mmio.c                      |  11 ++-
>   virt/kvm/arm/mmu.c                       |   5 +-
>   38 files changed, 392 insertions(+), 470 deletions(-)
> 


* Re: [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 2/7] KVM: arm64: " Tianjia Zhang
  2020-04-29 12:07   ` Vitaly Kuznetsov
@ 2020-05-05  8:39   ` Marc Zyngier
  2020-05-07 13:04     ` Tianjia Zhang
  1 sibling, 1 reply; 29+ messages in thread
From: Marc Zyngier @ 2020-05-05  8:39 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

Hi Tianjia,

On 2020-04-27 05:35, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 
> 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
> 
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

On the face of it, this looks OK, but I haven't tried to run the
resulting kernel. I'm not opposed to taking this patch *if* there
is an agreement across architectures to take the series (I value
consistency over the janitorial exercise).

Another thing is that this is going to conflict with the set of
patches that move the KVM/arm code back where it belongs 
(arch/arm64/kvm),
so I'd probably cherry-pick that one directly.

Thanks,

         M.


-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters
  2020-05-05  8:39   ` Marc Zyngier
@ 2020-05-07 13:04     ` Tianjia Zhang
  0 siblings, 0 replies; 29+ messages in thread
From: Tianjia Zhang @ 2020-05-07 13:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: pbonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel



On 2020/5/5 16:39, Marc Zyngier wrote:
> Hi Tianjia,
> 
> On 2020-04-27 05:35, Tianjia Zhang wrote:
>> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
>> structure. For historical reasons, many kvm-related function parameters
>> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
>> patch does a unified cleanup of these remaining redundant parameters.
>>
>> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> 
> On the face of it, this looks OK, but I haven't tried to run the
> resulting kernel. I'm not opposed to taking this patch *if* there
> is an agreement across architectures to take the series (I value
> consistency over the janitorial exercise).
> 
> Another thing is that this is going to conflict with the set of
> patches that move the KVM/arm code back where it belongs (arch/arm64/kvm),
> so I'd probably cherry-pick that one directly.
> 
> Thanks,
> 
>          M.
> 

Do I need to submit this set of patches separately for each
architecture, or could it be merged at once? If necessary, I will
resubmit based on the latest mainline.

Thanks,
Tianjia


* Re: [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch
  2020-04-27  4:35 ` [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch Tianjia Zhang
  2020-04-29 12:23   ` Vitaly Kuznetsov
@ 2020-05-26  4:36   ` Paul Mackerras
  2020-05-27  4:20   ` Paul Mackerras
  2 siblings, 0 replies; 29+ messages in thread
From: Paul Mackerras @ 2020-05-26  4:36 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On Mon, Apr 27, 2020 at 12:35:10PM +0800, Tianjia Zhang wrote:
> The 'kvm_run' field already exists in the 'vcpu' structure, which
> is the same structure as the 'kvm_run' in the 'vcpu_arch' and
> should be deleted.
> 
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

This looks fine.

I assume each architecture sub-maintainer is taking the relevant
patches from this series via their tree - is that right?

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
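
As context for what this patch deletes, a minimal sketch of the
duplication (the generic run pointer is real kvm_host.h layout; the
arch-side member is shown as described in the commit message, so treat
its exact placement as illustrative):

    struct kvm_vcpu {
            /* ... */
            struct kvm_run *run;            /* generic KVM, set at vcpu creation */
            struct kvm_vcpu_arch arch;
    };

    struct kvm_vcpu_arch {
            /* ... */
            struct kvm_run *kvm_run;        /* PPC-only duplicate, removed here */
            /* ... */
    };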


* Re: [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 ` [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters Tianjia Zhang
  2020-04-29 12:32   ` Vitaly Kuznetsov
@ 2020-05-26  5:49   ` Paul Mackerras
  1 sibling, 0 replies; 29+ messages in thread
From: Paul Mackerras @ 2020-05-26  5:49 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On Mon, Apr 27, 2020 at 12:35:11PM +0800, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
> 
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

This looks OK, though possibly a little larger than it needs to be
because of variable name changes (kvm_run -> run) that aren't strictly
necessary.
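
A sketch of the lower-churn alternative: keep the old local name so
the function body does not change at all (the function here is a
hypothetical example; the v2 notes describe s390 taking this approach):

    static int handle_exit_sketch(struct kvm_vcpu *vcpu)
    {
            struct kvm_run *kvm_run = vcpu->run;    /* keep the old name */

            /* ... existing code keeps referring to kvm_run ... */
            return 0;
    }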

Reviewed-by: Paul Mackerras <paulus@ozlabs.org>


* Re: [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly
  2020-04-27  4:35 ` [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly Tianjia Zhang
@ 2020-05-26  5:59   ` Paul Mackerras
  2020-07-13  3:07     ` Tianjia Zhang
  0 siblings, 1 reply; 29+ messages in thread
From: Paul Mackerras @ 2020-05-26  5:59 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On Mon, Apr 27, 2020 at 12:35:12PM +0800, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.

Some of these changes don't look completely correct to me, see below.
If you're expecting these patches to go through my tree, I can fix up
the patch and commit it (with you as author), noting the changes I
made in the commit message.  Do you want me to do that?

> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> index f7ad99d972ce..0eff749d8027 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -55,8 +55,7 @@
>   ****************************************************************************/
>  
>  /* Registers:
> - *  r3: kvm_run pointer
> - *  r4: vcpu pointer
> + *  r3: vcpu pointer
>   */
>  _GLOBAL(__kvmppc_vcpu_run)
>  
> @@ -68,8 +67,8 @@ kvm_start_entry:
>  	/* Save host state to the stack */
>  	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
>  
> -	/* Save r3 (kvm_run) and r4 (vcpu) */
> -	SAVE_2GPRS(3, r1)
> +	/* Save r3 (vcpu) */
> +	SAVE_GPR(3, r1)
>  
>  	/* Save non-volatile registers (r14 - r31) */
>  	SAVE_NVGPRS(r1)
> @@ -82,11 +81,11 @@ kvm_start_entry:
>  	PPC_STL	r0, _LINK(r1)
>  
>  	/* Load non-volatile guest state from the vcpu */
> -	VCPU_LOAD_NVGPRS(r4)
> +	VCPU_LOAD_NVGPRS(r3)
>  
>  kvm_start_lightweight:
>  	/* Copy registers into shadow vcpu so we can access them in real mode */
> -	mr	r3, r4
> +	mr	r4, r3

This mr doesn't seem necessary.

>  	bl	FUNC(kvmppc_copy_to_svcpu)
>  	nop
>  	REST_GPR(4, r1)

This should be loading r4 from GPR3(r1), not GPR4(r1) - which is what
REST_GPR(4, r1) will do.

Then, in the file but not in the patch context, there is this line:

	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */

where once again GPR4 needs to be GPR3.

> @@ -191,10 +190,10 @@ after_sprg3_load:
>  	PPC_STL	r31, VCPU_GPR(R31)(r7)
>  
>  	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */

The comment should be modified to say "2nd" instead of "3rd",
otherwise it is confusing.
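
Putting these points together, the corrected sequence would look
roughly as follows (a sketch assembled from the comments above, not
copied from a final patch):

    kvm_start_lightweight:
            /* vcpu is already in r3, so no mr is needed */
            bl      FUNC(kvmppc_copy_to_svcpu)
            nop
            PPC_LL  r4, GPR3(r1)    /* reload vcpu from its save slot */
            /* ... */
            PPC_LL  r3, GPR3(r1)    /* vcpu pointer */
            /* ... */
            /* Pass the exit number as 2nd argument to kvmppc_handle_exit */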

The rest of the patch looks OK.

Paul.


* Re: [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch
  2020-04-27  4:35 ` [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch Tianjia Zhang
  2020-04-29 12:23   ` Vitaly Kuznetsov
  2020-05-26  4:36   ` Paul Mackerras
@ 2020-05-27  4:20   ` Paul Mackerras
  2020-05-27  5:23     ` Tianjia Zhang
  2 siblings, 1 reply; 29+ messages in thread
From: Paul Mackerras @ 2020-05-27  4:20 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On Mon, Apr 27, 2020 at 12:35:10PM +0800, Tianjia Zhang wrote:
> The 'kvm_run' field already exists in the 'vcpu' structure, which
> is the same structure as the 'kvm_run' in the 'vcpu_arch' and
> should be deleted.
> 
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

Thanks, patches 3 and 4 of this series applied to my kvm-ppc-next branch.

Paul.


* Re: [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch
  2020-05-27  4:20   ` Paul Mackerras
@ 2020-05-27  5:23     ` Tianjia Zhang
  0 siblings, 0 replies; 29+ messages in thread
From: Tianjia Zhang @ 2020-05-27  5:23 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel



On 2020/5/27 12:20, Paul Mackerras wrote:
> On Mon, Apr 27, 2020 at 12:35:10PM +0800, Tianjia Zhang wrote:
>> The 'kvm_run' field already exists in the 'vcpu' structure, which
>> is the same structure as the 'kvm_run' in the 'vcpu_arch' and
>> should be deleted.
>>
>> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> 
> Thanks, patches 3 and 4 of this series applied to my kvm-ppc-next branch.
> 
> Paul.
> 

Thanks for your suggestion. For 5/7, I will submit a new version of the patch.

Thanks,
Tianjia


* Re: [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters
  2020-04-27  5:40   ` Huacai Chen
@ 2020-05-27  6:24     ` Tianjia Zhang
  2020-05-29  9:48       ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Tianjia Zhang @ 2020-05-27  6:24 UTC (permalink / raw)
  To: Huacai Chen
  Cc: Paolo Bonzini, Thomas Bogendoerfer, paulus, mpe,
	Benjamin Herrenschmidt, borntraeger, frankja, david, cohuck,
	heiko.carstens, gor, sean.j.christopherson, vkuznets, wanpengli,
	jmattson, joro, Thomas Gleixner, mingo, Borislav Petkov, x86,
	hpa, Marc Zyngier, james.morse, julien.thierry.kdev,
	suzuki.poulose, christoffer.dall, Peter Xu, thuth, kvm,
	linux-arm-kernel, kvmarm, open list:MIPS, kvm-ppc, linuxppc-dev,
	linux-s390, LKML



On 2020/4/27 13:40, Huacai Chen wrote:
> Reviewed-by: Huacai Chen <chenhc@lemote.com>
> 
> On Mon, Apr 27, 2020 at 12:35 PM Tianjia Zhang
> <tianjia.zhang@linux.alibaba.com> wrote:
>>
>> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
>> structure. For historical reasons, many kvm-related function parameters
>> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
>> patch does a unified cleanup of these remaining redundant parameters.
>>
>> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
>> ---
>>   arch/mips/include/asm/kvm_host.h |  28 +-------
>>   arch/mips/kvm/emulate.c          |  59 ++++++----------
>>   arch/mips/kvm/mips.c             |  11 ++-
>>   arch/mips/kvm/trap_emul.c        | 114 ++++++++++++++-----------------
>>   arch/mips/kvm/vz.c               |  26 +++----
>>   5 files changed, 87 insertions(+), 151 deletions(-)
>>

Hi Huacai,

These two patches (6/7 and 7/7) should be merged separately through the
MIPS architecture tree. At present, there seems to be no good way to
merge the patches for all architectures at once.

For this series, the patches for some architectures have been merged,
while others still need an updated patch.

Thanks and best,
Tianjia


* Re: [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters
  2020-05-27  6:24     ` Tianjia Zhang
@ 2020-05-29  9:48       ` Paolo Bonzini
  2020-06-16 11:54         ` Tianjia Zhang
  0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2020-05-29  9:48 UTC (permalink / raw)
  To: Tianjia Zhang, Huacai Chen
  Cc: Thomas Bogendoerfer, paulus, mpe, Benjamin Herrenschmidt,
	borntraeger, frankja, david, cohuck, heiko.carstens, gor,
	sean.j.christopherson, vkuznets, wanpengli, jmattson, joro,
	Thomas Gleixner, mingo, Borislav Petkov, x86, hpa, Marc Zyngier,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, Peter Xu, thuth, kvm, linux-arm-kernel, kvmarm,
	open list:MIPS, kvm-ppc, linuxppc-dev, linux-s390, LKML

On 27/05/20 08:24, Tianjia Zhang wrote:
>>>
>>>
> 
> Hi Huacai,
> 
> These two patches (6/7 and 7/7) should be merged separately through the
> MIPS architecture tree. At present, there seems to be no good way to
> merge the patches for all architectures at once.
> 
> For this series, the patches for some architectures have been merged,
> while others still need an updated patch.

Hi Tianjia, I will take care of this during the merge window.

Thanks,

Paolo



* Re: [PATCH v4 6/7] KVM: MIPS: clean up redundant 'kvm_run' parameters
  2020-05-29  9:48       ` Paolo Bonzini
@ 2020-06-16 11:54         ` Tianjia Zhang
  0 siblings, 0 replies; 29+ messages in thread
From: Tianjia Zhang @ 2020-06-16 11:54 UTC (permalink / raw)
  To: Paolo Bonzini, Huacai Chen
  Cc: Thomas Bogendoerfer, paulus, mpe, Benjamin Herrenschmidt,
	borntraeger, frankja, david, cohuck, heiko.carstens, gor,
	sean.j.christopherson, vkuznets, wanpengli, jmattson, joro,
	Thomas Gleixner, mingo, Borislav Petkov, x86, hpa, Marc Zyngier,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, Peter Xu, thuth, kvm, linux-arm-kernel, kvmarm,
	open list:MIPS, kvm-ppc, linuxppc-dev, linux-s390, LKML



On 2020/5/29 17:48, Paolo Bonzini wrote:
> On 27/05/20 08:24, Tianjia Zhang wrote:
>>>>
>>>>
>>
>> Hi Huacai,
>>
>> These two patches (6/7 and 7/7) should be merged separately through the
>> MIPS architecture tree. At present, there seems to be no good way to
>> merge the patches for all architectures at once.
>>
>> For this series, the patches for some architectures have been merged,
>> while others still need an updated patch.
> 
> Hi Tianjia, I will take care of this during the merge window.
> 
> Thanks,
> 
> Paolo
> 

Hi Paolo,

The following standalone patch is the v5 version of 5/7 in this series.

https://lkml.org/lkml/2020/5/28/106
([v5] KVM: PPC: clean up redundant kvm_run parameters in assembly)

Thanks and best,
Tianjia


* Re: [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
  2020-04-27  4:35 [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
                   ` (7 preceding siblings ...)
  2020-05-05  4:15 ` [PATCH v4 0/7] clean up redundant 'kvm_run' parameters Tianjia Zhang
@ 2020-06-23  9:42 ` Paolo Bonzini
  2020-06-23 10:00   ` Tianjia Zhang
  8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2020-06-23  9:42 UTC (permalink / raw)
  To: Tianjia Zhang, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On 27/04/20 06:35, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
> 
> This series of patches has completely cleaned the architecture of
> arm64, mips, ppc, and s390 (no such redundant code on x86). Due to
> the large number of modified codes, a separate patch is made for each
> platform. On the ppc platform, there is also a redundant structure
> pointer of 'kvm_run' in 'vcpu_arch', which has also been cleaned
> separately.

Tianjia, can you please refresh the patches so that each architecture
maintainer can pick them up?  Thanks very much for this work!

Paolo

> 
> ---
> v4 change:
>   mips: fixes two errors in entry.c.
> 
> v3 change:
>   Keep the existing `vcpu->run` in the function body unchanged.
> 
> v2 change:
>   s390 retains the original variable name and minimizes modification.
> 
> Tianjia Zhang (7):
>   KVM: s390: clean up redundant 'kvm_run' parameters
>   KVM: arm64: clean up redundant 'kvm_run' parameters
>   KVM: PPC: Remove redundant kvm_run from vcpu_arch
>   KVM: PPC: clean up redundant 'kvm_run' parameters
>   KVM: PPC: clean up redundant kvm_run parameters in assembly
>   KVM: MIPS: clean up redundant 'kvm_run' parameters
>   KVM: MIPS: clean up redundant kvm_run parameters in assembly
> 
>  arch/arm64/include/asm/kvm_coproc.h      |  12 +--
>  arch/arm64/include/asm/kvm_host.h        |  11 +--
>  arch/arm64/include/asm/kvm_mmu.h         |   2 +-
>  arch/arm64/kvm/handle_exit.c             |  36 +++----
>  arch/arm64/kvm/sys_regs.c                |  13 ++-
>  arch/mips/include/asm/kvm_host.h         |  32 +------
>  arch/mips/kvm/emulate.c                  |  59 ++++--------
>  arch/mips/kvm/entry.c                    |  21 ++---
>  arch/mips/kvm/mips.c                     |  14 +--
>  arch/mips/kvm/trap_emul.c                | 114 ++++++++++-------------
>  arch/mips/kvm/vz.c                       |  26 ++----
>  arch/powerpc/include/asm/kvm_book3s.h    |  16 ++--
>  arch/powerpc/include/asm/kvm_host.h      |   1 -
>  arch/powerpc/include/asm/kvm_ppc.h       |  27 +++---
>  arch/powerpc/kvm/book3s.c                |   4 +-
>  arch/powerpc/kvm/book3s.h                |   2 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c      |  12 +--
>  arch/powerpc/kvm/book3s_64_mmu_radix.c   |   4 +-
>  arch/powerpc/kvm/book3s_emulate.c        |  10 +-
>  arch/powerpc/kvm/book3s_hv.c             |  64 ++++++-------
>  arch/powerpc/kvm/book3s_hv_nested.c      |  12 +--
>  arch/powerpc/kvm/book3s_interrupts.S     |  17 ++--
>  arch/powerpc/kvm/book3s_paired_singles.c |  72 +++++++-------
>  arch/powerpc/kvm/book3s_pr.c             |  33 ++++---
>  arch/powerpc/kvm/booke.c                 |  39 ++++----
>  arch/powerpc/kvm/booke.h                 |   8 +-
>  arch/powerpc/kvm/booke_emulate.c         |   2 +-
>  arch/powerpc/kvm/booke_interrupts.S      |   9 +-
>  arch/powerpc/kvm/bookehv_interrupts.S    |  10 +-
>  arch/powerpc/kvm/e500_emulate.c          |  15 ++-
>  arch/powerpc/kvm/emulate.c               |  10 +-
>  arch/powerpc/kvm/emulate_loadstore.c     |  32 +++----
>  arch/powerpc/kvm/powerpc.c               |  72 +++++++-------
>  arch/powerpc/kvm/trace_hv.h              |   6 +-
>  arch/s390/kvm/kvm-s390.c                 |  23 +++--
>  virt/kvm/arm/arm.c                       |   6 +-
>  virt/kvm/arm/mmio.c                      |  11 ++-
>  virt/kvm/arm/mmu.c                       |   5 +-
>  38 files changed, 392 insertions(+), 470 deletions(-)
> 



* Re: [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
  2020-06-23  9:42 ` Paolo Bonzini
@ 2020-06-23 10:00   ` Tianjia Zhang
  2020-06-23 10:24     ` Paolo Bonzini
  0 siblings, 1 reply; 29+ messages in thread
From: Tianjia Zhang @ 2020-06-23 10:00 UTC (permalink / raw)
  To: Paolo Bonzini, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel



On 2020/6/23 17:42, Paolo Bonzini wrote:
> On 27/04/20 06:35, Tianjia Zhang wrote:
>> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
>> structure. For historical reasons, many kvm-related function parameters
>> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
>> patch does a unified cleanup of these remaining redundant parameters.
>>
>> This series of patches has completely cleaned the architecture of
>> arm64, mips, ppc, and s390 (no such redundant code on x86). Due to
>> the large number of modified codes, a separate patch is made for each
>> platform. On the ppc platform, there is also a redundant structure
>> pointer of 'kvm_run' in 'vcpu_arch', which has also been cleaned
>> separately.
> 
> Tianjia, can you please refresh the patches so that each architecture
> maintainer can pick them up?  Thanks very much for this work!
> 
> Paolo
> 

No problem, this is what I should do.
After I update them, should I submit separately for each architecture,
or submit them together in one patchset?

Thanks,
Tianjia


* Re: [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
  2020-06-23 10:00   ` Tianjia Zhang
@ 2020-06-23 10:24     ` Paolo Bonzini
  0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2020-06-23 10:24 UTC (permalink / raw)
  To: Tianjia Zhang, tsbogend, paulus, mpe, benh, borntraeger, frankja,
	david, cohuck, heiko.carstens, gor, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa,
	maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai
  Cc: kvm, linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel

On 23/06/20 12:00, Tianjia Zhang wrote:
> 
> 
> On 2020/6/23 17:42, Paolo Bonzini wrote:
>> On 27/04/20 06:35, Tianjia Zhang wrote:
>>> In the current kvm version, 'kvm_run' has been included in the
>>> 'kvm_vcpu'
>>> structure. For historical reasons, many kvm-related function parameters
>>> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
>>> patch does a unified cleanup of these remaining redundant parameters.
>>>
>>> This series of patches has completely cleaned the architecture of
>>> arm64, mips, ppc, and s390 (no such redundant code on x86). Due to
>>> the large number of modified codes, a separate patch is made for each
>>> platform. On the ppc platform, there is also a redundant structure
>>> pointer of 'kvm_run' in 'vcpu_arch', which has also been cleaned
>>> separately.
>>
>> Tianjia, can you please refresh the patches so that each architecture
>> maintainer can pick them up?  Thanks very much for this work!
>>
>> Paolo
>>
> 
> No problem, this is what I should do.
> After I update them, should I submit separately for each architecture,
> or submit them together in one patchset?

You can send them together.

Paolo



* Re: [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly
  2020-05-26  5:59   ` Paul Mackerras
@ 2020-07-13  3:07     ` Tianjia Zhang
  0 siblings, 0 replies; 29+ messages in thread
From: Tianjia Zhang @ 2020-07-13  3:07 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: pbonzini, tsbogend, mpe, benh, borntraeger, frankja, david,
	cohuck, heiko.carstens, gor, sean.j.christopherson, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, x86, hpa, maz,
	james.morse, julien.thierry.kdev, suzuki.poulose,
	christoffer.dall, peterx, thuth, chenhuacai, kvm,
	linux-arm-kernel, kvmarm, linux-mips, kvm-ppc, linuxppc-dev,
	linux-s390, linux-kernel



On 2020/5/26 13:59, Paul Mackerras wrote:
> On Mon, Apr 27, 2020 at 12:35:12PM +0800, Tianjia Zhang wrote:
>> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
>> structure. For historical reasons, many kvm-related function parameters
>> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
>> patch does a unified cleanup of these remaining redundant parameters.
> 
> Some of these changes don't look completely correct to me, see below.
> If you're expecting these patches to go through my tree, I can fix up
> the patch and commit it (with you as author), noting the changes I
> made in the commit message.  Do you want me to do that?
> 

I am very glad for you to do so. Although I have submitted a new
version of the patch, I would still prefer that you fix it up and
commit it.

Thanks and best,
Tianjia

>> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
>> index f7ad99d972ce..0eff749d8027 100644
>> --- a/arch/powerpc/kvm/book3s_interrupts.S
>> +++ b/arch/powerpc/kvm/book3s_interrupts.S
>> @@ -55,8 +55,7 @@
>>    ****************************************************************************/
>>   
>>   /* Registers:
>> - *  r3: kvm_run pointer
>> - *  r4: vcpu pointer
>> + *  r3: vcpu pointer
>>    */
>>   _GLOBAL(__kvmppc_vcpu_run)
>>   
>> @@ -68,8 +67,8 @@ kvm_start_entry:
>>   	/* Save host state to the stack */
>>   	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
>>   
>> -	/* Save r3 (kvm_run) and r4 (vcpu) */
>> -	SAVE_2GPRS(3, r1)
>> +	/* Save r3 (vcpu) */
>> +	SAVE_GPR(3, r1)
>>   
>>   	/* Save non-volatile registers (r14 - r31) */
>>   	SAVE_NVGPRS(r1)
>> @@ -82,11 +81,11 @@ kvm_start_entry:
>>   	PPC_STL	r0, _LINK(r1)
>>   
>>   	/* Load non-volatile guest state from the vcpu */
>> -	VCPU_LOAD_NVGPRS(r4)
>> +	VCPU_LOAD_NVGPRS(r3)
>>   
>>   kvm_start_lightweight:
>>   	/* Copy registers into shadow vcpu so we can access them in real mode */
>> -	mr	r3, r4
>> +	mr	r4, r3
> 
> This mr doesn't seem necessary.
> 
>>   	bl	FUNC(kvmppc_copy_to_svcpu)
>>   	nop
>>   	REST_GPR(4, r1)
> 
> This should be loading r4 from GPR3(r1), not GPR4(r1) - which is what
> REST_GPR(4, r1) will do.
> 
> Then, in the file but not in the patch context, there is this line:
> 
> 	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */
> 
> where once again GPR4 needs to be GPR3.
> 
>> @@ -191,10 +190,10 @@ after_sprg3_load:
>>   	PPC_STL	r31, VCPU_GPR(R31)(r7)
>>   
>>   	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
> 
> The comment should be modified to say "2nd" instead of "3rd",
> otherwise it is confusing.
> 
> The rest of the patch looks OK.
> 
> Paul.
> 

