* [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
@ 2013-06-07 23:03 David Daney
  2013-06-07 23:03 ` [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public David Daney
                   ` (34 more replies)
  0 siblings, 35 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

These patches take a somewhat different approach to MIPS
virtualization via the MIPS-VZ extensions than the patches previously
sent by Sanjay Lal.

Several facts about the code:

o Existing exception handlers are modified to hook into KVM, instead
  of intercepting all exceptions via the EBase register and then
  chaining to the real exception handlers (see the sketch below).

o Able to boot 64-bit SMP guests that use the FPU (I have booted 4-way
  SMP 64-bit MIPS/Linux).

o Additional overhead on every exception, even when *no* vCPU is
  running.

o Lower interrupt overhead than the EBase interception method when a
  vCPU *is* running.

o This code is somewhat smaller than the existing trap/emulate
  implementation (about 2100 lines vs. about 5300 lines).

o Currently probably only usable on the OCTEON III CPU model, as some
  MIPS-VZ implementation-defined behaviors were assumed to have the
  OCTEON III behavior.
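
A rough sketch of the first point (all identifiers here are
illustrative, not the actual names used in the series):

	/*
	 * Pseudocode of the hooked exception path: patch 02 parks
	 * k0/k1 in KScratch registers, patch 14 adds the thread flag,
	 * and patches 15/16 implement the guest-exit path.
	 */
	exception_entry()
	{
		save_k0_k1_to_kscratch();
		if (test_thread_flag(TIF_GUEST_MODE))
			exit_guest_and_handle_in_kvm();
		/* ... otherwise normal Linux exception handling ... */
	}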

Note: I think Ralf already has 17/31 (MIPS: Quit exposing Kconfig
symbols in uapi headers) queued, but I also include it here.

David Daney (31):
  MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
  MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ
  mips/kvm: Fix 32-bitisms in kvm_locore.S
  mips/kvm: Add casts to avoid pointer width mismatch build failures.
  mips/kvm: Use generic cache flushing functions.
  mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc
  mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
  mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
  mips/kvm: Factor trap-and-emulate support into a pluggable
    implementation.
  mips/kvm: Implement ioctls to get and set FPU registers.
  MIPS: Rearrange branch.c so it can be used by kvm code.
  MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
  mips/kvm: Add accessors for MIPS VZ registers.
  mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest
    Mode.
  mips/kvm: Exception handling to leave and reenter guest mode.
  mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
  MIPS: Quit exposing Kconfig symbols in uapi headers.
  mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
  mips/kvm: Add host definitions for MIPS VZ based host.
  mips/kvm: Hook into TLB fault handlers.
  mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
  mips/kvm: Split get_new_mmu_context into two parts.
  mips/kvm: Hook into CP unusable exception handler.
  mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
  mips/kvm: Add some asm-offsets constants used by MIPSVZ.
  mips/kvm: Split up Kconfig and Makefile definitions in preparation
    for MIPSVZ.
  mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
  mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
  mips/kvm: Add MIPSVZ support.
  mips/kvm: Enable MIPSVZ in Kconfig/Makefile
  mips/kvm: Allow for up to 8 KVM vcpus per vm.

 arch/mips/Kconfig                   |    1 +
 arch/mips/include/asm/branch.h      |    7 +
 arch/mips/include/asm/kvm_host.h    |  622 +-----------
 arch/mips/include/asm/kvm_mips_te.h |  589 +++++++++++
 arch/mips/include/asm/kvm_mips_vz.h |   29 +
 arch/mips/include/asm/mipsregs.h    |  264 +++++
 arch/mips/include/asm/mmu_context.h |   12 +-
 arch/mips/include/asm/processor.h   |    6 +
 arch/mips/include/asm/ptrace.h      |   36 +
 arch/mips/include/asm/stackframe.h  |  150 ++-
 arch/mips/include/asm/thread_info.h |    2 +
 arch/mips/include/asm/uasm.h        |    2 +-
 arch/mips/include/uapi/asm/inst.h   |   23 +-
 arch/mips/include/uapi/asm/ptrace.h |   17 +-
 arch/mips/kernel/asm-offsets.c      |  124 ++-
 arch/mips/kernel/branch.c           |   63 +-
 arch/mips/kernel/cpu-probe.c        |   34 +
 arch/mips/kernel/genex.S            |    8 +
 arch/mips/kernel/scall64-64.S       |   12 +
 arch/mips/kernel/scall64-n32.S      |   12 +
 arch/mips/kernel/traps.c            |   15 +-
 arch/mips/kvm/Kconfig               |   23 +-
 arch/mips/kvm/Makefile              |   15 +-
 arch/mips/kvm/kvm_locore.S          |  980 +++++++++---------
 arch/mips/kvm/kvm_mips.c            |  768 ++------------
 arch/mips/kvm/kvm_mips_comm.h       |    1 +
 arch/mips/kvm/kvm_mips_commpage.c   |    9 +-
 arch/mips/kvm/kvm_mips_dyntrans.c   |    4 +-
 arch/mips/kvm/kvm_mips_emul.c       |  312 +++---
 arch/mips/kvm/kvm_mips_int.c        |   53 +-
 arch/mips/kvm/kvm_mips_int.h        |    2 -
 arch/mips/kvm/kvm_mips_stats.c      |    6 +-
 arch/mips/kvm/kvm_mipsvz.c          | 1894 +++++++++++++++++++++++++++++++++++
 arch/mips/kvm/kvm_mipsvz_guest.S    |  234 +++++
 arch/mips/kvm/kvm_tlb.c             |  140 +--
 arch/mips/kvm/kvm_trap_emul.c       |  932 +++++++++++++++--
 arch/mips/mm/fault.c                |    8 +
 arch/mips/mm/tlbex-fault.S          |    6 +
 arch/mips/mm/tlbex.c                |   45 +-
 39 files changed, 5299 insertions(+), 2161 deletions(-)
 create mode 100644 arch/mips/include/asm/kvm_mips_te.h
 create mode 100644 arch/mips/include/asm/kvm_mips_vz.h
 create mode 100644 arch/mips/kvm/kvm_mipsvz.c
 create mode 100644 arch/mips/kvm/kvm_mipsvz_guest.S

-- 
1.7.11.7



* [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 11:41   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 02/31] MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ David Daney
                   ` (33 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <ddaney@caviumnetworks.com>
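
Move the KScratch register allocator out of tlbex.c and into
cpu-probe.c, and make it public, so that code outside the TLB
exception handler generator (the MIPS-VZ KVM host in later patches)
can also allocate KScratch registers.

A hypothetical caller (the uasm buffer and register number are
illustrative) might look like:

	u32 *buf;			/* uasm output buffer */
	unsigned int k0 = 26;		/* register number of $k0 */
	int sel = allocate_kscratch();

	if (sel < 0) {
		/* No KScratch register free; fall back to a memory
		 * save area. */
	} else {
		/* Emit "mtc0 $26, $31, sel" to park k0 in KScratch. */
		UASM_i_MTC0(&buf, k0, 31, sel);
	}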

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
---
 arch/mips/include/asm/mipsregs.h |  2 ++
 arch/mips/kernel/cpu-probe.c     | 29 +++++++++++++++++++++++++++++
 arch/mips/mm/tlbex.c             | 20 +-------------------
 3 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 87e6207..6e0da5aa 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -1806,6 +1806,8 @@ __BUILD_SET_C0(brcm_cmt_ctrl)
 __BUILD_SET_C0(brcm_config)
 __BUILD_SET_C0(brcm_mode)
 
+int allocate_kscratch(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_MIPSREGS_H */
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index c6568bf..ee1014e 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -1064,3 +1064,32 @@ __cpuinit void cpu_report(void)
 	if (c->options & MIPS_CPU_FPU)
 		printk(KERN_INFO "FPU revision is: %08x\n", c->fpu_id);
 }
+
+static DEFINE_SPINLOCK(kscratch_used_lock);
+
+static unsigned int kscratch_used_mask;
+
+int allocate_kscratch(void)
+{
+	int r;
+	unsigned int a;
+
+	spin_lock(&kscratch_used_lock);
+
+	a = cpu_data[0].kscratch_mask & ~kscratch_used_mask;
+
+	r = ffs(a);
+
+	if (r == 0) {
+		r = -1;
+		goto out;
+	}
+
+	r--; /* make it zero based */
+
+	kscratch_used_mask |= (1 << r);
+out:
+	spin_unlock(&kscratch_used_lock);
+
+	return r;
+}
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index ce9818e..001b87c 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -30,6 +30,7 @@
 #include <linux/cache.h>
 
 #include <asm/cacheflush.h>
+#include <asm/mipsregs.h>
 #include <asm/pgtable.h>
 #include <asm/war.h>
 #include <asm/uasm.h>
@@ -307,25 +308,6 @@ static int check_for_high_segbits __cpuinitdata;
 
 static int check_for_high_segbits __cpuinitdata;
 
-static unsigned int kscratch_used_mask __cpuinitdata;
-
-static int __cpuinit allocate_kscratch(void)
-{
-	int r;
-	unsigned int a = cpu_data[0].kscratch_mask & ~kscratch_used_mask;
-
-	r = ffs(a);
-
-	if (r == 0)
-		return -1;
-
-	r--; /* make it zero based */
-
-	kscratch_used_mask |= (1 << r);
-
-	return r;
-}
-
 static int scratch_reg __cpuinitdata;
 static int pgd_reg __cpuinitdata;
 enum vmalloc64_mode {not_refill, refill_scratch, refill_noscratch};
-- 
1.7.11.7



* [PATCH 02/31] MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
  2013-06-07 23:03 ` [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-07 23:03 ` [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S David Daney
                   ` (32 subsequent siblings)
  34 siblings, 0 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

We cannot clobber any registers on exception entry, as a running
guest needs them all.
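
The resulting flow, distilled from the hunks below:

	/* Exception entry (genex.S): park k0/k1 before clobbering them. */
	MTC0	k0, CP0_KSCRATCH1
	MTC0	k1, CP0_KSCRATCH2

	/* SAVE_SOME (stackframe.h): move the saved values into pt_regs. */
	MFC0	k0, CP0_KSCRATCH1
	MFC0	$3, CP0_KSCRATCH2
	LONG_S	k0, PT_R26(sp)
	LONG_S	$3, PT_R27(sp)

	/* RESTORE_SP_AND_RET (stackframe.h): reload before eret. */
	LONG_L	k0, PT_R26(sp)
	LONG_L	k1, PT_R27(sp)

KScratch selects 2 and 3 ($31, 2 and $31, 3) are reserved statically
for this, which is why kscratch_used_mask is pre-set to 0xc.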

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/mipsregs.h   |  2 ++
 arch/mips/include/asm/stackframe.h | 15 +++++++++++++++
 arch/mips/kernel/cpu-probe.c       |  7 ++++++-
 arch/mips/kernel/genex.S           |  5 +++++
 arch/mips/kernel/scall64-64.S      | 12 ++++++++++++
 arch/mips/kernel/scall64-n32.S     | 12 ++++++++++++
 arch/mips/kernel/traps.c           |  5 +++++
 arch/mips/mm/tlbex.c               | 25 +++++++++++++++++++++++++
 8 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 6e0da5aa..6f03c72 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -73,6 +73,8 @@
 #define CP0_TAGHI $29
 #define CP0_ERROREPC $30
 #define CP0_DESAVE $31
+#define CP0_KSCRATCH1 $31, 2
+#define CP0_KSCRATCH2 $31, 3
 
 /*
  * R4640/R4650 cp0 register names.  These registers are listed
diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index a89d1b1..20627b2 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -181,6 +181,16 @@
 #endif
 		LONG_S	k0, PT_R29(sp)
 		LONG_S	$3, PT_R3(sp)
+#ifdef CONFIG_KVM_MIPSVZ
+		/*
+		 * With KVM_MIPSVZ, we must not clobber k0/k1;
+		 * they were saved before they were used.
+		 */
+		MFC0	k0, CP0_KSCRATCH1
+		MFC0	$3, CP0_KSCRATCH2
+		LONG_S	k0, PT_R26(sp)
+		LONG_S	$3, PT_R27(sp)
+#endif
 		/*
 		 * You might think that you don't need to save $0,
 		 * but the FPU emulator and gdb remote debug stub
@@ -447,6 +457,11 @@
 		.endm
 
 		.macro	RESTORE_SP_AND_RET
+
+#ifdef CONFIG_KVM_MIPSVZ
+		LONG_L	k0, PT_R26(sp)
+		LONG_L	k1, PT_R27(sp)
+#endif
 		LONG_L	sp, PT_R29(sp)
 		.set	mips3
 		eret
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index ee1014e..7a07edb 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -1067,7 +1067,12 @@ __cpuinit void cpu_report(void)
 
 static DEFINE_SPINLOCK(kscratch_used_lock);
 
-static unsigned int kscratch_used_mask;
+static unsigned int kscratch_used_mask
+#ifdef CONFIG_KVM_MIPSVZ
+/* The KVM_MIPSVZ implementation uses these two statically. */
+= 0xc
+#endif
+;
 
 int allocate_kscratch(void)
 {
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index 31fa856..163e299 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -46,6 +46,11 @@
 NESTED(except_vec3_generic, 0, sp)
 	.set	push
 	.set	noat
+#ifdef CONFIG_KVM_MIPSVZ
+	/* With KVM_MIPSVZ, we must not clobber k0/k1 */
+	MTC0	k0, CP0_KSCRATCH1
+	MTC0	k1, CP0_KSCRATCH2
+#endif
 #if R5432_CP0_INTERRUPT_WAR
 	mfc0	k0, CP0_INDEX
 #endif
diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S
index 97a5909..5ff4882 100644
--- a/arch/mips/kernel/scall64-64.S
+++ b/arch/mips/kernel/scall64-64.S
@@ -62,6 +62,9 @@ NESTED(handle_sys64, PT_SIZE, sp)
 	jalr	t2			# Do The Real Thing (TM)
 
 	li	t0, -EMAXERRNO - 1	# error?
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	ld	t2, TI_TP_VALUE($28)
+#endif
 	sltu	t0, t0, v0
 	sd	t0, PT_R7(sp)		# set error flag
 	beqz	t0, 1f
@@ -70,6 +73,9 @@ NESTED(handle_sys64, PT_SIZE, sp)
 	dnegu	v0			# error
 	sd	t1, PT_R0(sp)		# save it for syscall restarting
 1:	sd	v0, PT_R2(sp)		# result
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	sd	t2, PT_R26(sp)
+#endif
 
 n64_syscall_exit:
 	j	syscall_exit_partial
@@ -93,6 +99,9 @@ syscall_trace_entry:
 	jalr	t0
 
 	li	t0, -EMAXERRNO - 1	# error?
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	ld	t2, TI_TP_VALUE($28)
+#endif
 	sltu	t0, t0, v0
 	sd	t0, PT_R7(sp)		# set error flag
 	beqz	t0, 1f
@@ -101,6 +110,9 @@ syscall_trace_entry:
 	dnegu	v0			# error
 	sd	t1, PT_R0(sp)		# save it for syscall restarting
 1:	sd	v0, PT_R2(sp)		# result
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	sd	t2, PT_R26(sp)
+#endif
 
 	j	syscall_exit
 
diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
index edcb659..cba35b4 100644
--- a/arch/mips/kernel/scall64-n32.S
+++ b/arch/mips/kernel/scall64-n32.S
@@ -55,6 +55,9 @@ NESTED(handle_sysn32, PT_SIZE, sp)
 	jalr	t2			# Do The Real Thing (TM)
 
 	li	t0, -EMAXERRNO - 1	# error?
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	ld	t2, TI_TP_VALUE($28)
+#endif
 	sltu	t0, t0, v0
 	sd	t0, PT_R7(sp)		# set error flag
 	beqz	t0, 1f
@@ -63,6 +66,9 @@ NESTED(handle_sysn32, PT_SIZE, sp)
 	dnegu	v0			# error
 	sd	t1, PT_R0(sp)		# save it for syscall restarting
 1:	sd	v0, PT_R2(sp)		# result
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	sd	t2, PT_R26(sp)
+#endif
 
 	j	syscall_exit_partial
 
@@ -85,6 +91,9 @@ n32_syscall_trace_entry:
 	jalr	t0
 
 	li	t0, -EMAXERRNO - 1	# error?
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	ld	t2, TI_TP_VALUE($28)
+#endif
 	sltu	t0, t0, v0
 	sd	t0, PT_R7(sp)		# set error flag
 	beqz	t0, 1f
@@ -93,6 +102,9 @@ n32_syscall_trace_entry:
 	dnegu	v0			# error
 	sd	t1, PT_R0(sp)		# save it for syscall restarting
 1:	sd	v0, PT_R2(sp)		# result
+#if defined(CONFIG_KVM_MIPSVZ) && defined(CONFIG_FAST_ACCESS_TO_THREAD_POINTER)
+	sd	t2, PT_R26(sp)
+#endif
 
 	j	syscall_exit
 
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index e3be670..f008795 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1483,6 +1483,11 @@ void __init *set_except_vector(int n, void *addr)
 #endif
 		u32 *buf = (u32 *)(ebase + 0x200);
 		unsigned int k0 = 26;
+#ifdef CONFIG_KVM_MIPSVZ
+		unsigned int k1 = 27;
+		UASM_i_MTC0(&buf, k0, 31, 2);
+		UASM_i_MTC0(&buf, k1, 31, 3);
+#endif
 		if ((handler & jump_mask) == ((ebase + 0x200) & jump_mask)) {
 			uasm_i_j(&buf, handler & ~jump_mask);
 			uasm_i_nop(&buf);
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 001b87c..3ce7208 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -372,11 +372,19 @@ static void __cpuinit build_restore_work_registers(u32 **p)
 {
 	if (scratch_reg > 0) {
 		UASM_i_MFC0(p, 1, 31, scratch_reg);
+#ifdef CONFIG_KVM_MIPSVZ
+		UASM_i_MFC0(p, K0, 31, 2);
+		UASM_i_MFC0(p, K1, 31, 3);
+#endif
 		return;
 	}
 	/* K0 already points to save area, restore $1 and $2  */
 	UASM_i_LW(p, 1, offsetof(struct tlb_reg_save, a), K0);
 	UASM_i_LW(p, 2, offsetof(struct tlb_reg_save, b), K0);
+#ifdef CONFIG_KVM_MIPSVZ
+	UASM_i_MFC0(p, K0, 31, 2);
+	UASM_i_MFC0(p, K1, 31, 3);
+#endif
 }
 
 #ifndef CONFIG_MIPS_PGD_C0_CONTEXT
@@ -1089,6 +1097,11 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
 	int vmalloc_branch_delay_filled = 0;
 	const int scratch = 1; /* Our extra working register */
 
+#ifdef CONFIG_KVM_MIPSVZ
+	UASM_i_MTC0(p, K0, 31, 2);
+	UASM_i_MTC0(p, K1, 31, 3);
+#endif
+
 	rv.huge_pte = scratch;
 	rv.restore_scratch = 0;
 
@@ -1244,6 +1257,10 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
 		rv.restore_scratch = 1;
 	}
 
+#ifdef CONFIG_KVM_MIPSVZ
+	UASM_i_MFC0(p, K0, 31, 2);
+	UASM_i_MFC0(p, K1, 31, 3);
+#endif
 	uasm_i_eret(p); /* return from trap */
 
 	return rv;
@@ -1277,6 +1294,10 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
 							  scratch_reg);
 		vmalloc_mode = refill_scratch;
 	} else {
+#ifdef CONFIG_KVM_MIPSVZ
+		UASM_i_MTC0(&p, K0, 31, 2);
+		UASM_i_MTC0(&p, K1, 31, 3);
+#endif
 		htlb_info.huge_pte = K0;
 		htlb_info.restore_scratch = 0;
 		vmalloc_mode = refill_noscratch;
@@ -1311,6 +1332,10 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
 		build_update_entries(&p, K0, K1);
 		build_tlb_write_entry(&p, &l, &r, tlb_random);
 		uasm_l_leave(&l, p);
+#ifdef CONFIG_KVM_MIPSVZ
+		UASM_i_MFC0(&p, K0, 31, 2);
+		UASM_i_MFC0(&p, K1, 31, 3);
+#endif
 		uasm_i_eret(&p); /* return from trap */
 	}
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
-- 
1.7.11.7



* [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
  2013-06-07 23:03 ` [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public David Daney
  2013-06-07 23:03 ` [PATCH 02/31] MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:09   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures David Daney
                   ` (31 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

For a warning-free compile, we need to use the width-aware PTR_LI and
PTR_LA macros.  Use the LI variant for immediate data and the LA
variant for addresses.
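
Roughly, <asm/asm.h> keys the expansion of these macros on the
pointer size (a sketch, not the verbatim header):

	#if (_MIPS_SZPTR == 32)
	#define PTR_LI		li
	#define PTR_LA		la
	#else
	#define PTR_LI		dli
	#define PTR_LA		dla
	#endif

On 64-bit builds a bare "la" of immediate data such as ~0x2ff draws
assembler warnings; the width-aware variants pick the 64-bit forms
("dli"/"dla") when pointers are 64 bits wide, which silences them.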

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/kvm_locore.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index dca2aa6..e86fa2a 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -310,7 +310,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
     LONG_S  t0, VCPU_R26(k1)
 
     /* Get GUEST k1 and save it in VCPU */
-    la      t1, ~0x2ff
+	PTR_LI	t1, ~0x2ff
     mfc0    t0, CP0_EBASE
     and     t0, t0, t1
     LONG_L  t0, 0x3000(t0)
@@ -384,14 +384,14 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
     mtc0        k0, CP0_DDATA_LO
 
     /* Restore RDHWR access */
-    la      k0, 0x2000000F
+	PTR_LI	k0, 0x2000000F
     mtc0    k0,  CP0_HWRENA
 
     /* Jump to handler */
 FEXPORT(__kvm_mips_jump_to_handler)
     /* XXXKYMA: not sure if this is safe, how large is the stack?? */
     /* Now jump to the kvm_mips_handle_exit() to see if we can deal with this in the kernel */
-    la          t9,kvm_mips_handle_exit
+	PTR_LA	t9, kvm_mips_handle_exit
     jalr.hb     t9
     addiu       sp,sp, -CALLFRAME_SIZ           /* BD Slot */
 
@@ -566,7 +566,7 @@ __kvm_mips_return_to_host:
     mtlo    k0
 
     /* Restore RDHWR access */
-    la      k0, 0x2000000F
+	PTR_LI	k0, 0x2000000F
     mtc0    k0,  CP0_HWRENA
 
 
-- 
1.7.11.7



* [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (2 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:14   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 05/31] mips/kvm: Use generic cache flushing functions David Daney
                   ` (30 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

When building for 64-bit, we need these casts to make it build.
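
For example, on a 64-bit build (illustrative only):

	uint32_t *opc;

	KVM_GUEST_KSEGX(opc);			/* pointer used in integer
						   context: build failure */
	KVM_GUEST_KSEGX((unsigned long)opc);	/* builds cleanly */

The printk format fix below is the same story: CKSEG0ADDR() yields an
unsigned long on 64-bit builds, so %#x becomes %#lx.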

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/kvm_mips.c          | 4 ++--
 arch/mips/kvm/kvm_mips_dyntrans.c | 4 ++--
 arch/mips/kvm/kvm_mips_emul.c     | 2 +-
 arch/mips/kvm/kvm_tlb.c           | 4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index d934b01..6018e2a 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -303,7 +303,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	}
 
 	/* Save Linux EBASE */
-	vcpu->arch.host_ebase = (void *)read_c0_ebase();
+	vcpu->arch.host_ebase = (void *)(long)(read_c0_ebase() & 0x3ff);
 
 	gebase = kzalloc(ALIGN(size, PAGE_SIZE), GFP_KERNEL);
 
@@ -339,7 +339,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	offset = 0x2000;
 	kvm_info("Installing KVM Exception handlers @ %p, %#x bytes\n",
 		 gebase + offset,
-		 mips32_GuestExceptionEnd - mips32_GuestException);
+		 (unsigned)(mips32_GuestExceptionEnd - mips32_GuestException));
 
 	memcpy(gebase + offset, mips32_GuestException,
 	       mips32_GuestExceptionEnd - mips32_GuestException);
diff --git a/arch/mips/kvm/kvm_mips_dyntrans.c b/arch/mips/kvm/kvm_mips_dyntrans.c
index 96528e2..dd0b8f9 100644
--- a/arch/mips/kvm/kvm_mips_dyntrans.c
+++ b/arch/mips/kvm/kvm_mips_dyntrans.c
@@ -94,7 +94,7 @@ kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 						      cop0);
 	}
 
-	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
+	if (KVM_GUEST_KSEGX((unsigned long)opc) == KVM_GUEST_KSEG0) {
 		kseg0_opc =
 		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
 			       (vcpu, (unsigned long) opc));
@@ -129,7 +129,7 @@ kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 	    offsetof(struct mips_coproc,
 		     reg[rd][sel]) + offsetof(struct kvm_mips_commpage, cop0);
 
-	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
+	if (KVM_GUEST_KSEGX((unsigned long)opc) == KVM_GUEST_KSEG0) {
 		kseg0_opc =
 		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
 			       (vcpu, (unsigned long) opc));
diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
index 4b6274b..af9a661 100644
--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -892,7 +892,7 @@ int kvm_mips_sync_icache(unsigned long va, struct kvm_vcpu *vcpu)
 	pfn = kvm->arch.guest_pmap[gfn];
 	pa = (pfn << PAGE_SHIFT) | offset;
 
-	printk("%s: va: %#lx, unmapped: %#x\n", __func__, va, CKSEG0ADDR(pa));
+	printk("%s: va: %#lx, unmapped: %#lx\n", __func__, va, CKSEG0ADDR(pa));
 
 	mips32_SyncICache(CKSEG0ADDR(pa), 32);
 	return 0;
diff --git a/arch/mips/kvm/kvm_tlb.c b/arch/mips/kvm/kvm_tlb.c
index c777dd3..5e189be 100644
--- a/arch/mips/kvm/kvm_tlb.c
+++ b/arch/mips/kvm/kvm_tlb.c
@@ -353,7 +353,7 @@ int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
 	unsigned long entrylo0 = 0, entrylo1 = 0;
 
 
-	pfn0 = CPHYSADDR(vcpu->arch.kseg0_commpage) >> PAGE_SHIFT;
+	pfn0 = CPHYSADDR((unsigned long)vcpu->arch.kseg0_commpage) >> PAGE_SHIFT;
 	pfn1 = 0;
 	entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) | (0x3 << 3) | (1 << 2) |
 			(0x1 << 1);
@@ -916,7 +916,7 @@ uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 			inst = *(opc);
 		}
 		local_irq_restore(flags);
-	} else if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
+	} else if (KVM_GUEST_KSEGX((unsigned long)opc) == KVM_GUEST_KSEG0) {
 		paddr =
 		    kvm_mips_translate_guest_kseg0_to_hpa(vcpu,
 							 (unsigned long) opc);
-- 
1.7.11.7



* [PATCH 05/31] mips/kvm: Use generic cache flushing functions.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (3 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:17   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc David Daney
                   ` (29 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

We don't know if we have the r4k-specific functions available, so use
the universally available __flush_cache_all() instead.  This takes
longer, as it flushes both the i-cache and the d-cache, but it works
on all CPUs.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/kvm_mips_emul.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
index af9a661..a2c6687 100644
--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -916,8 +916,6 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 		       struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	extern void (*r4k_blast_dcache) (void);
-	extern void (*r4k_blast_icache) (void);
 	enum emulation_result er = EMULATE_DONE;
 	int32_t offset, cache, op_inst, op, base;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
@@ -954,9 +952,9 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 		     arch->gprs[base], offset);
 
 		if (cache == MIPS_CACHE_DCACHE)
-			r4k_blast_dcache();
+			__flush_cache_all();
 		else if (cache == MIPS_CACHE_ICACHE)
-			r4k_blast_icache();
+			__flush_cache_all();
 		else {
 			printk("%s: unsupported CACHE INDEX operation\n",
 			       __func__);
-- 
1.7.11.7



* [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (4 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 05/31] mips/kvm: Use generic cache flushing functions David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:18   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername David Daney
                   ` (28 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The proper MIPS name for this register is EPC, so use that.

Change the asm-offsets name to KVM_VCPU_ARCH_EPC, so that the symbol
name prefix matches the structure name.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_host.h |   2 +-
 arch/mips/kernel/asm-offsets.c   |   2 +-
 arch/mips/kvm/kvm_locore.S       |   6 +-
 arch/mips/kvm/kvm_mips.c         |  12 ++--
 arch/mips/kvm/kvm_mips_emul.c    | 140 +++++++++++++++++++--------------------
 arch/mips/kvm/kvm_mips_int.c     |   8 +--
 arch/mips/kvm/kvm_trap_emul.c    |  20 +++---
 7 files changed, 95 insertions(+), 95 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 4d6fa0b..d9ee320 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -363,7 +363,7 @@ struct kvm_vcpu_arch {
 	unsigned long gprs[32];
 	unsigned long hi;
 	unsigned long lo;
-	unsigned long pc;
+	unsigned long epc;
 
 	/* FPU State */
 	struct mips_fpu_struct fpu;
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 0845091..22bf8f5 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -385,7 +385,7 @@ void output_kvm_defines(void)
 	OFFSET(VCPU_R31, kvm_vcpu_arch, gprs[31]);
 	OFFSET(VCPU_LO, kvm_vcpu_arch, lo);
 	OFFSET(VCPU_HI, kvm_vcpu_arch, hi);
-	OFFSET(VCPU_PC, kvm_vcpu_arch, pc);
+	OFFSET(KVM_VCPU_ARCH_EPC, kvm_vcpu_arch, epc);
 	OFFSET(VCPU_COP0, kvm_vcpu_arch, cop0);
 	OFFSET(VCPU_GUEST_KERNEL_ASID, kvm_vcpu_arch, guest_kernel_asid);
 	OFFSET(VCPU_GUEST_USER_ASID, kvm_vcpu_arch, guest_user_asid);
diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index e86fa2a..a434bbe 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -151,7 +151,7 @@ FEXPORT(__kvm_mips_vcpu_run)
 
 
 	/* Set Guest EPC */
-	LONG_L		t0, VCPU_PC(k1)
+	LONG_L		t0, KVM_VCPU_ARCH_EPC(k1)
 	mtc0		t0, CP0_EPC
 
 FEXPORT(__kvm_mips_load_asid)
@@ -330,7 +330,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 
     /* Save Host level EPC, BadVaddr and Cause to VCPU, useful to process the exception */
     mfc0    k0,CP0_EPC
-    LONG_S  k0, VCPU_PC(k1)
+    LONG_S  k0, KVM_VCPU_ARCH_EPC(k1)
 
     mfc0    k0, CP0_BADVADDR
     LONG_S  k0, VCPU_HOST_CP0_BADVADDR(k1)
@@ -438,7 +438,7 @@ __kvm_mips_return_to_guest:
 
 
 	/* Set Guest EPC */
-	LONG_L		t0, VCPU_PC(k1)
+	LONG_L		t0, KVM_VCPU_ARCH_EPC(k1)
 	mtc0		t0, CP0_EPC
 
     /* Set the ASID for the Guest Kernel */
diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index 6018e2a..4ac5ab4 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -583,7 +583,7 @@ static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
 		v = (long)vcpu->arch.lo;
 		break;
 	case KVM_REG_MIPS_PC:
-		v = (long)vcpu->arch.pc;
+		v = (long)vcpu->arch.epc;
 		break;
 
 	case KVM_REG_MIPS_CP0_INDEX:
@@ -658,7 +658,7 @@ static int kvm_mips_set_reg(struct kvm_vcpu *vcpu,
 		vcpu->arch.lo = v;
 		break;
 	case KVM_REG_MIPS_PC:
-		vcpu->arch.pc = v;
+		vcpu->arch.epc = v;
 		break;
 
 	case KVM_REG_MIPS_CP0_INDEX:
@@ -890,7 +890,7 @@ int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
 		return -1;
 
 	printk("VCPU Register Dump:\n");
-	printk("\tpc = 0x%08lx\n", vcpu->arch.pc);;
+	printk("\tepc = 0x%08lx\n", vcpu->arch.epc);
 	printk("\texceptions: %08lx\n", vcpu->arch.pending_exceptions);
 
 	for (i = 0; i < 32; i += 4) {
@@ -920,7 +920,7 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
 	vcpu->arch.hi = regs->hi;
 	vcpu->arch.lo = regs->lo;
-	vcpu->arch.pc = regs->pc;
+	vcpu->arch.epc = regs->pc;
 
 	return 0;
 }
@@ -934,7 +934,7 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	regs->hi = vcpu->arch.hi;
 	regs->lo = vcpu->arch.lo;
-	regs->pc = vcpu->arch.pc;
+	regs->pc = vcpu->arch.epc;
 
 	return 0;
 }
@@ -1014,7 +1014,7 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	uint32_t cause = vcpu->arch.host_cp0_cause;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
index a2c6687..7cbc3bc 100644
--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -213,17 +213,17 @@ enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause)
 	enum emulation_result er = EMULATE_DONE;
 
 	if (cause & CAUSEF_BD) {
-		branch_pc = kvm_compute_return_epc(vcpu, vcpu->arch.pc);
+		branch_pc = kvm_compute_return_epc(vcpu, vcpu->arch.epc);
 		if (branch_pc == KVM_INVALID_INST) {
 			er = EMULATE_FAIL;
 		} else {
-			vcpu->arch.pc = branch_pc;
-			kvm_debug("BD update_pc(): New PC: %#lx\n", vcpu->arch.pc);
+			vcpu->arch.epc = branch_pc;
+			kvm_debug("BD update_pc(): New PC: %#lx\n", vcpu->arch.epc);
 		}
 	} else
-		vcpu->arch.pc += 4;
+		vcpu->arch.epc += 4;
 
-	kvm_debug("update_pc(): New PC: %#lx\n", vcpu->arch.pc);
+	kvm_debug("update_pc(): New PC: %#lx\n", vcpu->arch.epc);
 
 	return er;
 }
@@ -255,17 +255,17 @@ enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
 	enum emulation_result er = EMULATE_DONE;
 
 	if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
-		kvm_debug("[%#lx] ERET to %#lx\n", vcpu->arch.pc,
+		kvm_debug("[%#lx] ERET to %#lx\n", vcpu->arch.epc,
 			  kvm_read_c0_guest_epc(cop0));
 		kvm_clear_c0_guest_status(cop0, ST0_EXL);
-		vcpu->arch.pc = kvm_read_c0_guest_epc(cop0);
+		vcpu->arch.epc = kvm_read_c0_guest_epc(cop0);
 
 	} else if (kvm_read_c0_guest_status(cop0) & ST0_ERL) {
 		kvm_clear_c0_guest_status(cop0, ST0_ERL);
-		vcpu->arch.pc = kvm_read_c0_guest_errorepc(cop0);
+		vcpu->arch.epc = kvm_read_c0_guest_errorepc(cop0);
 	} else {
 		printk("[%#lx] ERET when MIPS_SR_EXL|MIPS_SR_ERL == 0\n",
-		       vcpu->arch.pc);
+		       vcpu->arch.epc);
 		er = EMULATE_FAIL;
 	}
 
@@ -276,7 +276,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
 
-	kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n", vcpu->arch.pc,
+	kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n", vcpu->arch.epc,
 		  vcpu->arch.pending_exceptions);
 
 	++vcpu->stat.wait_exits;
@@ -304,7 +304,7 @@ enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_FAIL;
-	uint32_t pc = vcpu->arch.pc;
+	uint32_t pc = vcpu->arch.epc;
 
 	printk("[%#x] COP0_TLBR [%ld]\n", pc, kvm_read_c0_guest_index(cop0));
 	return er;
@@ -317,7 +317,7 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 	int index = kvm_read_c0_guest_index(cop0);
 	enum emulation_result er = EMULATE_DONE;
 	struct kvm_mips_tlb *tlb = NULL;
-	uint32_t pc = vcpu->arch.pc;
+	uint32_t pc = vcpu->arch.epc;
 
 	if (index < 0 || index >= KVM_MIPS_GUEST_TLB_SIZE) {
 		printk("%s: illegal index: %d\n", __func__, index);
@@ -356,7 +356,7 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
 	struct kvm_mips_tlb *tlb = NULL;
-	uint32_t pc = vcpu->arch.pc;
+	uint32_t pc = vcpu->arch.epc;
 	int index;
 
 #if 1
@@ -397,7 +397,7 @@ enum emulation_result kvm_mips_emul_tlbp(struct kvm_vcpu *vcpu)
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	long entryhi = kvm_read_c0_guest_entryhi(cop0);
 	enum emulation_result er = EMULATE_DONE;
-	uint32_t pc = vcpu->arch.pc;
+	uint32_t pc = vcpu->arch.epc;
 	int index = -1;
 
 	index = kvm_mips_guest_tlb_lookup(vcpu, entryhi);
@@ -417,14 +417,14 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
 	int32_t rt, rd, copz, sel, co_bit, op;
-	uint32_t pc = vcpu->arch.pc;
+	uint32_t pc = vcpu->arch.epc;
 	unsigned long curr_pc;
 
 	/*
 	 * Update PC and hold onto current PC in case there is
 	 * an error and we want to rollback the PC
 	 */
-	curr_pc = vcpu->arch.pc;
+	curr_pc = vcpu->arch.epc;
 	er = update_pc(vcpu, cause);
 	if (er == EMULATE_FAIL) {
 		return er;
@@ -585,7 +585,7 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 		case dmtc_op:
 			printk
 			    ("!!!!!!![%#lx]dmtc_op: rt: %d, rd: %d, sel: %d!!!!!!\n",
-			     vcpu->arch.pc, rt, rd, sel);
+			     vcpu->arch.epc, rt, rd, sel);
 			er = EMULATE_FAIL;
 			break;
 
@@ -600,11 +600,11 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 			/* EI */
 			if (inst & 0x20) {
 				kvm_debug("[%#lx] mfmcz_op: EI\n",
-					  vcpu->arch.pc);
+					  vcpu->arch.epc);
 				kvm_set_c0_guest_status(cop0, ST0_IE);
 			} else {
 				kvm_debug("[%#lx] mfmcz_op: DI\n",
-					  vcpu->arch.pc);
+					  vcpu->arch.epc);
 				kvm_clear_c0_guest_status(cop0, ST0_IE);
 			}
 
@@ -629,7 +629,7 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 		default:
 			printk
 			    ("[%#lx]MachEmulateCP0: unsupported COP0, copz: 0x%x\n",
-			     vcpu->arch.pc, copz);
+			     vcpu->arch.epc, copz);
 			er = EMULATE_FAIL;
 			break;
 		}
@@ -640,7 +640,7 @@ done:
 	 * Rollback PC only if emulation was unsuccessful
 	 */
 	if (er == EMULATE_FAIL) {
-		vcpu->arch.pc = curr_pc;
+		vcpu->arch.epc = curr_pc;
 	}
 
 dont_update_pc:
@@ -667,7 +667,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 	 * Update PC and hold onto current PC in case there is
 	 * an error and we want to rollback the PC
 	 */
-	curr_pc = vcpu->arch.pc;
+	curr_pc = vcpu->arch.epc;
 	er = update_pc(vcpu, cause);
 	if (er == EMULATE_FAIL)
 		return er;
@@ -723,7 +723,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		*(uint32_t *) data = vcpu->arch.gprs[rt];
 
 		kvm_debug("[%#lx] OP_SW: eaddr: %#lx, gpr: %#lx, data: %#x\n",
-			  vcpu->arch.pc, vcpu->arch.host_cp0_badvaddr,
+			  vcpu->arch.epc, vcpu->arch.host_cp0_badvaddr,
 			  vcpu->arch.gprs[rt], *(uint32_t *) data);
 		break;
 
@@ -748,7 +748,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		*(uint16_t *) data = vcpu->arch.gprs[rt];
 
 		kvm_debug("[%#lx] OP_SH: eaddr: %#lx, gpr: %#lx, data: %#x\n",
-			  vcpu->arch.pc, vcpu->arch.host_cp0_badvaddr,
+			  vcpu->arch.epc, vcpu->arch.host_cp0_badvaddr,
 			  vcpu->arch.gprs[rt], *(uint32_t *) data);
 		break;
 
@@ -762,7 +762,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 	 * Rollback PC if emulation was unsuccessful
 	 */
 	if (er == EMULATE_FAIL) {
-		vcpu->arch.pc = curr_pc;
+		vcpu->arch.epc = curr_pc;
 	}
 
 	return er;
@@ -926,7 +926,7 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 	 * Update PC and hold onto current PC in case there is
 	 * an error and we want to rollback the PC
 	 */
-	curr_pc = vcpu->arch.pc;
+	curr_pc = vcpu->arch.epc;
 	er = update_pc(vcpu, cause);
 	if (er == EMULATE_FAIL)
 		return er;
@@ -948,7 +948,7 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 	if (op == MIPS_CACHE_OP_INDEX_INV) {
 		kvm_debug
 		    ("@ %#lx/%#lx CACHE (cache: %#x, op: %#x, base[%d]: %#lx, offset: %#x\n",
-		     vcpu->arch.pc, vcpu->arch.gprs[31], cache, op, base,
+		     vcpu->arch.epc, vcpu->arch.gprs[31], cache, op, base,
 		     arch->gprs[base], offset);
 
 		if (cache == MIPS_CACHE_DCACHE)
@@ -1055,7 +1055,7 @@ skip_fault:
 	/*
 	 * Rollback PC
 	 */
-	vcpu->arch.pc = curr_pc;
+	vcpu->arch.epc = curr_pc;
       done:
 	return er;
 }
@@ -1120,7 +1120,7 @@ kvm_mips_emulate_syscall(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1128,13 +1128,13 @@ kvm_mips_emulate_syscall(unsigned long cause, uint32_t *opc,
 		else
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
-		kvm_debug("Delivering SYSCALL @ pc %#lx\n", arch->pc);
+		kvm_debug("Delivering SYSCALL @ pc %#lx\n", arch->epc);
 
 		kvm_change_c0_guest_cause(cop0, (0xff),
 					  (T_SYSCALL << CAUSEB_EXCCODE));
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 	} else {
 		printk("Trying to deliver SYSCALL when EXL is already set\n");
@@ -1156,7 +1156,7 @@ kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1165,16 +1165,16 @@ kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
 		kvm_debug("[EXL == 0] delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
 		/* set pc to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x0;
+		arch->epc = KVM_GUEST_KSEG0 + 0x0;
 
 	} else {
 		kvm_debug("[EXL == 1] delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	}
 
 	kvm_change_c0_guest_cause(cop0, (0xff),
@@ -1203,7 +1203,7 @@ kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1212,15 +1212,15 @@ kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
 		kvm_debug("[EXL == 0] delivering TLB INV @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
 		/* set pc to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 	} else {
 		kvm_debug("[EXL == 1] delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+			  arch->epc);
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	}
 
 	kvm_change_c0_guest_cause(cop0, (0xff),
@@ -1248,7 +1248,7 @@ kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1257,14 +1257,14 @@ kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
 		kvm_debug("[EXL == 0] Delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x0;
+		arch->epc = KVM_GUEST_KSEG0 + 0x0;
 	} else {
 		kvm_debug("[EXL == 1] Delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+			  arch->epc);
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	}
 
 	kvm_change_c0_guest_cause(cop0, (0xff),
@@ -1292,7 +1292,7 @@ kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1301,14 +1301,14 @@ kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
 		kvm_debug("[EXL == 0] Delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	} else {
 		kvm_debug("[EXL == 1] Delivering TLB MISS @ pc %#lx\n",
-			  arch->pc);
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+			  arch->epc);
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	}
 
 	kvm_change_c0_guest_cause(cop0, (0xff),
@@ -1363,7 +1363,7 @@ kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1372,13 +1372,13 @@ kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
 		kvm_debug("[EXL == 0] Delivering TLB MOD @ pc %#lx\n",
-			  arch->pc);
+			  arch->epc);
 
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	} else {
 		kvm_debug("[EXL == 1] Delivering TLB MOD @ pc %#lx\n",
-			  arch->pc);
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+			  arch->epc);
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 	}
 
 	kvm_change_c0_guest_cause(cop0, (0xff), (T_TLB_MOD << CAUSEB_EXCCODE));
@@ -1403,7 +1403,7 @@ kvm_mips_emulate_fpu_exc(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1413,7 +1413,7 @@ kvm_mips_emulate_fpu_exc(unsigned long cause, uint32_t *opc,
 
 	}
 
-	arch->pc = KVM_GUEST_KSEG0 + 0x180;
+	arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 	kvm_change_c0_guest_cause(cop0, (0xff),
 				  (T_COP_UNUSABLE << CAUSEB_EXCCODE));
@@ -1432,7 +1432,7 @@ kvm_mips_emulate_ri_exc(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1440,13 +1440,13 @@ kvm_mips_emulate_ri_exc(unsigned long cause, uint32_t *opc,
 		else
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
-		kvm_debug("Delivering RI @ pc %#lx\n", arch->pc);
+		kvm_debug("Delivering RI @ pc %#lx\n", arch->epc);
 
 		kvm_change_c0_guest_cause(cop0, (0xff),
 					  (T_RES_INST << CAUSEB_EXCCODE));
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 	} else {
 		kvm_err("Trying to deliver RI when EXL is already set\n");
@@ -1466,7 +1466,7 @@ kvm_mips_emulate_bp_exc(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1474,13 +1474,13 @@ kvm_mips_emulate_bp_exc(unsigned long cause, uint32_t *opc,
 		else
 			kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
-		kvm_debug("Delivering BP @ pc %#lx\n", arch->pc);
+		kvm_debug("Delivering BP @ pc %#lx\n", arch->epc);
 
 		kvm_change_c0_guest_cause(cop0, (0xff),
 					  (T_BREAK << CAUSEB_EXCCODE));
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 	} else {
 		printk("Trying to deliver BP when EXL is already set\n");
@@ -1521,7 +1521,7 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 	 * Update PC and hold onto current PC in case there is
 	 * an error and we want to rollback the PC
 	 */
-	curr_pc = vcpu->arch.pc;
+	curr_pc = vcpu->arch.epc;
 	er = update_pc(vcpu, cause);
 	if (er == EMULATE_FAIL)
 		return er;
@@ -1587,7 +1587,7 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 	 * Rollback PC only if emulation was unsuccessful
 	 */
 	if (er == EMULATE_FAIL) {
-		vcpu->arch.pc = curr_pc;
+		vcpu->arch.epc = curr_pc;
 	}
 	return er;
 }
@@ -1609,7 +1609,7 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	 * Update PC and hold onto current PC in case there is
 	 * an error and we want to rollback the PC
 	 */
-	curr_pc = vcpu->arch.pc;
+	curr_pc = vcpu->arch.epc;
 	er = update_pc(vcpu, vcpu->arch.pending_load_cause);
 	if (er == EMULATE_FAIL)
 		return er;
@@ -1637,7 +1637,7 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	if (vcpu->arch.pending_load_cause & CAUSEF_BD)
 		kvm_debug
 		    ("[%#lx] Completing %d byte BD Load to gpr %d (0x%08lx) type %d\n",
-		     vcpu->arch.pc, run->mmio.len, vcpu->arch.io_gpr, *gpr,
+		     vcpu->arch.epc, run->mmio.len, vcpu->arch.io_gpr, *gpr,
 		     vcpu->mmio_needed);
 
 done:
@@ -1655,7 +1655,7 @@ kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
-		kvm_write_c0_guest_epc(cop0, arch->pc);
+		kvm_write_c0_guest_epc(cop0, arch->epc);
 		kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 		if (cause & CAUSEF_BD)
@@ -1667,7 +1667,7 @@ kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
 					  (exccode << CAUSEB_EXCCODE));
 
 		/* Set PC to the exception entry point */
-		arch->pc = KVM_GUEST_KSEG0 + 0x180;
+		arch->epc = KVM_GUEST_KSEG0 + 0x180;
 		kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
 
 		kvm_debug("Delivering EXC %d @ pc %#lx, badVaddr: %#lx\n",
diff --git a/arch/mips/kvm/kvm_mips_int.c b/arch/mips/kvm/kvm_mips_int.c
index 1e5de16..c1ba08b 100644
--- a/arch/mips/kvm/kvm_mips_int.c
+++ b/arch/mips/kvm/kvm_mips_int.c
@@ -167,7 +167,7 @@ kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 
 		if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 			/* save old pc */
-			kvm_write_c0_guest_epc(cop0, arch->pc);
+			kvm_write_c0_guest_epc(cop0, arch->epc);
 			kvm_set_c0_guest_status(cop0, ST0_EXL);
 
 			if (cause & CAUSEF_BD)
@@ -175,7 +175,7 @@ kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 			else
 				kvm_clear_c0_guest_cause(cop0, CAUSEF_BD);
 
-			kvm_debug("Delivering INT @ pc %#lx\n", arch->pc);
+			kvm_debug("Delivering INT @ pc %#lx\n", arch->epc);
 
 		} else
 			kvm_err("Trying to deliver interrupt when EXL is already set\n");
@@ -185,9 +185,9 @@ kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 
 		/* XXXSL Set PC to the interrupt exception entry point */
 		if (kvm_read_c0_guest_cause(cop0) & CAUSEF_IV)
-			arch->pc = KVM_GUEST_KSEG0 + 0x200;
+			arch->epc = KVM_GUEST_KSEG0 + 0x200;
 		else
-			arch->pc = KVM_GUEST_KSEG0 + 0x180;
+			arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
 		clear_bit(priority, &vcpu->arch.pending_exceptions);
 	}
diff --git a/arch/mips/kvm/kvm_trap_emul.c b/arch/mips/kvm/kvm_trap_emul.c
index 30d7253..8d0ab12 100644
--- a/arch/mips/kvm/kvm_trap_emul.c
+++ b/arch/mips/kvm/kvm_trap_emul.c
@@ -43,7 +43,7 @@ static gpa_t kvm_trap_emul_gva_to_gpa_cb(gva_t gva)
 static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
@@ -77,7 +77,7 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -124,7 +124,7 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -174,7 +174,7 @@ static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -190,7 +190,7 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 		   || KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG23) {
 #ifdef DEBUG
 		kvm_debug("USER ADDR TLB ST fault: PC: %#lx, BadVaddr: %#lx\n",
-			  vcpu->arch.pc, badvaddr);
+			  vcpu->arch.epc, badvaddr);
 #endif
 
 		/* User Address (UA) fault, this could happen if
@@ -228,7 +228,7 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -261,7 +261,7 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
@@ -294,7 +294,7 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
@@ -312,7 +312,7 @@ static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
@@ -330,7 +330,7 @@ static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
 	unsigned long cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
-- 
1.7.11.7



* [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (5 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:18   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S David Daney
                   ` (27 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

This makes it follow the pattern where the structure name is the
symbol name prefix.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kernel/asm-offsets.c |  68 +++++++-------
 arch/mips/kvm/kvm_locore.S     | 206 ++++++++++++++++++++---------------------
 2 files changed, 137 insertions(+), 137 deletions(-)

diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 22bf8f5..a0aa12c 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -351,40 +351,40 @@ void output_kvm_defines(void)
 
 	OFFSET(VCPU_GUEST_INST, kvm_vcpu_arch, guest_inst);
 
-	OFFSET(VCPU_R0, kvm_vcpu_arch, gprs[0]);
-	OFFSET(VCPU_R1, kvm_vcpu_arch, gprs[1]);
-	OFFSET(VCPU_R2, kvm_vcpu_arch, gprs[2]);
-	OFFSET(VCPU_R3, kvm_vcpu_arch, gprs[3]);
-	OFFSET(VCPU_R4, kvm_vcpu_arch, gprs[4]);
-	OFFSET(VCPU_R5, kvm_vcpu_arch, gprs[5]);
-	OFFSET(VCPU_R6, kvm_vcpu_arch, gprs[6]);
-	OFFSET(VCPU_R7, kvm_vcpu_arch, gprs[7]);
-	OFFSET(VCPU_R8, kvm_vcpu_arch, gprs[8]);
-	OFFSET(VCPU_R9, kvm_vcpu_arch, gprs[9]);
-	OFFSET(VCPU_R10, kvm_vcpu_arch, gprs[10]);
-	OFFSET(VCPU_R11, kvm_vcpu_arch, gprs[11]);
-	OFFSET(VCPU_R12, kvm_vcpu_arch, gprs[12]);
-	OFFSET(VCPU_R13, kvm_vcpu_arch, gprs[13]);
-	OFFSET(VCPU_R14, kvm_vcpu_arch, gprs[14]);
-	OFFSET(VCPU_R15, kvm_vcpu_arch, gprs[15]);
-	OFFSET(VCPU_R16, kvm_vcpu_arch, gprs[16]);
-	OFFSET(VCPU_R17, kvm_vcpu_arch, gprs[17]);
-	OFFSET(VCPU_R18, kvm_vcpu_arch, gprs[18]);
-	OFFSET(VCPU_R19, kvm_vcpu_arch, gprs[19]);
-	OFFSET(VCPU_R20, kvm_vcpu_arch, gprs[20]);
-	OFFSET(VCPU_R21, kvm_vcpu_arch, gprs[21]);
-	OFFSET(VCPU_R22, kvm_vcpu_arch, gprs[22]);
-	OFFSET(VCPU_R23, kvm_vcpu_arch, gprs[23]);
-	OFFSET(VCPU_R24, kvm_vcpu_arch, gprs[24]);
-	OFFSET(VCPU_R25, kvm_vcpu_arch, gprs[25]);
-	OFFSET(VCPU_R26, kvm_vcpu_arch, gprs[26]);
-	OFFSET(VCPU_R27, kvm_vcpu_arch, gprs[27]);
-	OFFSET(VCPU_R28, kvm_vcpu_arch, gprs[28]);
-	OFFSET(VCPU_R29, kvm_vcpu_arch, gprs[29]);
-	OFFSET(VCPU_R30, kvm_vcpu_arch, gprs[30]);
-	OFFSET(VCPU_R31, kvm_vcpu_arch, gprs[31]);
-	OFFSET(VCPU_LO, kvm_vcpu_arch, lo);
-	OFFSET(VCPU_HI, kvm_vcpu_arch, hi);
+	OFFSET(KVM_VCPU_ARCH_R0, kvm_vcpu_arch, gprs[0]);
+	OFFSET(KVM_VCPU_ARCH_R1, kvm_vcpu_arch, gprs[1]);
+	OFFSET(KVM_VCPU_ARCH_R2, kvm_vcpu_arch, gprs[2]);
+	OFFSET(KVM_VCPU_ARCH_R3, kvm_vcpu_arch, gprs[3]);
+	OFFSET(KVM_VCPU_ARCH_R4, kvm_vcpu_arch, gprs[4]);
+	OFFSET(KVM_VCPU_ARCH_R5, kvm_vcpu_arch, gprs[5]);
+	OFFSET(KVM_VCPU_ARCH_R6, kvm_vcpu_arch, gprs[6]);
+	OFFSET(KVM_VCPU_ARCH_R7, kvm_vcpu_arch, gprs[7]);
+	OFFSET(KVM_VCPU_ARCH_R8, kvm_vcpu_arch, gprs[8]);
+	OFFSET(KVM_VCPU_ARCH_R9, kvm_vcpu_arch, gprs[9]);
+	OFFSET(KVM_VCPU_ARCH_R10, kvm_vcpu_arch, gprs[10]);
+	OFFSET(KVM_VCPU_ARCH_R11, kvm_vcpu_arch, gprs[11]);
+	OFFSET(KVM_VCPU_ARCH_R12, kvm_vcpu_arch, gprs[12]);
+	OFFSET(KVM_VCPU_ARCH_R13, kvm_vcpu_arch, gprs[13]);
+	OFFSET(KVM_VCPU_ARCH_R14, kvm_vcpu_arch, gprs[14]);
+	OFFSET(KVM_VCPU_ARCH_R15, kvm_vcpu_arch, gprs[15]);
+	OFFSET(KVM_VCPU_ARCH_R16, kvm_vcpu_arch, gprs[16]);
+	OFFSET(KVM_VCPU_ARCH_R17, kvm_vcpu_arch, gprs[17]);
+	OFFSET(KVM_VCPU_ARCH_R18, kvm_vcpu_arch, gprs[18]);
+	OFFSET(KVM_VCPU_ARCH_R19, kvm_vcpu_arch, gprs[19]);
+	OFFSET(KVM_VCPU_ARCH_R20, kvm_vcpu_arch, gprs[20]);
+	OFFSET(KVM_VCPU_ARCH_R21, kvm_vcpu_arch, gprs[21]);
+	OFFSET(KVM_VCPU_ARCH_R22, kvm_vcpu_arch, gprs[22]);
+	OFFSET(KVM_VCPU_ARCH_R23, kvm_vcpu_arch, gprs[23]);
+	OFFSET(KVM_VCPU_ARCH_R24, kvm_vcpu_arch, gprs[24]);
+	OFFSET(KVM_VCPU_ARCH_R25, kvm_vcpu_arch, gprs[25]);
+	OFFSET(KVM_VCPU_ARCH_R26, kvm_vcpu_arch, gprs[26]);
+	OFFSET(KVM_VCPU_ARCH_R27, kvm_vcpu_arch, gprs[27]);
+	OFFSET(KVM_VCPU_ARCH_R28, kvm_vcpu_arch, gprs[28]);
+	OFFSET(KVM_VCPU_ARCH_R29, kvm_vcpu_arch, gprs[29]);
+	OFFSET(KVM_VCPU_ARCH_R30, kvm_vcpu_arch, gprs[30]);
+	OFFSET(KVM_VCPU_ARCH_R31, kvm_vcpu_arch, gprs[31]);
+	OFFSET(KVM_VCPU_ARCH_LO, kvm_vcpu_arch, lo);
+	OFFSET(KVM_VCPU_ARCH_HI, kvm_vcpu_arch, hi);
 	OFFSET(KVM_VCPU_ARCH_EPC, kvm_vcpu_arch, epc);
 	OFFSET(VCPU_COP0, kvm_vcpu_arch, cop0);
 	OFFSET(VCPU_GUEST_KERNEL_ASID, kvm_vcpu_arch, guest_kernel_asid);
diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index a434bbe..7a33ee7 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -175,52 +175,52 @@ FEXPORT(__kvm_mips_load_asid)
     mtc0    zero,  CP0_HWRENA
 
     /* Now load up the Guest Context from VCPU */
-    LONG_L     	$1, VCPU_R1(k1)
-    LONG_L     	$2, VCPU_R2(k1)
-    LONG_L     	$3, VCPU_R3(k1)
-
-    LONG_L     	$4, VCPU_R4(k1)
-    LONG_L     	$5, VCPU_R5(k1)
-    LONG_L     	$6, VCPU_R6(k1)
-    LONG_L     	$7, VCPU_R7(k1)
-
-    LONG_L     	$8,  VCPU_R8(k1)
-    LONG_L     	$9,  VCPU_R9(k1)
-    LONG_L     	$10, VCPU_R10(k1)
-    LONG_L     	$11, VCPU_R11(k1)
-    LONG_L     	$12, VCPU_R12(k1)
-    LONG_L     	$13, VCPU_R13(k1)
-    LONG_L     	$14, VCPU_R14(k1)
-    LONG_L     	$15, VCPU_R15(k1)
-    LONG_L     	$16, VCPU_R16(k1)
-    LONG_L     	$17, VCPU_R17(k1)
-    LONG_L     	$18, VCPU_R18(k1)
-    LONG_L     	$19, VCPU_R19(k1)
-    LONG_L     	$20, VCPU_R20(k1)
-    LONG_L     	$21, VCPU_R21(k1)
-    LONG_L     	$22, VCPU_R22(k1)
-    LONG_L     	$23, VCPU_R23(k1)
-    LONG_L     	$24, VCPU_R24(k1)
-    LONG_L     	$25, VCPU_R25(k1)
+    LONG_L     	$1, KVM_VCPU_ARCH_R1(k1)
+    LONG_L     	$2, KVM_VCPU_ARCH_R2(k1)
+    LONG_L     	$3, KVM_VCPU_ARCH_R3(k1)
+
+    LONG_L     	$4, KVM_VCPU_ARCH_R4(k1)
+    LONG_L     	$5, KVM_VCPU_ARCH_R5(k1)
+    LONG_L     	$6, KVM_VCPU_ARCH_R6(k1)
+    LONG_L     	$7, KVM_VCPU_ARCH_R7(k1)
+
+    LONG_L     	$8,  KVM_VCPU_ARCH_R8(k1)
+    LONG_L     	$9,  KVM_VCPU_ARCH_R9(k1)
+    LONG_L     	$10, KVM_VCPU_ARCH_R10(k1)
+    LONG_L     	$11, KVM_VCPU_ARCH_R11(k1)
+    LONG_L     	$12, KVM_VCPU_ARCH_R12(k1)
+    LONG_L     	$13, KVM_VCPU_ARCH_R13(k1)
+    LONG_L     	$14, KVM_VCPU_ARCH_R14(k1)
+    LONG_L     	$15, KVM_VCPU_ARCH_R15(k1)
+    LONG_L     	$16, KVM_VCPU_ARCH_R16(k1)
+    LONG_L     	$17, KVM_VCPU_ARCH_R17(k1)
+    LONG_L     	$18, KVM_VCPU_ARCH_R18(k1)
+    LONG_L     	$19, KVM_VCPU_ARCH_R19(k1)
+    LONG_L     	$20, KVM_VCPU_ARCH_R20(k1)
+    LONG_L     	$21, KVM_VCPU_ARCH_R21(k1)
+    LONG_L     	$22, KVM_VCPU_ARCH_R22(k1)
+    LONG_L     	$23, KVM_VCPU_ARCH_R23(k1)
+    LONG_L     	$24, KVM_VCPU_ARCH_R24(k1)
+    LONG_L     	$25, KVM_VCPU_ARCH_R25(k1)
 
     /* k0/k1 loaded up later */
 
-    LONG_L     	$28, VCPU_R28(k1)
-    LONG_L     	$29, VCPU_R29(k1)
-    LONG_L     	$30, VCPU_R30(k1)
-    LONG_L     	$31, VCPU_R31(k1)
+    LONG_L     	$28, KVM_VCPU_ARCH_R28(k1)
+    LONG_L     	$29, KVM_VCPU_ARCH_R29(k1)
+    LONG_L     	$30, KVM_VCPU_ARCH_R30(k1)
+    LONG_L     	$31, KVM_VCPU_ARCH_R31(k1)
 
     /* Restore hi/lo */
-	LONG_L		k0, VCPU_LO(k1)
+	LONG_L		k0, KVM_VCPU_ARCH_LO(k1)
 	mtlo		k0
 
-	LONG_L		k0, VCPU_HI(k1)
+	LONG_L		k0, KVM_VCPU_ARCH_HI(k1)
 	mthi   		k0
 
 FEXPORT(__kvm_mips_load_k0k1)
 	/* Restore the guest's k0/k1 registers */
-    LONG_L     	k0, VCPU_R26(k1)
-    LONG_L     	k1, VCPU_R27(k1)
+    LONG_L     	k0, KVM_VCPU_ARCH_R26(k1)
+    LONG_L     	k1, KVM_VCPU_ARCH_R27(k1)
 
     /* Jump to guest */
 	eret
@@ -262,59 +262,59 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	addiu		k1, k1, VCPU_HOST_ARCH
 
     /* Start saving Guest context to VCPU */
-    LONG_S  $0, VCPU_R0(k1)
-    LONG_S  $1, VCPU_R1(k1)
-    LONG_S  $2, VCPU_R2(k1)
-    LONG_S  $3, VCPU_R3(k1)
-    LONG_S  $4, VCPU_R4(k1)
-    LONG_S  $5, VCPU_R5(k1)
-    LONG_S  $6, VCPU_R6(k1)
-    LONG_S  $7, VCPU_R7(k1)
-    LONG_S  $8, VCPU_R8(k1)
-    LONG_S  $9, VCPU_R9(k1)
-    LONG_S  $10, VCPU_R10(k1)
-    LONG_S  $11, VCPU_R11(k1)
-    LONG_S  $12, VCPU_R12(k1)
-    LONG_S  $13, VCPU_R13(k1)
-    LONG_S  $14, VCPU_R14(k1)
-    LONG_S  $15, VCPU_R15(k1)
-    LONG_S  $16, VCPU_R16(k1)
-    LONG_S  $17,VCPU_R17(k1)
-    LONG_S  $18, VCPU_R18(k1)
-    LONG_S  $19, VCPU_R19(k1)
-    LONG_S  $20, VCPU_R20(k1)
-    LONG_S  $21, VCPU_R21(k1)
-    LONG_S  $22, VCPU_R22(k1)
-    LONG_S  $23, VCPU_R23(k1)
-    LONG_S  $24, VCPU_R24(k1)
-    LONG_S  $25, VCPU_R25(k1)
+    LONG_S  $0, KVM_VCPU_ARCH_R0(k1)
+    LONG_S  $1, KVM_VCPU_ARCH_R1(k1)
+    LONG_S  $2, KVM_VCPU_ARCH_R2(k1)
+    LONG_S  $3, KVM_VCPU_ARCH_R3(k1)
+    LONG_S  $4, KVM_VCPU_ARCH_R4(k1)
+    LONG_S  $5, KVM_VCPU_ARCH_R5(k1)
+    LONG_S  $6, KVM_VCPU_ARCH_R6(k1)
+    LONG_S  $7, KVM_VCPU_ARCH_R7(k1)
+    LONG_S  $8, KVM_VCPU_ARCH_R8(k1)
+    LONG_S  $9, KVM_VCPU_ARCH_R9(k1)
+    LONG_S  $10, KVM_VCPU_ARCH_R10(k1)
+    LONG_S  $11, KVM_VCPU_ARCH_R11(k1)
+    LONG_S  $12, KVM_VCPU_ARCH_R12(k1)
+    LONG_S  $13, KVM_VCPU_ARCH_R13(k1)
+    LONG_S  $14, KVM_VCPU_ARCH_R14(k1)
+    LONG_S  $15, KVM_VCPU_ARCH_R15(k1)
+    LONG_S  $16, KVM_VCPU_ARCH_R16(k1)
+    LONG_S  $17, KVM_VCPU_ARCH_R17(k1)
+    LONG_S  $18, KVM_VCPU_ARCH_R18(k1)
+    LONG_S  $19, KVM_VCPU_ARCH_R19(k1)
+    LONG_S  $20, KVM_VCPU_ARCH_R20(k1)
+    LONG_S  $21, KVM_VCPU_ARCH_R21(k1)
+    LONG_S  $22, KVM_VCPU_ARCH_R22(k1)
+    LONG_S  $23, KVM_VCPU_ARCH_R23(k1)
+    LONG_S  $24, KVM_VCPU_ARCH_R24(k1)
+    LONG_S  $25, KVM_VCPU_ARCH_R25(k1)
 
     /* Guest k0/k1 saved later */
 
-    LONG_S  $28, VCPU_R28(k1)
-    LONG_S  $29, VCPU_R29(k1)
-    LONG_S  $30, VCPU_R30(k1)
-    LONG_S  $31, VCPU_R31(k1)
+    LONG_S  $28, KVM_VCPU_ARCH_R28(k1)
+    LONG_S  $29, KVM_VCPU_ARCH_R29(k1)
+    LONG_S  $30, KVM_VCPU_ARCH_R30(k1)
+    LONG_S  $31, KVM_VCPU_ARCH_R31(k1)
 
     /* We need to save hi/lo and restore them on
      * the way out
      */
     mfhi    t0
-    LONG_S  t0, VCPU_HI(k1)
+    LONG_S  t0, KVM_VCPU_ARCH_HI(k1)
 
     mflo    t0
-    LONG_S  t0, VCPU_LO(k1)
+    LONG_S  t0, KVM_VCPU_ARCH_LO(k1)
 
     /* Finally save guest k0/k1 to VCPU */
     mfc0    t0, CP0_ERROREPC
-    LONG_S  t0, VCPU_R26(k1)
+    LONG_S  t0, KVM_VCPU_ARCH_R26(k1)
 
     /* Get GUEST k1 and save it in VCPU */
 	PTR_LI	t1, ~0x2ff
     mfc0    t0, CP0_EBASE
     and     t0, t0, t1
     LONG_L  t0, 0x3000(t0)
-    LONG_S  t0, VCPU_R27(k1)
+    LONG_S  t0, KVM_VCPU_ARCH_R27(k1)
 
     /* Now that context has been saved, we can use other registers */
 
@@ -461,48 +461,48 @@ __kvm_mips_return_to_guest:
     mtc0    zero,  CP0_HWRENA
 
     /* load the guest context from VCPU and return */
-    LONG_L  $0, VCPU_R0(k1)
-    LONG_L  $1, VCPU_R1(k1)
-    LONG_L  $2, VCPU_R2(k1)
-    LONG_L  $3, VCPU_R3(k1)
-    LONG_L  $4, VCPU_R4(k1)
-    LONG_L  $5, VCPU_R5(k1)
-    LONG_L  $6, VCPU_R6(k1)
-    LONG_L  $7, VCPU_R7(k1)
-    LONG_L  $8, VCPU_R8(k1)
-    LONG_L  $9, VCPU_R9(k1)
-    LONG_L  $10, VCPU_R10(k1)
-    LONG_L  $11, VCPU_R11(k1)
-    LONG_L  $12, VCPU_R12(k1)
-    LONG_L  $13, VCPU_R13(k1)
-    LONG_L  $14, VCPU_R14(k1)
-    LONG_L  $15, VCPU_R15(k1)
-    LONG_L  $16, VCPU_R16(k1)
-    LONG_L  $17, VCPU_R17(k1)
-    LONG_L  $18, VCPU_R18(k1)
-    LONG_L  $19, VCPU_R19(k1)
-    LONG_L  $20, VCPU_R20(k1)
-    LONG_L  $21, VCPU_R21(k1)
-    LONG_L  $22, VCPU_R22(k1)
-    LONG_L  $23, VCPU_R23(k1)
-    LONG_L  $24, VCPU_R24(k1)
-    LONG_L  $25, VCPU_R25(k1)
+    LONG_L  $0, KVM_VCPU_ARCH_R0(k1)
+    LONG_L  $1, KVM_VCPU_ARCH_R1(k1)
+    LONG_L  $2, KVM_VCPU_ARCH_R2(k1)
+    LONG_L  $3, KVM_VCPU_ARCH_R3(k1)
+    LONG_L  $4, KVM_VCPU_ARCH_R4(k1)
+    LONG_L  $5, KVM_VCPU_ARCH_R5(k1)
+    LONG_L  $6, KVM_VCPU_ARCH_R6(k1)
+    LONG_L  $7, KVM_VCPU_ARCH_R7(k1)
+    LONG_L  $8, KVM_VCPU_ARCH_R8(k1)
+    LONG_L  $9, KVM_VCPU_ARCH_R9(k1)
+    LONG_L  $10, KVM_VCPU_ARCH_R10(k1)
+    LONG_L  $11, KVM_VCPU_ARCH_R11(k1)
+    LONG_L  $12, KVM_VCPU_ARCH_R12(k1)
+    LONG_L  $13, KVM_VCPU_ARCH_R13(k1)
+    LONG_L  $14, KVM_VCPU_ARCH_R14(k1)
+    LONG_L  $15, KVM_VCPU_ARCH_R15(k1)
+    LONG_L  $16, KVM_VCPU_ARCH_R16(k1)
+    LONG_L  $17, KVM_VCPU_ARCH_R17(k1)
+    LONG_L  $18, KVM_VCPU_ARCH_R18(k1)
+    LONG_L  $19, KVM_VCPU_ARCH_R19(k1)
+    LONG_L  $20, KVM_VCPU_ARCH_R20(k1)
+    LONG_L  $21, KVM_VCPU_ARCH_R21(k1)
+    LONG_L  $22, KVM_VCPU_ARCH_R22(k1)
+    LONG_L  $23, KVM_VCPU_ARCH_R23(k1)
+    LONG_L  $24, KVM_VCPU_ARCH_R24(k1)
+    LONG_L  $25, KVM_VCPU_ARCH_R25(k1)
 
     /* $/k1 loaded later */
-    LONG_L  $28, VCPU_R28(k1)
-    LONG_L  $29, VCPU_R29(k1)
-    LONG_L  $30, VCPU_R30(k1)
-    LONG_L  $31, VCPU_R31(k1)
+    LONG_L  $28, KVM_VCPU_ARCH_R28(k1)
+    LONG_L  $29, KVM_VCPU_ARCH_R29(k1)
+    LONG_L  $30, KVM_VCPU_ARCH_R30(k1)
+    LONG_L  $31, KVM_VCPU_ARCH_R31(k1)
 
 FEXPORT(__kvm_mips_skip_guest_restore)
-    LONG_L  k0, VCPU_HI(k1)
+    LONG_L  k0, KVM_VCPU_ARCH_HI(k1)
     mthi    k0
 
-    LONG_L  k0, VCPU_LO(k1)
+    LONG_L  k0, KVM_VCPU_ARCH_LO(k1)
     mtlo    k0
 
-    LONG_L  k0, VCPU_R26(k1)
-    LONG_L  k1, VCPU_R27(k1)
+    LONG_L  k0, KVM_VCPU_ARCH_R26(k1)
+    LONG_L  k1, KVM_VCPU_ARCH_R27(k1)
 
     eret
 
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (6 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:21   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation David Daney
                   ` (26 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

It was a completely inconsistent mix of spaces and tabs.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/kvm_locore.S | 921 +++++++++++++++++++++++----------------------
 1 file changed, 464 insertions(+), 457 deletions(-)

diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index 7a33ee7..7c2933a 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -1,13 +1,13 @@
 /*
-* This file is subject to the terms and conditions of the GNU General Public
-* License.  See the file "COPYING" in the main directory of this archive
-* for more details.
-*
-* Main entry point for the guest, exception handling.
-*
-* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
-* Authors: Sanjay Lal <sanjayl@kymasys.com>
-*/
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Main entry point for the guest, exception handling.
+ *
+ * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+ * Authors: Sanjay Lal <sanjayl@kymasys.com>
+ */
 
 #include <asm/asm.h>
 #include <asm/asmmacro.h>
@@ -57,172 +57,177 @@
  */
 
 FEXPORT(__kvm_mips_vcpu_run)
-    .set    push
-    .set    noreorder
-    .set    noat
-
-    /* k0/k1 not being used in host kernel context */
-	addiu  		k1,sp, -PT_SIZE
-    LONG_S	    $0, PT_R0(k1)
-    LONG_S     	$1, PT_R1(k1)
-    LONG_S     	$2, PT_R2(k1)
-    LONG_S     	$3, PT_R3(k1)
-
-    LONG_S     	$4, PT_R4(k1)
-    LONG_S     	$5, PT_R5(k1)
-    LONG_S     	$6, PT_R6(k1)
-    LONG_S     	$7, PT_R7(k1)
-
-    LONG_S     	$8,  PT_R8(k1)
-    LONG_S     	$9,  PT_R9(k1)
-    LONG_S     	$10, PT_R10(k1)
-    LONG_S     	$11, PT_R11(k1)
-    LONG_S     	$12, PT_R12(k1)
-    LONG_S     	$13, PT_R13(k1)
-    LONG_S     	$14, PT_R14(k1)
-    LONG_S     	$15, PT_R15(k1)
-    LONG_S     	$16, PT_R16(k1)
-    LONG_S     	$17, PT_R17(k1)
-
-    LONG_S     	$18, PT_R18(k1)
-    LONG_S     	$19, PT_R19(k1)
-    LONG_S     	$20, PT_R20(k1)
-    LONG_S     	$21, PT_R21(k1)
-    LONG_S     	$22, PT_R22(k1)
-    LONG_S     	$23, PT_R23(k1)
-    LONG_S     	$24, PT_R24(k1)
-    LONG_S     	$25, PT_R25(k1)
+	.set	push
+	.set	noreorder
+	.set	noat
+
+	/* k0/k1 not being used in host kernel context */
+	addiu	k1, sp, -PT_SIZE
+	LONG_S	$0, PT_R0(k1)
+	LONG_S	$1, PT_R1(k1)
+	LONG_S	$2, PT_R2(k1)
+	LONG_S	$3, PT_R3(k1)
+
+	LONG_S	$4, PT_R4(k1)
+	LONG_S	$5, PT_R5(k1)
+	LONG_S	$6, PT_R6(k1)
+	LONG_S	$7, PT_R7(k1)
+
+	LONG_S	$8,  PT_R8(k1)
+	LONG_S	$9,  PT_R9(k1)
+	LONG_S	$10, PT_R10(k1)
+	LONG_S	$11, PT_R11(k1)
+	LONG_S	$12, PT_R12(k1)
+	LONG_S	$13, PT_R13(k1)
+	LONG_S	$14, PT_R14(k1)
+	LONG_S	$15, PT_R15(k1)
+	LONG_S	$16, PT_R16(k1)
+	LONG_S	$17, PT_R17(k1)
+
+	LONG_S	$18, PT_R18(k1)
+	LONG_S	$19, PT_R19(k1)
+	LONG_S	$20, PT_R20(k1)
+	LONG_S	$21, PT_R21(k1)
+	LONG_S	$22, PT_R22(k1)
+	LONG_S	$23, PT_R23(k1)
+	LONG_S	$24, PT_R24(k1)
+	LONG_S	$25, PT_R25(k1)
 
 	/* XXXKYMA k0/k1 not saved, not being used if we got here through an ioctl() */
 
-    LONG_S     	$28, PT_R28(k1)
-    LONG_S     	$29, PT_R29(k1)
-    LONG_S     	$30, PT_R30(k1)
-    LONG_S     	$31, PT_R31(k1)
+	LONG_S	$28, PT_R28(k1)
+	LONG_S	$29, PT_R29(k1)
+	LONG_S	$30, PT_R30(k1)
+	LONG_S	$31, PT_R31(k1)
 
-    /* Save hi/lo */
-	mflo		v0
-	LONG_S		v0, PT_LO(k1)
-	mfhi   		v1
-	LONG_S		v1, PT_HI(k1)
+	/* Save hi/lo */
+	mflo	v0
+	LONG_S	v0, PT_LO(k1)
+	mfhi	v1
+	LONG_S	v1, PT_HI(k1)
 
 	/* Save host status */
-	mfc0		v0, CP0_STATUS
-	LONG_S		v0, PT_STATUS(k1)
+	mfc0	v0, CP0_STATUS
+	LONG_S	v0, PT_STATUS(k1)
 
 	/* Save host ASID, shove it into the BVADDR location */
-	mfc0 		v1,CP0_ENTRYHI
-	andi		v1, 0xff
-	LONG_S		v1, PT_HOST_ASID(k1)
+	mfc0	v1, CP0_ENTRYHI
+	andi	v1, 0xff
+	LONG_S	v1, PT_HOST_ASID(k1)
 
-    /* Save DDATA_LO, will be used to store pointer to vcpu */
-    mfc0        v1, CP0_DDATA_LO
-    LONG_S      v1, PT_HOST_USERLOCAL(k1)
+	/* Save DDATA_LO, will be used to store pointer to vcpu */
+	mfc0	v1, CP0_DDATA_LO
+	LONG_S	v1, PT_HOST_USERLOCAL(k1)
 
-    /* DDATA_LO has pointer to vcpu */
-    mtc0        a1,CP0_DDATA_LO
+	/* DDATA_LO has pointer to vcpu */
+	mtc0	a1, CP0_DDATA_LO
 
-    /* Offset into vcpu->arch */
-	addiu		k1, a1, VCPU_HOST_ARCH
+	/* Offset into vcpu->arch */
+	addiu	k1, a1, VCPU_HOST_ARCH
 
-    /* Save the host stack to VCPU, used for exception processing when we exit from the Guest */
-    LONG_S      sp, VCPU_HOST_STACK(k1)
+	/*
+	 * Save the host stack to VCPU, used for exception processing
+	 * when we exit from the Guest
+	 */
+	LONG_S	sp, VCPU_HOST_STACK(k1)
 
-    /* Save the kernel gp as well */
-    LONG_S      gp, VCPU_HOST_GP(k1)
+	/* Save the kernel gp as well */
+	LONG_S	gp, VCPU_HOST_GP(k1)
 
 	/* Setup status register for running the guest in UM, interrupts are disabled */
-	li			k0,(ST0_EXL | KSU_USER| ST0_BEV)
-	mtc0		k0,CP0_STATUS
-    ehb
-
-    /* load up the new EBASE */
-    LONG_L      k0, VCPU_GUEST_EBASE(k1)
-    mtc0        k0,CP0_EBASE
-
-    /* Now that the new EBASE has been loaded, unset BEV, set interrupt mask as it was
-     * but make sure that timer interrupts are enabled
-     */
-    li          k0,(ST0_EXL | KSU_USER | ST0_IE)
-    andi        v0, v0, ST0_IM
-    or          k0, k0, v0
-    mtc0        k0,CP0_STATUS
-    ehb
+	li	k0, (ST0_EXL | KSU_USER | ST0_BEV)
+	mtc0	k0, CP0_STATUS
+	ehb
+
+	/* load up the new EBASE */
+	LONG_L	k0, VCPU_GUEST_EBASE(k1)
+	mtc0	k0, CP0_EBASE
+
+	/*
+	 * Now that the new EBASE has been loaded, unset BEV, set
+	 * interrupt mask as it was but make sure that timer interrupts
+	 * are enabled
+	 */
+	li	k0, (ST0_EXL | KSU_USER | ST0_IE)
+	andi	v0, v0, ST0_IM
+	or	k0, k0, v0
+	mtc0	k0, CP0_STATUS
+	ehb
 
 
 	/* Set Guest EPC */
-	LONG_L		t0, KVM_VCPU_ARCH_EPC(k1)
-	mtc0		t0, CP0_EPC
+	LONG_L	t0, KVM_VCPU_ARCH_EPC(k1)
+	mtc0	t0, CP0_EPC
 
 FEXPORT(__kvm_mips_load_asid)
-    /* Set the ASID for the Guest Kernel */
-    sll         t0, t0, 1                       /* with kseg0 @ 0x40000000, kernel */
-                                                /* addresses shift to 0x80000000 */
-    bltz        t0, 1f                          /* If kernel */
-	addiu       t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-    addiu       t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	/* Set the ASID for the Guest Kernel */
+	sll	t0, t0, 1	/* with kseg0 @ 0x40000000, kernel */
+				/* addresses shift to 0x80000000 */
+	bltz	t0, 1f		/* If kernel */
+	 addiu	t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
+	addiu	t1, k1, VCPU_GUEST_USER_ASID    /* else user */
 1:
-    /* t1: contains the base of the ASID array, need to get the cpu id  */
-    LONG_L      t2, TI_CPU($28)             /* smp_processor_id */
-    sll         t2, t2, 2                   /* x4 */
-    addu        t3, t1, t2
-    LONG_L      k0, (t3)
-    andi        k0, k0, 0xff
-	mtc0		k0,CP0_ENTRYHI
-    ehb
-
-    /* Disable RDHWR access */
-    mtc0    zero,  CP0_HWRENA
-
-    /* Now load up the Guest Context from VCPU */
-    LONG_L     	$1, KVM_VCPU_ARCH_R1(k1)
-    LONG_L     	$2, KVM_VCPU_ARCH_R2(k1)
-    LONG_L     	$3, KVM_VCPU_ARCH_R3(k1)
-
-    LONG_L     	$4, KVM_VCPU_ARCH_R4(k1)
-    LONG_L     	$5, KVM_VCPU_ARCH_R5(k1)
-    LONG_L     	$6, KVM_VCPU_ARCH_R6(k1)
-    LONG_L     	$7, KVM_VCPU_ARCH_R7(k1)
-
-    LONG_L     	$8,  KVM_VCPU_ARCH_R8(k1)
-    LONG_L     	$9,  KVM_VCPU_ARCH_R9(k1)
-    LONG_L     	$10, KVM_VCPU_ARCH_R10(k1)
-    LONG_L     	$11, KVM_VCPU_ARCH_R11(k1)
-    LONG_L     	$12, KVM_VCPU_ARCH_R12(k1)
-    LONG_L     	$13, KVM_VCPU_ARCH_R13(k1)
-    LONG_L     	$14, KVM_VCPU_ARCH_R14(k1)
-    LONG_L     	$15, KVM_VCPU_ARCH_R15(k1)
-    LONG_L     	$16, KVM_VCPU_ARCH_R16(k1)
-    LONG_L     	$17, KVM_VCPU_ARCH_R17(k1)
-    LONG_L     	$18, KVM_VCPU_ARCH_R18(k1)
-    LONG_L     	$19, KVM_VCPU_ARCH_R19(k1)
-    LONG_L     	$20, KVM_VCPU_ARCH_R20(k1)
-    LONG_L     	$21, KVM_VCPU_ARCH_R21(k1)
-    LONG_L     	$22, KVM_VCPU_ARCH_R22(k1)
-    LONG_L     	$23, KVM_VCPU_ARCH_R23(k1)
-    LONG_L     	$24, KVM_VCPU_ARCH_R24(k1)
-    LONG_L     	$25, KVM_VCPU_ARCH_R25(k1)
-
-    /* k0/k1 loaded up later */
-
-    LONG_L     	$28, KVM_VCPU_ARCH_R28(k1)
-    LONG_L     	$29, KVM_VCPU_ARCH_R29(k1)
-    LONG_L     	$30, KVM_VCPU_ARCH_R30(k1)
-    LONG_L     	$31, KVM_VCPU_ARCH_R31(k1)
-
-    /* Restore hi/lo */
-	LONG_L		k0, KVM_VCPU_ARCH_LO(k1)
-	mtlo		k0
-
-	LONG_L		k0, KVM_VCPU_ARCH_HI(k1)
-	mthi   		k0
+	/* t1: contains the base of the ASID array, need to get the cpu id  */
+	LONG_L	t2, TI_CPU($28)             /* smp_processor_id */
+	sll	t2, t2, 2                   /* x4 */
+	addu	t3, t1, t2
+	LONG_L	k0, (t3)
+	andi	k0, k0, 0xff
+	mtc0	k0, CP0_ENTRYHI
+	ehb
+
+	/* Disable RDHWR access */
+	mtc0	zero, CP0_HWRENA
+
+	/* Now load up the Guest Context from VCPU */
+	LONG_L	$1, KVM_VCPU_ARCH_R1(k1)
+	LONG_L	$2, KVM_VCPU_ARCH_R2(k1)
+	LONG_L	$3, KVM_VCPU_ARCH_R3(k1)
+
+	LONG_L	$4, KVM_VCPU_ARCH_R4(k1)
+	LONG_L	$5, KVM_VCPU_ARCH_R5(k1)
+	LONG_L	$6, KVM_VCPU_ARCH_R6(k1)
+	LONG_L	$7, KVM_VCPU_ARCH_R7(k1)
+
+	LONG_L	$8, KVM_VCPU_ARCH_R8(k1)
+	LONG_L	$9, KVM_VCPU_ARCH_R9(k1)
+	LONG_L	$10, KVM_VCPU_ARCH_R10(k1)
+	LONG_L	$11, KVM_VCPU_ARCH_R11(k1)
+	LONG_L	$12, KVM_VCPU_ARCH_R12(k1)
+	LONG_L	$13, KVM_VCPU_ARCH_R13(k1)
+	LONG_L	$14, KVM_VCPU_ARCH_R14(k1)
+	LONG_L	$15, KVM_VCPU_ARCH_R15(k1)
+	LONG_L	$16, KVM_VCPU_ARCH_R16(k1)
+	LONG_L	$17, KVM_VCPU_ARCH_R17(k1)
+	LONG_L	$18, KVM_VCPU_ARCH_R18(k1)
+	LONG_L	$19, KVM_VCPU_ARCH_R19(k1)
+	LONG_L	$20, KVM_VCPU_ARCH_R20(k1)
+	LONG_L	$21, KVM_VCPU_ARCH_R21(k1)
+	LONG_L	$22, KVM_VCPU_ARCH_R22(k1)
+	LONG_L	$23, KVM_VCPU_ARCH_R23(k1)
+	LONG_L	$24, KVM_VCPU_ARCH_R24(k1)
+	LONG_L	$25, KVM_VCPU_ARCH_R25(k1)
+
+	/* k0/k1 loaded up later */
+
+	LONG_L	$28, KVM_VCPU_ARCH_R28(k1)
+	LONG_L	$29, KVM_VCPU_ARCH_R29(k1)
+	LONG_L	$30, KVM_VCPU_ARCH_R30(k1)
+	LONG_L	$31, KVM_VCPU_ARCH_R31(k1)
+
+	/* Restore hi/lo */
+	LONG_L	k0, KVM_VCPU_ARCH_LO(k1)
+	mtlo	k0
+
+	LONG_L	k0, KVM_VCPU_ARCH_HI(k1)
+	mthi	k0
 
 FEXPORT(__kvm_mips_load_k0k1)
 	/* Restore the guest's k0/k1 registers */
-    LONG_L     	k0, KVM_VCPU_ARCH_R26(k1)
-    LONG_L     	k1, KVM_VCPU_ARCH_R27(k1)
+	LONG_L	k0, KVM_VCPU_ARCH_R26(k1)
+	LONG_L	k1, KVM_VCPU_ARCH_R27(k1)
 
-    /* Jump to guest */
+	/* Jump to guest */
 	eret
 	.set	pop
 
@@ -230,19 +235,19 @@ VECTOR(MIPSX(exception), unknown)
 /*
  * Find out what mode we came from and jump to the proper handler.
  */
-    .set    push
+	.set	push
 	.set	noat
-    .set    noreorder
-    mtc0    k0, CP0_ERROREPC    #01: Save guest k0
-    ehb                         #02:
-
-    mfc0    k0, CP0_EBASE       #02: Get EBASE
-    srl     k0, k0, 10          #03: Get rid of CPUNum
-    sll     k0, k0, 10          #04
-    LONG_S  k1, 0x3000(k0)      #05: Save k1 @ offset 0x3000
-    addiu   k0, k0, 0x2000      #06: Exception handler is installed @ offset 0x2000
-	j	k0				        #07: jump to the function
-	nop				        	#08: branch delay slot
+	.set	noreorder
+	mtc0	k0, CP0_ERROREPC	#01: Save guest k0
+	ehb				#02:
+
+	mfc0	k0, CP0_EBASE		#02: Get EBASE
+	srl	k0, k0, 10		#03: Get rid of CPUNum
+	sll	k0, k0, 10		#04
+	LONG_S	k1, 0x3000(k0)		#05: Save k1 @ offset 0x3000
+	addiu	k0, k0, 0x2000		#06: Exception handler is installed @ offset 0x2000
+	j	k0			#07: jump to the function
+	 nop				#08: branch delay slot
 	.set	push
 VECTOR_END(MIPSX(exceptionEnd))
 .end MIPSX(exception)
@@ -253,327 +258,330 @@ VECTOR_END(MIPSX(exceptionEnd))
  *
  */
 NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
-    .set    push
-    .set    noat
-    .set    noreorder
-
-    /* Get the VCPU pointer from DDTATA_LO */
-    mfc0        k1, CP0_DDATA_LO
-	addiu		k1, k1, VCPU_HOST_ARCH
-
-    /* Start saving Guest context to VCPU */
-    LONG_S  $0, KVM_VCPU_ARCH_R0(k1)
-    LONG_S  $1, KVM_VCPU_ARCH_R1(k1)
-    LONG_S  $2, KVM_VCPU_ARCH_R2(k1)
-    LONG_S  $3, KVM_VCPU_ARCH_R3(k1)
-    LONG_S  $4, KVM_VCPU_ARCH_R4(k1)
-    LONG_S  $5, KVM_VCPU_ARCH_R5(k1)
-    LONG_S  $6, KVM_VCPU_ARCH_R6(k1)
-    LONG_S  $7, KVM_VCPU_ARCH_R7(k1)
-    LONG_S  $8, KVM_VCPU_ARCH_R8(k1)
-    LONG_S  $9, KVM_VCPU_ARCH_R9(k1)
-    LONG_S  $10, KVM_VCPU_ARCH_R10(k1)
-    LONG_S  $11, KVM_VCPU_ARCH_R11(k1)
-    LONG_S  $12, KVM_VCPU_ARCH_R12(k1)
-    LONG_S  $13, KVM_VCPU_ARCH_R13(k1)
-    LONG_S  $14, KVM_VCPU_ARCH_R14(k1)
-    LONG_S  $15, KVM_VCPU_ARCH_R15(k1)
-    LONG_S  $16, KVM_VCPU_ARCH_R16(k1)
-    LONG_S  $17, KVM_VCPU_ARCH_R17(k1)
-    LONG_S  $18, KVM_VCPU_ARCH_R18(k1)
-    LONG_S  $19, KVM_VCPU_ARCH_R19(k1)
-    LONG_S  $20, KVM_VCPU_ARCH_R20(k1)
-    LONG_S  $21, KVM_VCPU_ARCH_R21(k1)
-    LONG_S  $22, KVM_VCPU_ARCH_R22(k1)
-    LONG_S  $23, KVM_VCPU_ARCH_R23(k1)
-    LONG_S  $24, KVM_VCPU_ARCH_R24(k1)
-    LONG_S  $25, KVM_VCPU_ARCH_R25(k1)
-
-    /* Guest k0/k1 saved later */
-
-    LONG_S  $28, KVM_VCPU_ARCH_R28(k1)
-    LONG_S  $29, KVM_VCPU_ARCH_R29(k1)
-    LONG_S  $30, KVM_VCPU_ARCH_R30(k1)
-    LONG_S  $31, KVM_VCPU_ARCH_R31(k1)
-
-    /* We need to save hi/lo and restore them on
-     * the way out
-     */
-    mfhi    t0
-    LONG_S  t0, KVM_VCPU_ARCH_HI(k1)
-
-    mflo    t0
-    LONG_S  t0, KVM_VCPU_ARCH_LO(k1)
-
-    /* Finally save guest k0/k1 to VCPU */
-    mfc0    t0, CP0_ERROREPC
-    LONG_S  t0, KVM_VCPU_ARCH_R26(k1)
-
-    /* Get GUEST k1 and save it in VCPU */
+	.set	push
+	.set	noat
+	.set	noreorder
+
+	/* Get the VCPU pointer from DDATA_LO */
+	mfc0	k1, CP0_DDATA_LO
+	addiu	k1, k1, VCPU_HOST_ARCH
+
+	/* Start saving Guest context to VCPU */
+	LONG_S	$0, KVM_VCPU_ARCH_R0(k1)
+	LONG_S	$1, KVM_VCPU_ARCH_R1(k1)
+	LONG_S	$2, KVM_VCPU_ARCH_R2(k1)
+	LONG_S	$3, KVM_VCPU_ARCH_R3(k1)
+	LONG_S	$4, KVM_VCPU_ARCH_R4(k1)
+	LONG_S	$5, KVM_VCPU_ARCH_R5(k1)
+	LONG_S	$6, KVM_VCPU_ARCH_R6(k1)
+	LONG_S	$7, KVM_VCPU_ARCH_R7(k1)
+	LONG_S	$8, KVM_VCPU_ARCH_R8(k1)
+	LONG_S	$9, KVM_VCPU_ARCH_R9(k1)
+	LONG_S	$10, KVM_VCPU_ARCH_R10(k1)
+	LONG_S	$11, KVM_VCPU_ARCH_R11(k1)
+	LONG_S	$12, KVM_VCPU_ARCH_R12(k1)
+	LONG_S	$13, KVM_VCPU_ARCH_R13(k1)
+	LONG_S	$14, KVM_VCPU_ARCH_R14(k1)
+	LONG_S	$15, KVM_VCPU_ARCH_R15(k1)
+	LONG_S	$16, KVM_VCPU_ARCH_R16(k1)
+	LONG_S	$17, KVM_VCPU_ARCH_R17(k1)
+	LONG_S	$18, KVM_VCPU_ARCH_R18(k1)
+	LONG_S	$19, KVM_VCPU_ARCH_R19(k1)
+	LONG_S	$20, KVM_VCPU_ARCH_R20(k1)
+	LONG_S	$21, KVM_VCPU_ARCH_R21(k1)
+	LONG_S	$22, KVM_VCPU_ARCH_R22(k1)
+	LONG_S	$23, KVM_VCPU_ARCH_R23(k1)
+	LONG_S	$24, KVM_VCPU_ARCH_R24(k1)
+	LONG_S	$25, KVM_VCPU_ARCH_R25(k1)
+
+	/* Guest k0/k1 saved later */
+
+	LONG_S	$28, KVM_VCPU_ARCH_R28(k1)
+	LONG_S	$29, KVM_VCPU_ARCH_R29(k1)
+	LONG_S	$30, KVM_VCPU_ARCH_R30(k1)
+	LONG_S	$31, KVM_VCPU_ARCH_R31(k1)
+
+	/* We need to save hi/lo and restore them on
+	 * the way out
+	 */
+	mfhi	t0
+	LONG_S	t0, KVM_VCPU_ARCH_HI(k1)
+
+	mflo	t0
+	LONG_S	t0, KVM_VCPU_ARCH_LO(k1)
+
+	/* Finally save guest k0/k1 to VCPU */
+	mfc0	t0, CP0_ERROREPC
+	LONG_S	t0, KVM_VCPU_ARCH_R26(k1)
+
+	/* Get GUEST k1 and save it in VCPU */
 	PTR_LI	t1, ~0x2ff
-    mfc0    t0, CP0_EBASE
-    and     t0, t0, t1
-    LONG_L  t0, 0x3000(t0)
-    LONG_S  t0, KVM_VCPU_ARCH_R27(k1)
-
-    /* Now that context has been saved, we can use other registers */
+	mfc0	t0, CP0_EBASE
+	and	t0, t0, t1
+	LONG_L	t0, 0x3000(t0)
+	LONG_S	t0, KVM_VCPU_ARCH_R27(k1)
 
-    /* Restore vcpu */
-    mfc0        a1, CP0_DDATA_LO
-    move        s1, a1
+	/* Now that context has been saved, we can use other registers */
 
-   /* Restore run (vcpu->run) */
-    LONG_L      a0, VCPU_RUN(a1)
-    /* Save pointer to run in s0, will be saved by the compiler */
-    move        s0, a0
+	/* Restore vcpu */
+	mfc0	a1, CP0_DDATA_LO
+	move	s1, a1
 
+	/* Restore run (vcpu->run) */
+	LONG_L	a0, VCPU_RUN(a1)
+	/* Save pointer to run in s0, will be saved by the compiler */
+	move	s0, a0
 
-    /* Save Host level EPC, BadVaddr and Cause to VCPU, useful to process the exception */
-    mfc0    k0,CP0_EPC
-    LONG_S  k0, KVM_VCPU_ARCH_EPC(k1)
+	/* Save Host level EPC, BadVaddr and Cause to VCPU, useful to
+	 * process the exception */
+	mfc0	k0, CP0_EPC
+	LONG_S	k0, KVM_VCPU_ARCH_EPC(k1)
 
-    mfc0    k0, CP0_BADVADDR
-    LONG_S  k0, VCPU_HOST_CP0_BADVADDR(k1)
+	mfc0	k0, CP0_BADVADDR
+	LONG_S	k0, VCPU_HOST_CP0_BADVADDR(k1)
 
-    mfc0    k0, CP0_CAUSE
-    LONG_S  k0, VCPU_HOST_CP0_CAUSE(k1)
+	mfc0	k0, CP0_CAUSE
+	LONG_S	k0, VCPU_HOST_CP0_CAUSE(k1)
 
-    mfc0    k0, CP0_ENTRYHI
-    LONG_S  k0, VCPU_HOST_ENTRYHI(k1)
+	mfc0	k0, CP0_ENTRYHI
+	LONG_S	k0, VCPU_HOST_ENTRYHI(k1)
 
-    /* Now restore the host state just enough to run the handlers */
+	/* Now restore the host state just enough to run the handlers */
 
-    /* Swtich EBASE to the one used by Linux */
-    /* load up the host EBASE */
-    mfc0        v0, CP0_STATUS
+	/* Switch EBASE to the one used by Linux */
+	/* load up the host EBASE */
+	mfc0	v0, CP0_STATUS
 
-    .set at
-	or          k0, v0, ST0_BEV
-    .set noat
+	.set	at
+	or	k0, v0, ST0_BEV
+	.set	noat
 
-    mtc0        k0, CP0_STATUS
-    ehb
+	mtc0	k0, CP0_STATUS
+	ehb
 
-    LONG_L      k0, VCPU_HOST_EBASE(k1)
-    mtc0        k0,CP0_EBASE
+	LONG_L	k0, VCPU_HOST_EBASE(k1)
+	mtc0	k0, CP0_EBASE
 
 
-    /* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
-    .set at
-	and         v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
-    or          v0, v0, ST0_CU0
-    .set noat
-    mtc0        v0, CP0_STATUS
-    ehb
+	/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
+	.set	at
+	and	v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
+	or	v0, v0, ST0_CU0
+	.set	noat
+	mtc0	v0, CP0_STATUS
+	ehb
 
-    /* Load up host GP */
-    LONG_L  gp, VCPU_HOST_GP(k1)
+	/* Load up host GP */
+	LONG_L	gp, VCPU_HOST_GP(k1)
 
-    /* Need a stack before we can jump to "C" */
-    LONG_L  sp, VCPU_HOST_STACK(k1)
+	/* Need a stack before we can jump to "C" */
+	LONG_L	sp, VCPU_HOST_STACK(k1)
 
-    /* Saved host state */
-    addiu   sp,sp, -PT_SIZE
+	/* Saved host state */
+	addiu	sp, sp, -PT_SIZE
 
-    /* XXXKYMA do we need to load the host ASID, maybe not because the
-     * kernel entries are marked GLOBAL, need to verify
-     */
+	/* XXXKYMA do we need to load the host ASID, maybe not because the
+	 * kernel entries are marked GLOBAL, need to verify
+	 */
 
-    /* Restore host DDATA_LO */
-    LONG_L      k0, PT_HOST_USERLOCAL(sp)
-    mtc0        k0, CP0_DDATA_LO
+	/* Restore host DDATA_LO */
+	LONG_L	k0, PT_HOST_USERLOCAL(sp)
+	mtc0	k0, CP0_DDATA_LO
 
-    /* Restore RDHWR access */
+	/* Restore RDHWR access */
 	PTR_LI	k0, 0x2000000F
-    mtc0    k0,  CP0_HWRENA
+	mtc0	k0, CP0_HWRENA
 
-    /* Jump to handler */
+	/* Jump to handler */
 FEXPORT(__kvm_mips_jump_to_handler)
-    /* XXXKYMA: not sure if this is safe, how large is the stack?? */
-    /* Now jump to the kvm_mips_handle_exit() to see if we can deal with this in the kernel */
+	/* XXXKYMA: not sure if this is safe, how large is the stack??
+	 * Now jump to the kvm_mips_handle_exit() to see if we can deal
+	 * with this in the kernel */
 	PTR_LA	t9, kvm_mips_handle_exit
-    jalr.hb     t9
-    addiu       sp,sp, -CALLFRAME_SIZ           /* BD Slot */
-
-    /* Return from handler Make sure interrupts are disabled */
-    di
-    ehb
-
-    /* XXXKYMA: k0/k1 could have been blown away if we processed an exception
-     * while we were handling the exception from the guest, reload k1
-     */
-    move        k1, s1
-	addiu		k1, k1, VCPU_HOST_ARCH
-
-    /* Check return value, should tell us if we are returning to the host (handle I/O etc)
-     * or resuming the guest
-     */
-    andi        t0, v0, RESUME_HOST
-    bnez        t0, __kvm_mips_return_to_host
-    nop
+	jalr.hb	t9
+	 addiu	sp, sp, -CALLFRAME_SIZ          /* BD Slot */
+
+	/* Return from handler Make sure interrupts are disabled */
+	di
+	ehb
+
+	/* XXXKYMA: k0/k1 could have been blown away if we processed
+	 * an exception while we were handling the exception from the
+	 * guest, reload k1
+	 */
+
+	move	k1, s1
+	addiu	k1, k1, VCPU_HOST_ARCH
+
+	/* Check return value, should tell us if we are returning to the
+	 * host (handle I/O etc) or resuming the guest
+	 */
+	andi	t0, v0, RESUME_HOST
+	bnez	t0, __kvm_mips_return_to_host
+	 nop
 
 __kvm_mips_return_to_guest:
-    /* Put the saved pointer to vcpu (s1) back into the DDATA_LO Register */
-    mtc0        s1, CP0_DDATA_LO
-
-    /* Load up the Guest EBASE to minimize the window where BEV is set */
-    LONG_L      t0, VCPU_GUEST_EBASE(k1)
-
-    /* Switch EBASE back to the one used by KVM */
-    mfc0        v1, CP0_STATUS
-    .set at
-	or          k0, v1, ST0_BEV
-    .set noat
-    mtc0        k0, CP0_STATUS
-    ehb
-    mtc0        t0,CP0_EBASE
-
-    /* Setup status register for running guest in UM */
-    .set at
-    or     v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
-    and     v1, v1, ~ST0_CU0
-    .set noat
-    mtc0    v1, CP0_STATUS
-    ehb
+	/* Put the saved pointer to vcpu (s1) back into the DDATA_LO Register */
+	mtc0	s1, CP0_DDATA_LO
+
+	/* Load up the Guest EBASE to minimize the window where BEV is set */
+	LONG_L	t0, VCPU_GUEST_EBASE(k1)
 
+	/* Switch EBASE back to the one used by KVM */
+	mfc0	v1, CP0_STATUS
+	.set	at
+	or	k0, v1, ST0_BEV
+	.set	noat
+	mtc0	k0, CP0_STATUS
+	ehb
+	mtc0	t0, CP0_EBASE
+
+	/* Setup status register for running guest in UM */
+	.set	at
+	or	v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
+	and	v1, v1, ~ST0_CU0
+	.set	noat
+	mtc0	v1, CP0_STATUS
+	ehb
 
 	/* Set Guest EPC */
-	LONG_L		t0, KVM_VCPU_ARCH_EPC(k1)
-	mtc0		t0, CP0_EPC
-
-    /* Set the ASID for the Guest Kernel */
-    sll         t0, t0, 1                       /* with kseg0 @ 0x40000000, kernel */
-                                                /* addresses shift to 0x80000000 */
-    bltz        t0, 1f                          /* If kernel */
-	addiu       t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-    addiu       t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	LONG_L	t0, KVM_VCPU_ARCH_EPC(k1)
+	mtc0	t0, CP0_EPC
+
+	/* Set the ASID for the Guest Kernel */
+	sll	t0, t0, 1	/* with kseg0 @ 0x40000000, kernel */
+				/* addresses shift to 0x80000000 */
+	bltz	t0, 1f		/* If kernel */
+	 addiu	t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
+	addiu	t1, k1, VCPU_GUEST_USER_ASID    /* else user */
 1:
-    /* t1: contains the base of the ASID array, need to get the cpu id  */
-    LONG_L      t2, TI_CPU($28)             /* smp_processor_id */
-    sll         t2, t2, 2                   /* x4 */
-    addu        t3, t1, t2
-    LONG_L      k0, (t3)
-    andi        k0, k0, 0xff
-	mtc0		k0,CP0_ENTRYHI
-    ehb
-
-    /* Disable RDHWR access */
-    mtc0    zero,  CP0_HWRENA
+	/* t1: contains the base of the ASID array, need to get the cpu id  */
+	LONG_L	t2, TI_CPU($28)		/* smp_processor_id */
+	sll	t2, t2, 2		/* x4 */
+	addu	t3, t1, t2
+	LONG_L	k0, (t3)
+	andi	k0, k0, 0xff
+	mtc0	k0, CP0_ENTRYHI
+	ehb
+
+	/* Disable RDHWR access */
+	mtc0	zero, CP0_HWRENA
 
     /* load the guest context from VCPU and return */
-    LONG_L  $0, KVM_VCPU_ARCH_R0(k1)
-    LONG_L  $1, KVM_VCPU_ARCH_R1(k1)
-    LONG_L  $2, KVM_VCPU_ARCH_R2(k1)
-    LONG_L  $3, KVM_VCPU_ARCH_R3(k1)
-    LONG_L  $4, KVM_VCPU_ARCH_R4(k1)
-    LONG_L  $5, KVM_VCPU_ARCH_R5(k1)
-    LONG_L  $6, KVM_VCPU_ARCH_R6(k1)
-    LONG_L  $7, KVM_VCPU_ARCH_R7(k1)
-    LONG_L  $8, KVM_VCPU_ARCH_R8(k1)
-    LONG_L  $9, KVM_VCPU_ARCH_R9(k1)
-    LONG_L  $10, KVM_VCPU_ARCH_R10(k1)
-    LONG_L  $11, KVM_VCPU_ARCH_R11(k1)
-    LONG_L  $12, KVM_VCPU_ARCH_R12(k1)
-    LONG_L  $13, KVM_VCPU_ARCH_R13(k1)
-    LONG_L  $14, KVM_VCPU_ARCH_R14(k1)
-    LONG_L  $15, KVM_VCPU_ARCH_R15(k1)
-    LONG_L  $16, KVM_VCPU_ARCH_R16(k1)
-    LONG_L  $17, KVM_VCPU_ARCH_R17(k1)
-    LONG_L  $18, KVM_VCPU_ARCH_R18(k1)
-    LONG_L  $19, KVM_VCPU_ARCH_R19(k1)
-    LONG_L  $20, KVM_VCPU_ARCH_R20(k1)
-    LONG_L  $21, KVM_VCPU_ARCH_R21(k1)
-    LONG_L  $22, KVM_VCPU_ARCH_R22(k1)
-    LONG_L  $23, KVM_VCPU_ARCH_R23(k1)
-    LONG_L  $24, KVM_VCPU_ARCH_R24(k1)
-    LONG_L  $25, KVM_VCPU_ARCH_R25(k1)
-
-    /* $/k1 loaded later */
-    LONG_L  $28, KVM_VCPU_ARCH_R28(k1)
-    LONG_L  $29, KVM_VCPU_ARCH_R29(k1)
-    LONG_L  $30, KVM_VCPU_ARCH_R30(k1)
-    LONG_L  $31, KVM_VCPU_ARCH_R31(k1)
+	LONG_L	$0, KVM_VCPU_ARCH_R0(k1)
+	LONG_L	$1, KVM_VCPU_ARCH_R1(k1)
+	LONG_L	$2, KVM_VCPU_ARCH_R2(k1)
+	LONG_L	$3, KVM_VCPU_ARCH_R3(k1)
+	LONG_L	$4, KVM_VCPU_ARCH_R4(k1)
+	LONG_L	$5, KVM_VCPU_ARCH_R5(k1)
+	LONG_L	$6, KVM_VCPU_ARCH_R6(k1)
+	LONG_L	$7, KVM_VCPU_ARCH_R7(k1)
+	LONG_L	$8, KVM_VCPU_ARCH_R8(k1)
+	LONG_L	$9, KVM_VCPU_ARCH_R9(k1)
+	LONG_L	$10, KVM_VCPU_ARCH_R10(k1)
+	LONG_L	$11, KVM_VCPU_ARCH_R11(k1)
+	LONG_L	$12, KVM_VCPU_ARCH_R12(k1)
+	LONG_L	$13, KVM_VCPU_ARCH_R13(k1)
+	LONG_L	$14, KVM_VCPU_ARCH_R14(k1)
+	LONG_L	$15, KVM_VCPU_ARCH_R15(k1)
+	LONG_L	$16, KVM_VCPU_ARCH_R16(k1)
+	LONG_L	$17, KVM_VCPU_ARCH_R17(k1)
+	LONG_L	$18, KVM_VCPU_ARCH_R18(k1)
+	LONG_L	$19, KVM_VCPU_ARCH_R19(k1)
+	LONG_L	$20, KVM_VCPU_ARCH_R20(k1)
+	LONG_L	$21, KVM_VCPU_ARCH_R21(k1)
+	LONG_L	$22, KVM_VCPU_ARCH_R22(k1)
+	LONG_L	$23, KVM_VCPU_ARCH_R23(k1)
+	LONG_L	$24, KVM_VCPU_ARCH_R24(k1)
+	LONG_L	$25, KVM_VCPU_ARCH_R25(k1)
+
+	/* k0/k1 loaded later */
+	LONG_L	$28, KVM_VCPU_ARCH_R28(k1)
+	LONG_L	$29, KVM_VCPU_ARCH_R29(k1)
+	LONG_L	$30, KVM_VCPU_ARCH_R30(k1)
+	LONG_L	$31, KVM_VCPU_ARCH_R31(k1)
 
 FEXPORT(__kvm_mips_skip_guest_restore)
-    LONG_L  k0, KVM_VCPU_ARCH_HI(k1)
-    mthi    k0
+	LONG_L	k0, KVM_VCPU_ARCH_HI(k1)
+	mthi	k0
 
-    LONG_L  k0, KVM_VCPU_ARCH_LO(k1)
-    mtlo    k0
+	LONG_L	k0, KVM_VCPU_ARCH_LO(k1)
+	mtlo	k0
 
-    LONG_L  k0, KVM_VCPU_ARCH_R26(k1)
-    LONG_L  k1, KVM_VCPU_ARCH_R27(k1)
+	LONG_L	k0, KVM_VCPU_ARCH_R26(k1)
+	LONG_L	k1, KVM_VCPU_ARCH_R27(k1)
 
-    eret
+	eret
 
 __kvm_mips_return_to_host:
-    /* EBASE is already pointing to Linux */
-    LONG_L  k1, VCPU_HOST_STACK(k1)
-	addiu  	k1,k1, -PT_SIZE
-
-    /* Restore host DDATA_LO */
-    LONG_L      k0, PT_HOST_USERLOCAL(k1)
-    mtc0        k0, CP0_DDATA_LO
-
-    /* Restore host ASID */
-    LONG_L      k0, PT_HOST_ASID(sp)
-    andi        k0, 0xff
-    mtc0        k0,CP0_ENTRYHI
-    ehb
-
-    /* Load context saved on the host stack */
-    LONG_L  $0, PT_R0(k1)
-    LONG_L  $1, PT_R1(k1)
-
-    /* r2/v0 is the return code, shift it down by 2 (arithmetic) to recover the err code  */
-    sra     k0, v0, 2
-    move    $2, k0
-
-    LONG_L  $3, PT_R3(k1)
-    LONG_L  $4, PT_R4(k1)
-    LONG_L  $5, PT_R5(k1)
-    LONG_L  $6, PT_R6(k1)
-    LONG_L  $7, PT_R7(k1)
-    LONG_L  $8, PT_R8(k1)
-    LONG_L  $9, PT_R9(k1)
-    LONG_L  $10, PT_R10(k1)
-    LONG_L  $11, PT_R11(k1)
-    LONG_L  $12, PT_R12(k1)
-    LONG_L  $13, PT_R13(k1)
-    LONG_L  $14, PT_R14(k1)
-    LONG_L  $15, PT_R15(k1)
-    LONG_L  $16, PT_R16(k1)
-    LONG_L  $17, PT_R17(k1)
-    LONG_L  $18, PT_R18(k1)
-    LONG_L  $19, PT_R19(k1)
-    LONG_L  $20, PT_R20(k1)
-    LONG_L  $21, PT_R21(k1)
-    LONG_L  $22, PT_R22(k1)
-    LONG_L  $23, PT_R23(k1)
-    LONG_L  $24, PT_R24(k1)
-    LONG_L  $25, PT_R25(k1)
-
-    /* Host k0/k1 were not saved */
-
-    LONG_L  $28, PT_R28(k1)
-    LONG_L  $29, PT_R29(k1)
-    LONG_L  $30, PT_R30(k1)
-
-    LONG_L  k0, PT_HI(k1)
-    mthi    k0
-
-    LONG_L  k0, PT_LO(k1)
-    mtlo    k0
-
-    /* Restore RDHWR access */
+	/* EBASE is already pointing to Linux */
+	LONG_L	k1, VCPU_HOST_STACK(k1)
+	addiu	k1, k1, -PT_SIZE
+
+	/* Restore host DDATA_LO */
+	LONG_L	k0, PT_HOST_USERLOCAL(k1)
+	mtc0	k0, CP0_DDATA_LO
+
+	/* Restore host ASID */
+	LONG_L	k0, PT_HOST_ASID(sp)
+	andi	k0, 0xff
+	mtc0	k0, CP0_ENTRYHI
+	ehb
+
+	/* Load context saved on the host stack */
+	LONG_L	$0, PT_R0(k1)
+	LONG_L	$1, PT_R1(k1)
+
+	/* r2/v0 is the return code, shift it down by 2 (arithmetic)
+	 * to recover the err code  */
+	sra	k0, v0, 2
+	move	$2, k0
+
+	LONG_L	$3, PT_R3(k1)
+	LONG_L	$4, PT_R4(k1)
+	LONG_L	$5, PT_R5(k1)
+	LONG_L	$6, PT_R6(k1)
+	LONG_L	$7, PT_R7(k1)
+	LONG_L	$8, PT_R8(k1)
+	LONG_L	$9, PT_R9(k1)
+	LONG_L	$10, PT_R10(k1)
+	LONG_L	$11, PT_R11(k1)
+	LONG_L	$12, PT_R12(k1)
+	LONG_L	$13, PT_R13(k1)
+	LONG_L	$14, PT_R14(k1)
+	LONG_L	$15, PT_R15(k1)
+	LONG_L	$16, PT_R16(k1)
+	LONG_L	$17, PT_R17(k1)
+	LONG_L	$18, PT_R18(k1)
+	LONG_L	$19, PT_R19(k1)
+	LONG_L	$20, PT_R20(k1)
+	LONG_L	$21, PT_R21(k1)
+	LONG_L	$22, PT_R22(k1)
+	LONG_L	$23, PT_R23(k1)
+	LONG_L	$24, PT_R24(k1)
+	LONG_L	$25, PT_R25(k1)
+
+	/* Host k0/k1 were not saved */
+
+	LONG_L	$28, PT_R28(k1)
+	LONG_L	$29, PT_R29(k1)
+	LONG_L	$30, PT_R30(k1)
+
+	LONG_L	k0, PT_HI(k1)
+	mthi	k0
+
+	LONG_L	k0, PT_LO(k1)
+	mtlo	k0
+
+	/* Restore RDHWR access */
 	PTR_LI	k0, 0x2000000F
-    mtc0    k0,  CP0_HWRENA
+	mtc0	k0, CP0_HWRENA
 
 
-    /* Restore RA, which is the address we will return to */
-    LONG_L  ra, PT_R31(k1)
-    j       ra
-    nop
+	/* Restore RA, which is the address we will return to */
+	LONG_L	ra, PT_R31(k1)
+	j	ra
+	 nop
 
     .set    pop
 VECTOR_END(MIPSX(GuestExceptionEnd))
@@ -627,24 +635,23 @@ MIPSX(exceptions):
 
 #define HW_SYNCI_Step       $1
 LEAF(MIPSX(SyncICache))
-    .set    push
+	.set	push
 	.set	mips32r2
-    beq     a1, zero, 20f
-    nop
-    addu    a1, a0, a1
-    rdhwr   v0, HW_SYNCI_Step
-    beq     v0, zero, 20f
-    nop
-
+	beq	a1, zero, 20f
+	 nop
+	addu	a1, a0, a1
+	rdhwr	v0, HW_SYNCI_Step
+	beq	v0, zero, 20f
+	 nop
 10:
-    synci   0(a0)
-    addu    a0, a0, v0
-    sltu    v1, a0, a1
-    bne     v1, zero, 10b
-    nop
-    sync
+	synci	0(a0)
+	addu	a0, a0, v0
+	sltu	v1, a0, a1
+	bne	v1, zero, 10b
+	 nop
+	sync
 20:
-    jr.hb   ra
-    nop
-    .set pop
+	jr.hb	ra
+	 nop
+	.set	pop
 END(MIPSX(SyncICache))
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (7 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 13:22   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers David Daney
                   ` (25 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Many of the arch entry points now dispatch through an ops vector.
This will allow alternate implementations based on the MIPSVZ hardware
support to be added.

There is no substantive code change here, just moving things around.
The biggest change is that the implementation-specific data needed by
the vcpu and kvm structures is now held in separate structures,
pointed to by vcpu->arch.impl and kvm->arch.impl; minor changes were
made to the assembly code to traverse these pointers.
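
As a rough sketch of the resulting dispatch (the kvm_mips_ops table
and the impl pointers are introduced in the diff below; the body of
this caller is illustrative, not the exact patch code):

	int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
	{
		/* Whichever implementation created the VM
		 * (trap-and-emulate today, MIPSVZ later) installed
		 * its own function table in kvm->arch.ops. */
		return vcpu->kvm->arch.ops->vcpu_run(vcpu, run);
	}

Hiding the per-implementation state behind the void *impl pointers
lets each backend keep its own structure layout; the assembly code
only needs an extra pointer load or two to reach the fields it uses.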

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_host.h    | 610 ++----------------------
 arch/mips/include/asm/kvm_mips_te.h | 589 +++++++++++++++++++++++
 arch/mips/kernel/asm-offsets.c      | 100 ++--
 arch/mips/kvm/kvm_locore.S          |  47 +-
 arch/mips/kvm/kvm_mips.c            | 709 ++--------------------------
 arch/mips/kvm/kvm_mips_comm.h       |   1 +
 arch/mips/kvm/kvm_mips_commpage.c   |   9 +-
 arch/mips/kvm/kvm_mips_emul.c       | 172 ++++---
 arch/mips/kvm/kvm_mips_int.c        |  45 +-
 arch/mips/kvm/kvm_mips_int.h        |   2 -
 arch/mips/kvm/kvm_mips_stats.c      |   6 +-
 arch/mips/kvm/kvm_tlb.c             | 138 +++---
 arch/mips/kvm/kvm_trap_emul.c       | 912 ++++++++++++++++++++++++++++++++++--
 13 files changed, 1807 insertions(+), 1533 deletions(-)
 create mode 100644 arch/mips/include/asm/kvm_mips_te.h

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index d9ee320..16013c7 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -35,51 +35,9 @@
 #define KVM_PAGES_PER_HPAGE(x)	1
 
 
-
-/* Special address that contains the comm page, used for reducing # of traps */
-#define KVM_GUEST_COMMPAGE_ADDR     0x0
-
-#define KVM_GUEST_KERNEL_MODE(vcpu)	((kvm_read_c0_guest_status(vcpu->arch.cop0) & (ST0_EXL | ST0_ERL)) || \
-					((kvm_read_c0_guest_status(vcpu->arch.cop0) & KSU_USER) == 0))
-
-#define KVM_GUEST_KUSEG             0x00000000UL
-#define KVM_GUEST_KSEG0             0x40000000UL
-#define KVM_GUEST_KSEG23            0x60000000UL
-#define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0x60000000)
-#define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
-
-#define KVM_GUEST_CKSEG0ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
-#define KVM_GUEST_CKSEG1ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
-#define KVM_GUEST_CKSEG23ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG23)
-
-/*
- * Map an address to a certain kernel segment
- */
-#define KVM_GUEST_KSEG0ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
-#define KVM_GUEST_KSEG1ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
-#define KVM_GUEST_KSEG23ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG23)
-
-#define KVM_INVALID_PAGE            0xdeadbeef
-#define KVM_INVALID_INST            0xdeadbeef
-#define KVM_INVALID_ADDR            0xdeadbeef
-
-#define KVM_MALTA_GUEST_RTC_ADDR    0xb8000070UL
-
-#define GUEST_TICKS_PER_JIFFY (40000000/HZ)
-#define MS_TO_NS(x) (x * 1E6L)
-
-#define CAUSEB_DC       27
-#define CAUSEF_DC       (_ULCAST_(1)   << 27)
-
 struct kvm;
-struct kvm_run;
 struct kvm_vcpu;
-struct kvm_interrupt;
-
-extern atomic_t kvm_mips_instance;
-extern pfn_t(*kvm_mips_gfn_to_pfn) (struct kvm *kvm, gfn_t gfn);
-extern void (*kvm_mips_release_pfn_clean) (pfn_t pfn);
-extern bool(*kvm_mips_is_error_pfn) (pfn_t pfn);
+enum kvm_mr_change;
 
 struct kvm_vm_stat {
 	u32 remote_tlb_flush;
@@ -103,561 +61,49 @@ struct kvm_vcpu_stat {
 	u32 halt_wakeup;
 };
 
-enum kvm_mips_exit_types {
-	WAIT_EXITS,
-	CACHE_EXITS,
-	SIGNAL_EXITS,
-	INT_EXITS,
-	COP_UNUSABLE_EXITS,
-	TLBMOD_EXITS,
-	TLBMISS_LD_EXITS,
-	TLBMISS_ST_EXITS,
-	ADDRERR_ST_EXITS,
-	ADDRERR_LD_EXITS,
-	SYSCALL_EXITS,
-	RESVD_INST_EXITS,
-	BREAK_INST_EXITS,
-	FLUSH_DCACHE_EXITS,
-	MAX_KVM_MIPS_EXIT_TYPES
-};
-
 struct kvm_arch_memory_slot {
 };
 
-struct kvm_arch {
-	/* Guest GVA->HPA page table */
-	unsigned long *guest_pmap;
-	unsigned long guest_pmap_npages;
-
-	/* Wired host TLB used for the commpage */
-	int commpage_tlb;
-};
-
-#define N_MIPS_COPROC_REGS      32
-#define N_MIPS_COPROC_SEL   	8
-
-struct mips_coproc {
-	unsigned long reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
-#ifdef CONFIG_KVM_MIPS_DEBUG_COP0_COUNTERS
-	unsigned long stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
-#endif
+struct kvm_mips_ops {
+	int (*vcpu_runnable)(struct kvm_vcpu *vcpu);
+	void (*free_vcpus)(struct kvm *kvm);
+	void (*destroy_vm)(struct kvm *kvm);
+	void (*commit_memory_region)(struct kvm *kvm,
+				     struct kvm_userspace_memory_region *mem,
+				     const struct kvm_memory_slot *old,
+				     enum kvm_mr_change change);
+	struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned int id);
+	void (*vcpu_free)(struct kvm_vcpu *vcpu);
+	int (*vcpu_run)(struct kvm_vcpu *vcpu, struct kvm_run *run);
+	long (*vm_ioctl)(struct kvm *kvm, unsigned int ioctl,
+			 unsigned long arg);
+	long (*vcpu_ioctl)(struct kvm_vcpu *vcpu, unsigned int ioctl,
+			   unsigned long arg);
+	int (*get_reg)(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+	int (*set_reg)(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
+		       u64 v);
+	int (*cpu_has_pending_timer)(struct kvm_vcpu *vcpu);
+	int (*vcpu_init)(struct kvm_vcpu *vcpu);
+	int (*vcpu_setup)(struct kvm_vcpu *vcpu);
+	void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
+	void (*vcpu_put)(struct kvm_vcpu *vcpu);
 };
 
-/*
- * Coprocessor 0 register names
- */
-#define	MIPS_CP0_TLB_INDEX	    0
-#define	MIPS_CP0_TLB_RANDOM	    1
-#define	MIPS_CP0_TLB_LOW	    2
-#define	MIPS_CP0_TLB_LO0	    2
-#define	MIPS_CP0_TLB_LO1	    3
-#define	MIPS_CP0_TLB_CONTEXT	4
-#define	MIPS_CP0_TLB_PG_MASK	5
-#define	MIPS_CP0_TLB_WIRED	    6
-#define	MIPS_CP0_HWRENA 	    7
-#define	MIPS_CP0_BAD_VADDR	    8
-#define	MIPS_CP0_COUNT	        9
-#define	MIPS_CP0_TLB_HI	        10
-#define	MIPS_CP0_COMPARE	    11
-#define	MIPS_CP0_STATUS	        12
-#define	MIPS_CP0_CAUSE	        13
-#define	MIPS_CP0_EXC_PC	        14
-#define	MIPS_CP0_PRID		    15
-#define	MIPS_CP0_CONFIG	        16
-#define	MIPS_CP0_LLADDR	        17
-#define	MIPS_CP0_WATCH_LO	    18
-#define	MIPS_CP0_WATCH_HI	    19
-#define	MIPS_CP0_TLB_XCONTEXT   20
-#define	MIPS_CP0_ECC		    26
-#define	MIPS_CP0_CACHE_ERR	    27
-#define	MIPS_CP0_TAG_LO	        28
-#define	MIPS_CP0_TAG_HI	        29
-#define	MIPS_CP0_ERROR_PC	    30
-#define	MIPS_CP0_DEBUG	        23
-#define	MIPS_CP0_DEPC		    24
-#define	MIPS_CP0_PERFCNT	    25
-#define	MIPS_CP0_ERRCTL         26
-#define	MIPS_CP0_DATA_LO	    28
-#define	MIPS_CP0_DATA_HI	    29
-#define	MIPS_CP0_DESAVE	        31
-
-#define MIPS_CP0_CONFIG_SEL	    0
-#define MIPS_CP0_CONFIG1_SEL    1
-#define MIPS_CP0_CONFIG2_SEL    2
-#define MIPS_CP0_CONFIG3_SEL    3
-
-/* Config0 register bits */
-#define CP0C0_M    31
-#define CP0C0_K23  28
-#define CP0C0_KU   25
-#define CP0C0_MDU  20
-#define CP0C0_MM   17
-#define CP0C0_BM   16
-#define CP0C0_BE   15
-#define CP0C0_AT   13
-#define CP0C0_AR   10
-#define CP0C0_MT   7
-#define CP0C0_VI   3
-#define CP0C0_K0   0
-
-/* Config1 register bits */
-#define CP0C1_M    31
-#define CP0C1_MMU  25
-#define CP0C1_IS   22
-#define CP0C1_IL   19
-#define CP0C1_IA   16
-#define CP0C1_DS   13
-#define CP0C1_DL   10
-#define CP0C1_DA   7
-#define CP0C1_C2   6
-#define CP0C1_MD   5
-#define CP0C1_PC   4
-#define CP0C1_WR   3
-#define CP0C1_CA   2
-#define CP0C1_EP   1
-#define CP0C1_FP   0
-
-/* Config2 Register bits */
-#define CP0C2_M    31
-#define CP0C2_TU   28
-#define CP0C2_TS   24
-#define CP0C2_TL   20
-#define CP0C2_TA   16
-#define CP0C2_SU   12
-#define CP0C2_SS   8
-#define CP0C2_SL   4
-#define CP0C2_SA   0
-
-/* Config3 Register bits */
-#define CP0C3_M    31
-#define CP0C3_ISA_ON_EXC 16
-#define CP0C3_ULRI  13
-#define CP0C3_DSPP 10
-#define CP0C3_LPA  7
-#define CP0C3_VEIC 6
-#define CP0C3_VInt 5
-#define CP0C3_SP   4
-#define CP0C3_MT   2
-#define CP0C3_SM   1
-#define CP0C3_TL   0
-
-/* Have config1, Cacheable, noncoherent, write-back, write allocate*/
-#define MIPS_CONFIG0                                              \
-  ((1 << CP0C0_M) | (0x3 << CP0C0_K0))
-
-/* Have config2, no coprocessor2 attached, no MDMX support attached,
-   no performance counters, watch registers present,
-   no code compression, EJTAG present, no FPU, no watch registers */
-#define MIPS_CONFIG1                                              \
-((1 << CP0C1_M) |                                                 \
- (0 << CP0C1_C2) | (0 << CP0C1_MD) | (0 << CP0C1_PC) |            \
- (0 << CP0C1_WR) | (0 << CP0C1_CA) | (1 << CP0C1_EP) |            \
- (0 << CP0C1_FP))
-
-/* Have config3, no tertiary/secondary caches implemented */
-#define MIPS_CONFIG2                                              \
-((1 << CP0C2_M))
-
-/* No config4, no DSP ASE, no large physaddr (PABITS),
-   no external interrupt controller, no vectored interrupts,
-   no 1kb pages, no SmartMIPS ASE, no trace logic */
-#define MIPS_CONFIG3                                              \
-((0 << CP0C3_M) | (0 << CP0C3_DSPP) | (0 << CP0C3_LPA) |          \
- (0 << CP0C3_VEIC) | (0 << CP0C3_VInt) | (0 << CP0C3_SP) |        \
- (0 << CP0C3_SM) | (0 << CP0C3_TL))
-
-/* MMU types, the first four entries have the same layout as the
-   CP0C0_MT field.  */
-enum mips_mmu_types {
-	MMU_TYPE_NONE,
-	MMU_TYPE_R4000,
-	MMU_TYPE_RESERVED,
-	MMU_TYPE_FMT,
-	MMU_TYPE_R3000,
-	MMU_TYPE_R6000,
-	MMU_TYPE_R8000
-};
-
-/*
- * Trap codes
- */
-#define T_INT           0	/* Interrupt pending */
-#define T_TLB_MOD       1	/* TLB modified fault */
-#define T_TLB_LD_MISS       2	/* TLB miss on load or ifetch */
-#define T_TLB_ST_MISS       3	/* TLB miss on a store */
-#define T_ADDR_ERR_LD       4	/* Address error on a load or ifetch */
-#define T_ADDR_ERR_ST       5	/* Address error on a store */
-#define T_BUS_ERR_IFETCH    6	/* Bus error on an ifetch */
-#define T_BUS_ERR_LD_ST     7	/* Bus error on a load or store */
-#define T_SYSCALL       8	/* System call */
-#define T_BREAK         9	/* Breakpoint */
-#define T_RES_INST      10	/* Reserved instruction exception */
-#define T_COP_UNUSABLE      11	/* Coprocessor unusable */
-#define T_OVFLOW        12	/* Arithmetic overflow */
-
-/*
- * Trap definitions added for r4000 port.
- */
-#define T_TRAP          13	/* Trap instruction */
-#define T_VCEI          14	/* Virtual coherency exception */
-#define T_FPE           15	/* Floating point exception */
-#define T_WATCH         23	/* Watch address reference */
-#define T_VCED          31	/* Virtual coherency data */
-
-/* Resume Flags */
-#define RESUME_FLAG_DR          (1<<0)	/* Reload guest nonvolatile state? */
-#define RESUME_FLAG_HOST        (1<<1)	/* Resume host? */
-
-#define RESUME_GUEST            0
-#define RESUME_GUEST_DR         RESUME_FLAG_DR
-#define RESUME_HOST             RESUME_FLAG_HOST
-
-enum emulation_result {
-	EMULATE_DONE,		/* no further processing */
-	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
-	EMULATE_FAIL,		/* can't emulate this instruction */
-	EMULATE_WAIT,		/* WAIT instruction */
-	EMULATE_PRIV_FAIL,
+struct kvm_arch {
+	const struct kvm_mips_ops *ops;
+	void *impl;
 };
 
-#define MIPS3_PG_G  0x00000001	/* Global; ignore ASID if in lo0 & lo1 */
-#define MIPS3_PG_V  0x00000002	/* Valid */
-#define MIPS3_PG_NV 0x00000000
-#define MIPS3_PG_D  0x00000004	/* Dirty */
-
-#define mips3_paddr_to_tlbpfn(x) \
-    (((unsigned long)(x) >> MIPS3_PG_SHIFT) & MIPS3_PG_FRAME)
-#define mips3_tlbpfn_to_paddr(x) \
-    ((unsigned long)((x) & MIPS3_PG_FRAME) << MIPS3_PG_SHIFT)
-
-#define MIPS3_PG_SHIFT      6
-#define MIPS3_PG_FRAME      0x3fffffc0
-
-#define VPN2_MASK           0xffffe000
-#define TLB_IS_GLOBAL(x)    (((x).tlb_lo0 & MIPS3_PG_G) && ((x).tlb_lo1 & MIPS3_PG_G))
-#define TLB_VPN2(x)         ((x).tlb_hi & VPN2_MASK)
-#define TLB_ASID(x)         ((x).tlb_hi & ASID_MASK)
-#define TLB_IS_VALID(x, va) (((va) & (1 << PAGE_SHIFT)) ? ((x).tlb_lo1 & MIPS3_PG_V) : ((x).tlb_lo0 & MIPS3_PG_V))
-
-struct kvm_mips_tlb {
-	long tlb_mask;
-	long tlb_hi;
-	long tlb_lo0;
-	long tlb_lo1;
-};
 
-#define KVM_MIPS_GUEST_TLB_SIZE     64
 struct kvm_vcpu_arch {
-	void *host_ebase, *guest_ebase;
-	unsigned long host_stack;
-	unsigned long host_gp;
-
-	/* Host CP0 registers used when handling exits from guest */
-	unsigned long host_cp0_badvaddr;
-	unsigned long host_cp0_cause;
-	unsigned long host_cp0_epc;
-	unsigned long host_cp0_entryhi;
-	uint32_t guest_inst;
-
 	/* GPRS */
 	unsigned long gprs[32];
 	unsigned long hi;
 	unsigned long lo;
 	unsigned long epc;
 
-	/* FPU State */
-	struct mips_fpu_struct fpu;
-
-	/* COP0 State */
-	struct mips_coproc *cop0;
-
-	/* Host KSEG0 address of the EI/DI offset */
-	void *kseg0_commpage;
-
-	u32 io_gpr;		/* GPR used as IO source/target */
-
-	/* Used to calibrate the virutal count register for the guest */
-	int32_t host_cp0_count;
-
-	/* Bitmask of exceptions that are pending */
-	unsigned long pending_exceptions;
-
-	/* Bitmask of pending exceptions to be cleared */
-	unsigned long pending_exceptions_clr;
-
-	unsigned long pending_load_cause;
-
-	/* Save/Restore the entryhi register when are are preempted/scheduled back in */
-	unsigned long preempt_entryhi;
-
-	/* S/W Based TLB for guest */
-	struct kvm_mips_tlb guest_tlb[KVM_MIPS_GUEST_TLB_SIZE];
-
-	/* Cached guest kernel/user ASIDs */
-	uint32_t guest_user_asid[NR_CPUS];
-	uint32_t guest_kernel_asid[NR_CPUS];
-	struct mm_struct guest_kernel_mm, guest_user_mm;
-
-	struct kvm_mips_tlb shadow_tlb[NR_CPUS][KVM_MIPS_GUEST_TLB_SIZE];
-
-
-	struct hrtimer comparecount_timer;
-
-	int last_sched_cpu;
-
-	/* WAIT executed */
-	int wait;
+	void *impl;
 };
 
 
-#define kvm_read_c0_guest_index(cop0)               (cop0->reg[MIPS_CP0_TLB_INDEX][0])
-#define kvm_write_c0_guest_index(cop0, val)         (cop0->reg[MIPS_CP0_TLB_INDEX][0] = val)
-#define kvm_read_c0_guest_entrylo0(cop0)            (cop0->reg[MIPS_CP0_TLB_LO0][0])
-#define kvm_read_c0_guest_entrylo1(cop0)            (cop0->reg[MIPS_CP0_TLB_LO1][0])
-#define kvm_read_c0_guest_context(cop0)             (cop0->reg[MIPS_CP0_TLB_CONTEXT][0])
-#define kvm_write_c0_guest_context(cop0, val)       (cop0->reg[MIPS_CP0_TLB_CONTEXT][0] = (val))
-#define kvm_read_c0_guest_userlocal(cop0)           (cop0->reg[MIPS_CP0_TLB_CONTEXT][2])
-#define kvm_read_c0_guest_pagemask(cop0)            (cop0->reg[MIPS_CP0_TLB_PG_MASK][0])
-#define kvm_write_c0_guest_pagemask(cop0, val)      (cop0->reg[MIPS_CP0_TLB_PG_MASK][0] = (val))
-#define kvm_read_c0_guest_wired(cop0)               (cop0->reg[MIPS_CP0_TLB_WIRED][0])
-#define kvm_write_c0_guest_wired(cop0, val)         (cop0->reg[MIPS_CP0_TLB_WIRED][0] = (val))
-#define kvm_read_c0_guest_badvaddr(cop0)            (cop0->reg[MIPS_CP0_BAD_VADDR][0])
-#define kvm_write_c0_guest_badvaddr(cop0, val)      (cop0->reg[MIPS_CP0_BAD_VADDR][0] = (val))
-#define kvm_read_c0_guest_count(cop0)               (cop0->reg[MIPS_CP0_COUNT][0])
-#define kvm_write_c0_guest_count(cop0, val)         (cop0->reg[MIPS_CP0_COUNT][0] = (val))
-#define kvm_read_c0_guest_entryhi(cop0)             (cop0->reg[MIPS_CP0_TLB_HI][0])
-#define kvm_write_c0_guest_entryhi(cop0, val)       (cop0->reg[MIPS_CP0_TLB_HI][0] = (val))
-#define kvm_read_c0_guest_compare(cop0)             (cop0->reg[MIPS_CP0_COMPARE][0])
-#define kvm_write_c0_guest_compare(cop0, val)       (cop0->reg[MIPS_CP0_COMPARE][0] = (val))
-#define kvm_read_c0_guest_status(cop0)              (cop0->reg[MIPS_CP0_STATUS][0])
-#define kvm_write_c0_guest_status(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][0] = (val))
-#define kvm_read_c0_guest_intctl(cop0)              (cop0->reg[MIPS_CP0_STATUS][1])
-#define kvm_write_c0_guest_intctl(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][1] = (val))
-#define kvm_read_c0_guest_cause(cop0)               (cop0->reg[MIPS_CP0_CAUSE][0])
-#define kvm_write_c0_guest_cause(cop0, val)         (cop0->reg[MIPS_CP0_CAUSE][0] = (val))
-#define kvm_read_c0_guest_epc(cop0)                 (cop0->reg[MIPS_CP0_EXC_PC][0])
-#define kvm_write_c0_guest_epc(cop0, val)           (cop0->reg[MIPS_CP0_EXC_PC][0] = (val))
-#define kvm_read_c0_guest_prid(cop0)                (cop0->reg[MIPS_CP0_PRID][0])
-#define kvm_write_c0_guest_prid(cop0, val)          (cop0->reg[MIPS_CP0_PRID][0] = (val))
-#define kvm_read_c0_guest_ebase(cop0)               (cop0->reg[MIPS_CP0_PRID][1])
-#define kvm_write_c0_guest_ebase(cop0, val)         (cop0->reg[MIPS_CP0_PRID][1] = (val))
-#define kvm_read_c0_guest_config(cop0)              (cop0->reg[MIPS_CP0_CONFIG][0])
-#define kvm_read_c0_guest_config1(cop0)             (cop0->reg[MIPS_CP0_CONFIG][1])
-#define kvm_read_c0_guest_config2(cop0)             (cop0->reg[MIPS_CP0_CONFIG][2])
-#define kvm_read_c0_guest_config3(cop0)             (cop0->reg[MIPS_CP0_CONFIG][3])
-#define kvm_read_c0_guest_config7(cop0)             (cop0->reg[MIPS_CP0_CONFIG][7])
-#define kvm_write_c0_guest_config(cop0, val)        (cop0->reg[MIPS_CP0_CONFIG][0] = (val))
-#define kvm_write_c0_guest_config1(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][1] = (val))
-#define kvm_write_c0_guest_config2(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][2] = (val))
-#define kvm_write_c0_guest_config3(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][3] = (val))
-#define kvm_write_c0_guest_config7(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][7] = (val))
-#define kvm_read_c0_guest_errorepc(cop0)            (cop0->reg[MIPS_CP0_ERROR_PC][0])
-#define kvm_write_c0_guest_errorepc(cop0, val)      (cop0->reg[MIPS_CP0_ERROR_PC][0] = (val))
-
-#define kvm_set_c0_guest_status(cop0, val)          (cop0->reg[MIPS_CP0_STATUS][0] |= (val))
-#define kvm_clear_c0_guest_status(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][0] &= ~(val))
-#define kvm_set_c0_guest_cause(cop0, val)           (cop0->reg[MIPS_CP0_CAUSE][0] |= (val))
-#define kvm_clear_c0_guest_cause(cop0, val)         (cop0->reg[MIPS_CP0_CAUSE][0] &= ~(val))
-#define kvm_change_c0_guest_cause(cop0, change, val)  \
-{                                                     \
-    kvm_clear_c0_guest_cause(cop0, change);           \
-    kvm_set_c0_guest_cause(cop0, ((val) & (change))); \
-}
-#define kvm_set_c0_guest_ebase(cop0, val)           (cop0->reg[MIPS_CP0_PRID][1] |= (val))
-#define kvm_clear_c0_guest_ebase(cop0, val)         (cop0->reg[MIPS_CP0_PRID][1] &= ~(val))
-#define kvm_change_c0_guest_ebase(cop0, change, val)  \
-{                                                     \
-    kvm_clear_c0_guest_ebase(cop0, change);           \
-    kvm_set_c0_guest_ebase(cop0, ((val) & (change))); \
-}
-
-
-struct kvm_mips_callbacks {
-	int (*handle_cop_unusable) (struct kvm_vcpu *vcpu);
-	int (*handle_tlb_mod) (struct kvm_vcpu *vcpu);
-	int (*handle_tlb_ld_miss) (struct kvm_vcpu *vcpu);
-	int (*handle_tlb_st_miss) (struct kvm_vcpu *vcpu);
-	int (*handle_addr_err_st) (struct kvm_vcpu *vcpu);
-	int (*handle_addr_err_ld) (struct kvm_vcpu *vcpu);
-	int (*handle_syscall) (struct kvm_vcpu *vcpu);
-	int (*handle_res_inst) (struct kvm_vcpu *vcpu);
-	int (*handle_break) (struct kvm_vcpu *vcpu);
-	int (*vm_init) (struct kvm *kvm);
-	int (*vcpu_init) (struct kvm_vcpu *vcpu);
-	int (*vcpu_setup) (struct kvm_vcpu *vcpu);
-	 gpa_t(*gva_to_gpa) (gva_t gva);
-	void (*queue_timer_int) (struct kvm_vcpu *vcpu);
-	void (*dequeue_timer_int) (struct kvm_vcpu *vcpu);
-	void (*queue_io_int) (struct kvm_vcpu *vcpu,
-			      struct kvm_mips_interrupt *irq);
-	void (*dequeue_io_int) (struct kvm_vcpu *vcpu,
-				struct kvm_mips_interrupt *irq);
-	int (*irq_deliver) (struct kvm_vcpu *vcpu, unsigned int priority,
-			    uint32_t cause);
-	int (*irq_clear) (struct kvm_vcpu *vcpu, unsigned int priority,
-			  uint32_t cause);
-};
-extern struct kvm_mips_callbacks *kvm_mips_callbacks;
-int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
-
-/* Debug: dump vcpu state */
-int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
-
-/* Trampoline ASM routine to start running in "Guest" context */
-extern int __kvm_mips_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
-
-/* TLB handling */
-uint32_t kvm_get_kernel_asid(struct kvm_vcpu *vcpu);
-
-uint32_t kvm_get_user_asid(struct kvm_vcpu *vcpu);
-
-uint32_t kvm_get_commpage_asid (struct kvm_vcpu *vcpu);
-
-extern int kvm_mips_handle_kseg0_tlb_fault(unsigned long badbaddr,
-					   struct kvm_vcpu *vcpu);
-
-extern int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
-					      struct kvm_vcpu *vcpu);
-
-extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
-						struct kvm_mips_tlb *tlb,
-						unsigned long *hpa0,
-						unsigned long *hpa1);
-
-extern enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause,
-						     uint32_t *opc,
-						     struct kvm_run *run,
-						     struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause,
-						    uint32_t *opc,
-						    struct kvm_run *run,
-						    struct kvm_vcpu *vcpu);
-
-extern void kvm_mips_dump_host_tlbs(void);
-extern void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu);
-extern void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu);
-extern void kvm_mips_flush_host_tlb(int skip_kseg0);
-extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long entryhi);
-extern int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index);
-
-extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu,
-				     unsigned long entryhi);
-extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr);
-extern unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
-						   unsigned long gva);
-extern void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
-				    struct kvm_vcpu *vcpu);
-extern void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu);
-extern void kvm_shadow_tlb_load(struct kvm_vcpu *vcpu);
-extern void kvm_local_flush_tlb_all(void);
-extern void kvm_mips_init_shadow_tlb(struct kvm_vcpu *vcpu);
-extern void kvm_mips_alloc_new_mmu_context(struct kvm_vcpu *vcpu);
-extern void kvm_mips_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
-extern void kvm_mips_vcpu_put(struct kvm_vcpu *vcpu);
-
-/* Emulation */
-uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu);
-enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause);
-
-extern enum emulation_result kvm_mips_emulate_inst(unsigned long cause,
-						   uint32_t *opc,
-						   struct kvm_run *run,
-						   struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_syscall(unsigned long cause,
-						      uint32_t *opc,
-						      struct kvm_run *run,
-						      struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause,
-							 uint32_t *opc,
-							 struct kvm_run *run,
-							 struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause,
-							uint32_t *opc,
-							struct kvm_run *run,
-							struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause,
-							 uint32_t *opc,
-							 struct kvm_run *run,
-							 struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause,
-							uint32_t *opc,
-							struct kvm_run *run,
-							struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause,
-						     uint32_t *opc,
-						     struct kvm_run *run,
-						     struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_fpu_exc(unsigned long cause,
-						      uint32_t *opc,
-						      struct kvm_run *run,
-						      struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_handle_ri(unsigned long cause,
-						uint32_t *opc,
-						struct kvm_run *run,
-						struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_ri_exc(unsigned long cause,
-						     uint32_t *opc,
-						     struct kvm_run *run,
-						     struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_emulate_bp_exc(unsigned long cause,
-						     uint32_t *opc,
-						     struct kvm_run *run,
-						     struct kvm_vcpu *vcpu);
-
-extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
-							 struct kvm_run *run);
-
-enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu);
-
-enum emulation_result kvm_mips_check_privilege(unsigned long cause,
-					       uint32_t *opc,
-					       struct kvm_run *run,
-					       struct kvm_vcpu *vcpu);
-
-enum emulation_result kvm_mips_emulate_cache(uint32_t inst,
-					     uint32_t *opc,
-					     uint32_t cause,
-					     struct kvm_run *run,
-					     struct kvm_vcpu *vcpu);
-enum emulation_result kvm_mips_emulate_CP0(uint32_t inst,
-					   uint32_t *opc,
-					   uint32_t cause,
-					   struct kvm_run *run,
-					   struct kvm_vcpu *vcpu);
-enum emulation_result kvm_mips_emulate_store(uint32_t inst,
-					     uint32_t cause,
-					     struct kvm_run *run,
-					     struct kvm_vcpu *vcpu);
-enum emulation_result kvm_mips_emulate_load(uint32_t inst,
-					    uint32_t cause,
-					    struct kvm_run *run,
-					    struct kvm_vcpu *vcpu);
-
-/* Dynamic binary translation */
-extern int kvm_mips_trans_cache_index(uint32_t inst, uint32_t *opc,
-				      struct kvm_vcpu *vcpu);
-extern int kvm_mips_trans_cache_va(uint32_t inst, uint32_t *opc,
-				   struct kvm_vcpu *vcpu);
-extern int kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc,
-			       struct kvm_vcpu *vcpu);
-extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
-			       struct kvm_vcpu *vcpu);
-
-/* Misc */
-extern void mips32_SyncICache(unsigned long addr, unsigned long size);
-extern int kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
-extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
-
-
 #endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/include/asm/kvm_mips_te.h b/arch/mips/include/asm/kvm_mips_te.h
new file mode 100644
index 0000000..a9dd202
--- /dev/null
+++ b/arch/mips/include/asm/kvm_mips_te.h
@@ -0,0 +1,589 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+ * Authors: Sanjay Lal <sanjayl@kymasys.com>
+ */
+#ifndef __KVM_MIPS_EMUL_H__
+#define __KVM_MIPS_EMUL_H__
+
+
+/* Special address that contains the comm page, used for reducing # of traps */
+#define KVM_GUEST_COMMPAGE_ADDR     0x0
+
+#define KVM_GUEST_KERNEL_MODE(vcpu)					\
+({									\
+	struct kvm_mips_vcpu_te *_te = vcpu->arch.impl;			\
+	(kvm_read_c0_guest_status(_te->cop0) & (ST0_EXL | ST0_ERL)) ||	\
+		((kvm_read_c0_guest_status(_te->cop0) & KSU_USER) == 0); \
+})
+
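The macro reads: the guest is in kernel mode whenever EXL or ERL is set
in the shadowed Status register, or when the KSU field selects kernel.
A self-contained illustration of the same test (values chosen only for
the example):

static int status_is_kernel_mode(unsigned long status)
{
	/* Exception/error level overrides the KSU field. */
	return (status & (ST0_EXL | ST0_ERL)) ||
	       ((status & KSU_USER) == 0);
}
/* status_is_kernel_mode(ST0_EXL | KSU_USER) -> 1 (kernel mode) */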
+#define KVM_GUEST_KUSEG             0x00000000UL
+#define KVM_GUEST_KSEG0             0x40000000UL
+#define KVM_GUEST_KSEG23            0x60000000UL
+#define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0x60000000)
+#define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
+
+#define KVM_GUEST_CKSEG0ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
+#define KVM_GUEST_CKSEG1ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
+#define KVM_GUEST_CKSEG23ADDR(a) (KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG23)
+
+/*
+ * Map an address to a certain kernel segment
+ */
+#define KVM_GUEST_KSEG0ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
+#define KVM_GUEST_KSEG1ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
+#define KVM_GUEST_KSEG23ADDR(a)	(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG23)
+
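A worked expansion of the mapping macros, for a guest physical address
of 0x00001234:

	KVM_GUEST_KSEG0ADDR(0x00001234)
	  = (0x00001234 & 0x1fffffff) | 0x40000000
	  = 0x40001234

With guest kseg0 relocated to 0x40000000 like this, the entry/exit
assembly can classify an address as guest-kernel with a single
shift-and-branch-on-sign, as the kvm_locore.S comments below note.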
+#define KVM_INVALID_PAGE            0xdeadbeef
+#define KVM_INVALID_INST            0xdeadbeef
+#define KVM_INVALID_ADDR            0xdeadbeef
+
+#define KVM_MALTA_GUEST_RTC_ADDR    0xb8000070UL
+
+#define GUEST_TICKS_PER_JIFFY (40000000/HZ)
+#define MS_TO_NS(x) ((x) * 1E6L)
+
+#define CAUSEB_DC       27
+#define CAUSEF_DC       (_ULCAST_(1) << 27)
+
+extern atomic_t kvm_mips_instance;
+extern pfn_t (*kvm_mips_gfn_to_pfn)(struct kvm *kvm, gfn_t gfn);
+extern void (*kvm_mips_release_pfn_clean)(pfn_t pfn);
+extern bool (*kvm_mips_is_error_pfn)(pfn_t pfn);
+
+enum kvm_mips_exit_types {
+	WAIT_EXITS,
+	CACHE_EXITS,
+	SIGNAL_EXITS,
+	INT_EXITS,
+	COP_UNUSABLE_EXITS,
+	TLBMOD_EXITS,
+	TLBMISS_LD_EXITS,
+	TLBMISS_ST_EXITS,
+	ADDRERR_ST_EXITS,
+	ADDRERR_LD_EXITS,
+	SYSCALL_EXITS,
+	RESVD_INST_EXITS,
+	BREAK_INST_EXITS,
+	FLUSH_DCACHE_EXITS,
+	MAX_KVM_MIPS_EXIT_TYPES
+};
+
+
+#define N_MIPS_COPROC_REGS	32
+#define N_MIPS_COPROC_SEL	8
+
+struct mips_coproc {
+	unsigned long reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
+#ifdef CONFIG_KVM_MIPS_DEBUG_COP0_COUNTERS
+	unsigned long stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
+#endif
+};
+
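The shadow coprocessor 0 file is indexed [register][select], with the
register and select numbers defined just below. Usage sketch, assuming
cop0 points at a populated struct mips_coproc:

	/* Guest Config1 is CP0 register 16, select 1. */
	unsigned long cfg1 = cop0->reg[MIPS_CP0_CONFIG][MIPS_CP0_CONFIG1_SEL];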
+/*
+ * Coprocessor 0 register names
+ */
+#define	MIPS_CP0_TLB_INDEX	0
+#define	MIPS_CP0_TLB_RANDOM	1
+#define	MIPS_CP0_TLB_LOW	2
+#define	MIPS_CP0_TLB_LO0	2
+#define	MIPS_CP0_TLB_LO1	3
+#define	MIPS_CP0_TLB_CONTEXT	4
+#define	MIPS_CP0_TLB_PG_MASK	5
+#define	MIPS_CP0_TLB_WIRED	6
+#define	MIPS_CP0_HWRENA		7
+#define	MIPS_CP0_BAD_VADDR	8
+#define	MIPS_CP0_COUNT		9
+#define	MIPS_CP0_TLB_HI		10
+#define	MIPS_CP0_COMPARE	11
+#define	MIPS_CP0_STATUS		12
+#define	MIPS_CP0_CAUSE		13
+#define	MIPS_CP0_EXC_PC		14
+#define	MIPS_CP0_PRID		15
+#define	MIPS_CP0_CONFIG		16
+#define	MIPS_CP0_LLADDR		17
+#define	MIPS_CP0_WATCH_LO	18
+#define	MIPS_CP0_WATCH_HI	19
+#define	MIPS_CP0_TLB_XCONTEXT	20
+#define	MIPS_CP0_ECC		26
+#define	MIPS_CP0_CACHE_ERR	27
+#define	MIPS_CP0_TAG_LO		28
+#define	MIPS_CP0_TAG_HI		29
+#define	MIPS_CP0_ERROR_PC	30
+#define	MIPS_CP0_DEBUG		23
+#define	MIPS_CP0_DEPC		24
+#define	MIPS_CP0_PERFCNT	25
+#define	MIPS_CP0_ERRCTL		26
+#define	MIPS_CP0_DATA_LO	28
+#define	MIPS_CP0_DATA_HI	29
+#define	MIPS_CP0_DESAVE		31
+
+#define MIPS_CP0_CONFIG_SEL	0
+#define MIPS_CP0_CONFIG1_SEL	1
+#define MIPS_CP0_CONFIG2_SEL	2
+#define MIPS_CP0_CONFIG3_SEL	3
+
+/* Config0 register bits */
+#define CP0C0_M    31
+#define CP0C0_K23  28
+#define CP0C0_KU   25
+#define CP0C0_MDU  20
+#define CP0C0_MM   17
+#define CP0C0_BM   16
+#define CP0C0_BE   15
+#define CP0C0_AT   13
+#define CP0C0_AR   10
+#define CP0C0_MT   7
+#define CP0C0_VI   3
+#define CP0C0_K0   0
+
+/* Config1 register bits */
+#define CP0C1_M    31
+#define CP0C1_MMU  25
+#define CP0C1_IS   22
+#define CP0C1_IL   19
+#define CP0C1_IA   16
+#define CP0C1_DS   13
+#define CP0C1_DL   10
+#define CP0C1_DA   7
+#define CP0C1_C2   6
+#define CP0C1_MD   5
+#define CP0C1_PC   4
+#define CP0C1_WR   3
+#define CP0C1_CA   2
+#define CP0C1_EP   1
+#define CP0C1_FP   0
+
+/* Config2 Register bits */
+#define CP0C2_M    31
+#define CP0C2_TU   28
+#define CP0C2_TS   24
+#define CP0C2_TL   20
+#define CP0C2_TA   16
+#define CP0C2_SU   12
+#define CP0C2_SS   8
+#define CP0C2_SL   4
+#define CP0C2_SA   0
+
+/* Config3 Register bits */
+#define CP0C3_M    31
+#define CP0C3_ISA_ON_EXC 16
+#define CP0C3_ULRI  13
+#define CP0C3_DSPP 10
+#define CP0C3_LPA  7
+#define CP0C3_VEIC 6
+#define CP0C3_VInt 5
+#define CP0C3_SP   4
+#define CP0C3_MT   2
+#define CP0C3_SM   1
+#define CP0C3_TL   0
+
+/* Have config1, Cacheable, noncoherent, write-back, write allocate */
+#define MIPS_CONFIG0 ((1 << CP0C0_M) | (0x3 << CP0C0_K0))
+
+/*
+ * Have config2, no coprocessor2 attached, no MDMX support attached,
+ * no performance counters, no watch registers, no code compression,
+ * EJTAG present, no FPU
+ */
+#define MIPS_CONFIG1							\
+	((1 << CP0C1_M) |						\
+	 (0 << CP0C1_C2) | (0 << CP0C1_MD) | (0 << CP0C1_PC) |		\
+	 (0 << CP0C1_WR) | (0 << CP0C1_CA) | (1 << CP0C1_EP) |		\
+	 (0 << CP0C1_FP))
+
+/* Have config3, no tertiary/secondary caches implemented */
+#define MIPS_CONFIG2 ((1 << CP0C2_M))
+
+/*
+ * No config4, no DSP ASE, no large physaddr (PABITS), no external
+ * interrupt controller, no vectored interrupts, no 1kb pages, no
+ * SmartMIPS ASE, no trace logic
+ */
+#define MIPS_CONFIG3							\
+	((0 << CP0C3_M) | (0 << CP0C3_DSPP) | (0 << CP0C3_LPA) |	\
+	 (0 << CP0C3_VEIC) | (0 << CP0C3_VInt) | (0 << CP0C3_SP) |	\
+	 (0 << CP0C3_SM) | (0 << CP0C3_TL))
+
+/* MMU types, the first four entries have the same layout as the
+   CP0C0_MT field.  */
+enum mips_mmu_types {
+	MMU_TYPE_NONE,
+	MMU_TYPE_R4000,
+	MMU_TYPE_RESERVED,
+	MMU_TYPE_FMT,
+	MMU_TYPE_R3000,
+	MMU_TYPE_R6000,
+	MMU_TYPE_R8000
+};
+
+/*
+ * Trap codes
+ */
+#define T_INT           0	/* Interrupt pending */
+#define T_TLB_MOD       1	/* TLB modified fault */
+#define T_TLB_LD_MISS       2	/* TLB miss on load or ifetch */
+#define T_TLB_ST_MISS       3	/* TLB miss on a store */
+#define T_ADDR_ERR_LD       4	/* Address error on a load or ifetch */
+#define T_ADDR_ERR_ST       5	/* Address error on a store */
+#define T_BUS_ERR_IFETCH    6	/* Bus error on an ifetch */
+#define T_BUS_ERR_LD_ST     7	/* Bus error on a load or store */
+#define T_SYSCALL       8	/* System call */
+#define T_BREAK         9	/* Breakpoint */
+#define T_RES_INST      10	/* Reserved instruction exception */
+#define T_COP_UNUSABLE      11	/* Coprocessor unusable */
+#define T_OVFLOW        12	/* Arithmetic overflow */
+
+/*
+ * Trap definitions added for r4000 port.
+ */
+#define T_TRAP          13	/* Trap instruction */
+#define T_VCEI          14	/* Virtual coherency exception */
+#define T_FPE           15	/* Floating point exception */
+#define T_WATCH         23	/* Watch address reference */
+#define T_VCED          31	/* Virtual coherency data */
+
+/* Resume Flags */
+#define RESUME_FLAG_DR          (1<<0)	/* Reload guest nonvolatile state? */
+#define RESUME_FLAG_HOST        (1<<1)	/* Resume host? */
+
+#define RESUME_GUEST            0
+#define RESUME_GUEST_DR         RESUME_FLAG_DR
+#define RESUME_HOST             RESUME_FLAG_HOST
+
+enum emulation_result {
+	EMULATE_DONE,		/* no further processing */
+	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
+	EMULATE_FAIL,		/* can't emulate this instruction */
+	EMULATE_WAIT,		/* WAIT instruction */
+	EMULATE_PRIV_FAIL,
+};
+
+#define MIPS3_PG_G  0x00000001	/* Global; ignore ASID if in lo0 & lo1 */
+#define MIPS3_PG_V  0x00000002	/* Valid */
+#define MIPS3_PG_NV 0x00000000
+#define MIPS3_PG_D  0x00000004	/* Dirty */
+
+#define mips3_paddr_to_tlbpfn(x)					\
+	(((unsigned long)(x) >> MIPS3_PG_SHIFT) & MIPS3_PG_FRAME)
+#define mips3_tlbpfn_to_paddr(x)					\
+	((unsigned long)((x) & MIPS3_PG_FRAME) << MIPS3_PG_SHIFT)
+
+#define MIPS3_PG_SHIFT      6
+#define MIPS3_PG_FRAME      0x3fffffc0
+
+#define VPN2_MASK           0xffffe000
+#define TLB_IS_GLOBAL(x)    (((x).tlb_lo0 & MIPS3_PG_G) && ((x).tlb_lo1 & MIPS3_PG_G))
+#define TLB_VPN2(x)         ((x).tlb_hi & VPN2_MASK)
+#define TLB_ASID(x)         ((x).tlb_hi & ASID_MASK)
+#define TLB_IS_VALID(x, va) (((va) & (1 << PAGE_SHIFT)) ? ((x).tlb_lo1 & MIPS3_PG_V) : ((x).tlb_lo0 & MIPS3_PG_V))
+
+struct kvm_mips_te {
+	/* Guest GVA->HPA page table */
+	unsigned long *guest_pmap;
+	unsigned long guest_pmap_npages;
+
+	/* Wired host TLB used for the commpage */
+	int commpage_tlb;
+};
+
+struct kvm_mips_tlb {
+	long tlb_mask;
+	long tlb_hi;
+	long tlb_lo0;
+	long tlb_lo1;
+};
+
+#define KVM_MIPS_GUEST_TLB_SIZE     64
+
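How the TLB macros above combine in a software guest-TLB probe; the
interface mirrors the kvm_mips_guest_tlb_lookup() declaration further
down, but the body here is a sketch, not the patch's implementation:

static int probe_guest_tlb(const struct kvm_mips_tlb *tlb, int n,
			   unsigned long entryhi)
{
	int i;

	for (i = 0; i < n; i++) {
		if (TLB_VPN2(tlb[i]) == (entryhi & VPN2_MASK) &&
		    (TLB_IS_GLOBAL(tlb[i]) ||
		     TLB_ASID(tlb[i]) == (entryhi & ASID_MASK)))
			return i;	/* matching entry */
	}
	return -1;			/* miss */
}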
+/* Trap and Emulate VCPU state. */
+struct kvm_mips_vcpu_te {
+	struct kvm_vcpu *vcpu;
+	struct kvm_mips_te *kvm_mips_te;
+	void *host_ebase, *guest_ebase;
+	unsigned long host_stack;
+	unsigned long host_gp;
+
+	/* Host CP0 registers used when handling exits from guest */
+	unsigned long host_cp0_badvaddr;
+	unsigned long host_cp0_cause;
+	unsigned long host_cp0_epc;
+	unsigned long host_cp0_entryhi;
+	uint32_t guest_inst;
+
+	/* COP0 State */
+	struct mips_coproc *cop0;
+
+	/* Host KSEG0 address of the EI/DI offset */
+	void *kseg0_commpage;
+
+	u32 io_gpr;		/* GPR used as IO source/target */
+
+	/* Used to calibrate the virtual count register for the guest */
+	int32_t host_cp0_count;
+
+	/* Bitmask of exceptions that are pending */
+	unsigned long pending_exceptions;
+
+	/* Bitmask of pending exceptions to be cleared */
+	unsigned long pending_exceptions_clr;
+
+	unsigned long pending_load_cause;
+
+	/*
+	 * Save/Restore the entryhi register when we are
+	 * preempted/scheduled back in
+	 */
+	unsigned long preempt_entryhi;
+
+	/* S/W Based TLB for guest */
+	struct kvm_mips_tlb guest_tlb[KVM_MIPS_GUEST_TLB_SIZE];
+
+	/* Cached guest kernel/user ASIDs */
+	uint32_t guest_user_asid[NR_CPUS];
+	uint32_t guest_kernel_asid[NR_CPUS];
+	struct mm_struct guest_kernel_mm, guest_user_mm;
+
+	struct kvm_mips_tlb shadow_tlb[NR_CPUS][KVM_MIPS_GUEST_TLB_SIZE];
+
+
+	struct hrtimer comparecount_timer;
+
+	int last_sched_cpu;
+
+	/* WAIT executed */
+	int wait;
+};
+
+
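For reference, a C rendering of the per-CPU ASID selection that
kvm_locore.S performs on guest entry using the two arrays above
(illustrative; the real work stays in assembly):

static u32 select_guest_asid(struct kvm_vcpu *vcpu, int cpu, int kernel)
{
	struct kvm_mips_vcpu_te *te = vcpu->arch.impl;

	return kernel ? te->guest_kernel_asid[cpu]
		      : te->guest_user_asid[cpu];
}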
+#define kvm_read_c0_guest_index(cop0)               (cop0->reg[MIPS_CP0_TLB_INDEX][0])
+#define kvm_write_c0_guest_index(cop0, val)         (cop0->reg[MIPS_CP0_TLB_INDEX][0] = val)
+#define kvm_read_c0_guest_entrylo0(cop0)            (cop0->reg[MIPS_CP0_TLB_LO0][0])
+#define kvm_read_c0_guest_entrylo1(cop0)            (cop0->reg[MIPS_CP0_TLB_LO1][0])
+#define kvm_read_c0_guest_context(cop0)             (cop0->reg[MIPS_CP0_TLB_CONTEXT][0])
+#define kvm_write_c0_guest_context(cop0, val)       (cop0->reg[MIPS_CP0_TLB_CONTEXT][0] = (val))
+#define kvm_read_c0_guest_userlocal(cop0)           (cop0->reg[MIPS_CP0_TLB_CONTEXT][2])
+#define kvm_read_c0_guest_pagemask(cop0)            (cop0->reg[MIPS_CP0_TLB_PG_MASK][0])
+#define kvm_write_c0_guest_pagemask(cop0, val)      (cop0->reg[MIPS_CP0_TLB_PG_MASK][0] = (val))
+#define kvm_read_c0_guest_wired(cop0)               (cop0->reg[MIPS_CP0_TLB_WIRED][0])
+#define kvm_write_c0_guest_wired(cop0, val)         (cop0->reg[MIPS_CP0_TLB_WIRED][0] = (val))
+#define kvm_read_c0_guest_badvaddr(cop0)            (cop0->reg[MIPS_CP0_BAD_VADDR][0])
+#define kvm_write_c0_guest_badvaddr(cop0, val)      (cop0->reg[MIPS_CP0_BAD_VADDR][0] = (val))
+#define kvm_read_c0_guest_count(cop0)               (cop0->reg[MIPS_CP0_COUNT][0])
+#define kvm_write_c0_guest_count(cop0, val)         (cop0->reg[MIPS_CP0_COUNT][0] = (val))
+#define kvm_read_c0_guest_entryhi(cop0)             (cop0->reg[MIPS_CP0_TLB_HI][0])
+#define kvm_write_c0_guest_entryhi(cop0, val)       (cop0->reg[MIPS_CP0_TLB_HI][0] = (val))
+#define kvm_read_c0_guest_compare(cop0)             (cop0->reg[MIPS_CP0_COMPARE][0])
+#define kvm_write_c0_guest_compare(cop0, val)       (cop0->reg[MIPS_CP0_COMPARE][0] = (val))
+#define kvm_read_c0_guest_status(cop0)              (cop0->reg[MIPS_CP0_STATUS][0])
+#define kvm_write_c0_guest_status(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][0] = (val))
+#define kvm_read_c0_guest_intctl(cop0)              (cop0->reg[MIPS_CP0_STATUS][1])
+#define kvm_write_c0_guest_intctl(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][1] = (val))
+#define kvm_read_c0_guest_cause(cop0)               (cop0->reg[MIPS_CP0_CAUSE][0])
+#define kvm_write_c0_guest_cause(cop0, val)         (cop0->reg[MIPS_CP0_CAUSE][0] = (val))
+#define kvm_read_c0_guest_epc(cop0)                 (cop0->reg[MIPS_CP0_EXC_PC][0])
+#define kvm_write_c0_guest_epc(cop0, val)           (cop0->reg[MIPS_CP0_EXC_PC][0] = (val))
+#define kvm_read_c0_guest_prid(cop0)                (cop0->reg[MIPS_CP0_PRID][0])
+#define kvm_write_c0_guest_prid(cop0, val)          (cop0->reg[MIPS_CP0_PRID][0] = (val))
+#define kvm_read_c0_guest_ebase(cop0)               (cop0->reg[MIPS_CP0_PRID][1])
+#define kvm_write_c0_guest_ebase(cop0, val)         (cop0->reg[MIPS_CP0_PRID][1] = (val))
+#define kvm_read_c0_guest_config(cop0)              (cop0->reg[MIPS_CP0_CONFIG][0])
+#define kvm_read_c0_guest_config1(cop0)             (cop0->reg[MIPS_CP0_CONFIG][1])
+#define kvm_read_c0_guest_config2(cop0)             (cop0->reg[MIPS_CP0_CONFIG][2])
+#define kvm_read_c0_guest_config3(cop0)             (cop0->reg[MIPS_CP0_CONFIG][3])
+#define kvm_read_c0_guest_config7(cop0)             (cop0->reg[MIPS_CP0_CONFIG][7])
+#define kvm_write_c0_guest_config(cop0, val)        (cop0->reg[MIPS_CP0_CONFIG][0] = (val))
+#define kvm_write_c0_guest_config1(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][1] = (val))
+#define kvm_write_c0_guest_config2(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][2] = (val))
+#define kvm_write_c0_guest_config3(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][3] = (val))
+#define kvm_write_c0_guest_config7(cop0, val)       (cop0->reg[MIPS_CP0_CONFIG][7] = (val))
+#define kvm_read_c0_guest_errorepc(cop0)            (cop0->reg[MIPS_CP0_ERROR_PC][0])
+#define kvm_write_c0_guest_errorepc(cop0, val)      (cop0->reg[MIPS_CP0_ERROR_PC][0] = (val))
+
+#define kvm_set_c0_guest_status(cop0, val)          (cop0->reg[MIPS_CP0_STATUS][0] |= (val))
+#define kvm_clear_c0_guest_status(cop0, val)        (cop0->reg[MIPS_CP0_STATUS][0] &= ~(val))
+#define kvm_set_c0_guest_cause(cop0, val)           (cop0->reg[MIPS_CP0_CAUSE][0] |= (val))
+#define kvm_clear_c0_guest_cause(cop0, val)         (cop0->reg[MIPS_CP0_CAUSE][0] &= ~(val))
+#define kvm_change_c0_guest_cause(cop0, change, val)	\
+{							\
+	kvm_clear_c0_guest_cause(cop0, change);		\
+	kvm_set_c0_guest_cause(cop0, ((val) & (change)));	\
+}
+#define kvm_set_c0_guest_ebase(cop0, val)           (cop0->reg[MIPS_CP0_PRID][1] |= (val))
+#define kvm_clear_c0_guest_ebase(cop0, val)         (cop0->reg[MIPS_CP0_PRID][1] &= ~(val))
+#define kvm_change_c0_guest_ebase(cop0, change, val)	\
+{							\
+	kvm_clear_c0_guest_ebase(cop0, change);		\
+	kvm_set_c0_guest_ebase(cop0, ((val) & (change)));	\
+}
+
+
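Usage sketch for the set/clear accessors above, using the standard
Cause.TI bit from <asm/mipsregs.h>:

	kvm_set_c0_guest_cause(cop0, CAUSEF_TI);	/* queue timer irq */
	kvm_clear_c0_guest_cause(cop0, CAUSEF_TI);	/* retire it */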
+struct kvm_mips_callbacks {
+	int (*handle_cop_unusable) (struct kvm_vcpu *vcpu);
+	int (*handle_tlb_mod) (struct kvm_vcpu *vcpu);
+	int (*handle_tlb_ld_miss) (struct kvm_vcpu *vcpu);
+	int (*handle_tlb_st_miss) (struct kvm_vcpu *vcpu);
+	int (*handle_addr_err_st) (struct kvm_vcpu *vcpu);
+	int (*handle_addr_err_ld) (struct kvm_vcpu *vcpu);
+	int (*handle_syscall) (struct kvm_vcpu *vcpu);
+	int (*handle_res_inst) (struct kvm_vcpu *vcpu);
+	int (*handle_break) (struct kvm_vcpu *vcpu);
+	int (*vm_init) (struct kvm *kvm);
+	int (*vcpu_init) (struct kvm_vcpu *vcpu);
+	int (*vcpu_setup) (struct kvm_vcpu *vcpu);
+	gpa_t (*gva_to_gpa) (gva_t gva);
+	void (*queue_timer_int) (struct kvm_vcpu *vcpu);
+	void (*dequeue_timer_int) (struct kvm_vcpu *vcpu);
+	void (*queue_io_int) (struct kvm_vcpu *vcpu,
+			      struct kvm_mips_interrupt *irq);
+	void (*dequeue_io_int) (struct kvm_vcpu *vcpu,
+				struct kvm_mips_interrupt *irq);
+	int (*irq_deliver) (struct kvm_vcpu *vcpu, unsigned int priority,
+			    u32 cause);
+	int (*irq_clear) (struct kvm_vcpu *vcpu, unsigned int priority,
+			  u32 cause);
+};
+extern struct kvm_mips_callbacks *kvm_mips_callbacks;
+int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks);
+
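The registration flow: the emulation backend fills in a static callback
table and hands it back through kvm_mips_emulation_init(). A sketch of
the shape (the table and handler names here are hypothetical):

static int te_handle_syscall(struct kvm_vcpu *vcpu);	/* hypothetical */
static gpa_t te_gva_to_gpa(gva_t gva);			/* hypothetical */

static struct kvm_mips_callbacks te_callbacks = {
	.handle_syscall	= te_handle_syscall,
	.gva_to_gpa	= te_gva_to_gpa,
};

int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
{
	*install_callbacks = &te_callbacks;
	return 0;
}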
+/* Debug: dump vcpu state */
+int kvm_mips_te_vcpu_dump_regs(struct kvm_vcpu *vcpu);
+
+/* Trampoline ASM routine to start running in "Guest" context */
+int __kvm_mips_vcpu_run(struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+/* TLB handling */
+u32 kvm_get_kernel_asid(struct kvm_vcpu *vcpu);
+
+u32 kvm_get_user_asid(struct kvm_vcpu *vcpu);
+
+u32 kvm_get_commpage_asid(struct kvm_vcpu *vcpu);
+
+int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
+				    struct kvm_vcpu *vcpu);
+
+int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
+				       struct kvm_vcpu *vcpu);
+
+int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
+					 struct kvm_mips_tlb *tlb,
+					 unsigned long *hpa0,
+					 unsigned long *hpa1);
+
+enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause, u32 *opc,
+					      struct kvm_run *run,
+					      struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause, u32 *opc,
+					     struct kvm_run *run,
+					     struct kvm_vcpu *vcpu);
+
+void kvm_mips_dump_host_tlbs(void);
+void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu);
+void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu);
+void kvm_mips_flush_host_tlb(int skip_kseg0);
+int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long entryhi);
+int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index);
+
+int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long entryhi);
+int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr);
+unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
+						    unsigned long gva);
+void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
+			     struct kvm_vcpu *vcpu);
+void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu);
+void kvm_shadow_tlb_load(struct kvm_vcpu *vcpu);
+void kvm_local_flush_tlb_all(void);
+void kvm_mips_init_shadow_tlb(struct kvm_vcpu *vcpu);
+void kvm_mips_alloc_new_mmu_context(struct kvm_vcpu *vcpu);
+void kvm_mips_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void kvm_mips_vcpu_put(struct kvm_vcpu *vcpu);
+void kvm_mips_te_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+void kvm_mips_te_vcpu_put(struct kvm_vcpu *vcpu);
+int kvm_mips_te_arch_init(void *opaque);
+int kvm_mips_te_init_vm(struct kvm *kvm, unsigned long type);
+/* Emulation */
+u32 kvm_get_inst(u32 *opc, struct kvm_vcpu *vcpu);
+enum emulation_result update_pc(struct kvm_vcpu *vcpu, u32 cause);
+
+enum emulation_result kvm_mips_emulate_inst(unsigned long cause, u32 *opc,
+					    struct kvm_run *run,
+					    struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_syscall(unsigned long cause, u32 *opc,
+					       struct kvm_run *run,
+					       struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause, u32 *opc,
+						  struct kvm_run *run,
+						  struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause, u32 *opc,
+						 struct kvm_run *run,
+						 struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause, u32 *opc,
+						  struct kvm_run *run,
+						  struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause, u32 *opc,
+						 struct kvm_run *run,
+						 struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause, u32 *opc,
+					      struct kvm_run *run,
+					      struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_fpu_exc(unsigned long cause, u32 *opc,
+					       struct kvm_run *run,
+					       struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_handle_ri(unsigned long cause, u32 *opc,
+					 struct kvm_run *run,
+					 struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_ri_exc(unsigned long cause, u32 *opc,
+					      struct kvm_run *run,
+					      struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_bp_exc(unsigned long cause, u32 *opc,
+					      struct kvm_run *run,
+					      struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
+						  struct kvm_run *run);
+
+enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_check_privilege(unsigned long cause, u32 *opc,
+					       struct kvm_run *run,
+					       struct kvm_vcpu *vcpu);
+
+enum emulation_result kvm_mips_emulate_cache(u32 inst, u32 *opc, u32 cause,
+					     struct kvm_run *run,
+					     struct kvm_vcpu *vcpu);
+enum emulation_result kvm_mips_emulate_CP0(u32 inst, u32 *opc, u32 cause,
+					   struct kvm_run *run,
+					   struct kvm_vcpu *vcpu);
+enum emulation_result kvm_mips_emulate_store(u32 inst, u32 cause,
+					     struct kvm_run *run,
+					     struct kvm_vcpu *vcpu);
+enum emulation_result kvm_mips_emulate_load(u32 inst, u32 cause,
+					    struct kvm_run *run,
+					    struct kvm_vcpu *vcpu);
+
+/* Dynamic binary translation */
+int kvm_mips_trans_cache_index(u32 inst, u32 *opc, struct kvm_vcpu *vcpu);
+int kvm_mips_trans_cache_va(u32 inst, u32 *opc, struct kvm_vcpu *vcpu);
+int kvm_mips_trans_mfc0(u32 inst, u32 *opc, struct kvm_vcpu *vcpu);
+int kvm_mips_trans_mtc0(u32 inst, u32 *opc, struct kvm_vcpu *vcpu);
+
+/* Misc */
+void mips32_SyncICache(unsigned long addr, unsigned long size);
+int kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
+unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
+
+#endif /* __KVM_MIPS_EMUL_H__ */
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index a0aa12c..5a9222e 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -18,6 +18,7 @@
 #include <asm/processor.h>
 
 #include <linux/kvm_host.h>
+#include <asm/kvm_mips_te.h>
 
 void output_ptreg_defines(void)
 {
@@ -331,6 +332,7 @@ void output_pbe_defines(void)
 }
 #endif
 
+#if IS_ENABLED(CONFIG_KVM)
 void output_kvm_defines(void)
 {
 	COMMENT(" KVM/MIPS Specfic offsets. ");
@@ -338,59 +340,59 @@ void output_kvm_defines(void)
 	OFFSET(VCPU_RUN, kvm_vcpu, run);
 	OFFSET(VCPU_HOST_ARCH, kvm_vcpu, arch);
 
-	OFFSET(VCPU_HOST_EBASE, kvm_vcpu_arch, host_ebase);
-	OFFSET(VCPU_GUEST_EBASE, kvm_vcpu_arch, guest_ebase);
 
-	OFFSET(VCPU_HOST_STACK, kvm_vcpu_arch, host_stack);
-	OFFSET(VCPU_HOST_GP, kvm_vcpu_arch, host_gp);
+	OFFSET(KVM_VCPU_ARCH_R0, kvm_vcpu, arch.gprs[0]);
+	OFFSET(KVM_VCPU_ARCH_R1, kvm_vcpu, arch.gprs[1]);
+	OFFSET(KVM_VCPU_ARCH_R2, kvm_vcpu, arch.gprs[2]);
+	OFFSET(KVM_VCPU_ARCH_R3, kvm_vcpu, arch.gprs[3]);
+	OFFSET(KVM_VCPU_ARCH_R4, kvm_vcpu, arch.gprs[4]);
+	OFFSET(KVM_VCPU_ARCH_R5, kvm_vcpu, arch.gprs[5]);
+	OFFSET(KVM_VCPU_ARCH_R6, kvm_vcpu, arch.gprs[6]);
+	OFFSET(KVM_VCPU_ARCH_R7, kvm_vcpu, arch.gprs[7]);
+	OFFSET(KVM_VCPU_ARCH_R8, kvm_vcpu, arch.gprs[8]);
+	OFFSET(KVM_VCPU_ARCH_R9, kvm_vcpu, arch.gprs[9]);
+	OFFSET(KVM_VCPU_ARCH_R10, kvm_vcpu, arch.gprs[10]);
+	OFFSET(KVM_VCPU_ARCH_R11, kvm_vcpu, arch.gprs[11]);
+	OFFSET(KVM_VCPU_ARCH_R12, kvm_vcpu, arch.gprs[12]);
+	OFFSET(KVM_VCPU_ARCH_R13, kvm_vcpu, arch.gprs[13]);
+	OFFSET(KVM_VCPU_ARCH_R14, kvm_vcpu, arch.gprs[14]);
+	OFFSET(KVM_VCPU_ARCH_R15, kvm_vcpu, arch.gprs[15]);
+	OFFSET(KVM_VCPU_ARCH_R16, kvm_vcpu, arch.gprs[16]);
+	OFFSET(KVM_VCPU_ARCH_R17, kvm_vcpu, arch.gprs[17]);
+	OFFSET(KVM_VCPU_ARCH_R18, kvm_vcpu, arch.gprs[18]);
+	OFFSET(KVM_VCPU_ARCH_R19, kvm_vcpu, arch.gprs[19]);
+	OFFSET(KVM_VCPU_ARCH_R20, kvm_vcpu, arch.gprs[20]);
+	OFFSET(KVM_VCPU_ARCH_R21, kvm_vcpu, arch.gprs[21]);
+	OFFSET(KVM_VCPU_ARCH_R22, kvm_vcpu, arch.gprs[22]);
+	OFFSET(KVM_VCPU_ARCH_R23, kvm_vcpu, arch.gprs[23]);
+	OFFSET(KVM_VCPU_ARCH_R24, kvm_vcpu, arch.gprs[24]);
+	OFFSET(KVM_VCPU_ARCH_R25, kvm_vcpu, arch.gprs[25]);
+	OFFSET(KVM_VCPU_ARCH_R26, kvm_vcpu, arch.gprs[26]);
+	OFFSET(KVM_VCPU_ARCH_R27, kvm_vcpu, arch.gprs[27]);
+	OFFSET(KVM_VCPU_ARCH_R28, kvm_vcpu, arch.gprs[28]);
+	OFFSET(KVM_VCPU_ARCH_R29, kvm_vcpu, arch.gprs[29]);
+	OFFSET(KVM_VCPU_ARCH_R30, kvm_vcpu, arch.gprs[30]);
+	OFFSET(KVM_VCPU_ARCH_R31, kvm_vcpu, arch.gprs[31]);
+	OFFSET(KVM_VCPU_ARCH_LO, kvm_vcpu, arch.lo);
+	OFFSET(KVM_VCPU_ARCH_HI, kvm_vcpu, arch.hi);
+	OFFSET(KVM_VCPU_ARCH_EPC, kvm_vcpu, arch.epc);
+	OFFSET(KVM_VCPU_ARCH_IMPL, kvm_vcpu, arch.impl);
 
-	OFFSET(VCPU_HOST_CP0_BADVADDR, kvm_vcpu_arch, host_cp0_badvaddr);
-	OFFSET(VCPU_HOST_CP0_CAUSE, kvm_vcpu_arch, host_cp0_cause);
-	OFFSET(VCPU_HOST_EPC, kvm_vcpu_arch, host_cp0_epc);
-	OFFSET(VCPU_HOST_ENTRYHI, kvm_vcpu_arch, host_cp0_entryhi);
-
-	OFFSET(VCPU_GUEST_INST, kvm_vcpu_arch, guest_inst);
-
-	OFFSET(KVM_VCPU_ARCH_R0, kvm_vcpu_arch, gprs[0]);
-	OFFSET(KVM_VCPU_ARCH_R1, kvm_vcpu_arch, gprs[1]);
-	OFFSET(KVM_VCPU_ARCH_R2, kvm_vcpu_arch, gprs[2]);
-	OFFSET(KVM_VCPU_ARCH_R3, kvm_vcpu_arch, gprs[3]);
-	OFFSET(KVM_VCPU_ARCH_R4, kvm_vcpu_arch, gprs[4]);
-	OFFSET(KVM_VCPU_ARCH_R5, kvm_vcpu_arch, gprs[5]);
-	OFFSET(KVM_VCPU_ARCH_R6, kvm_vcpu_arch, gprs[6]);
-	OFFSET(KVM_VCPU_ARCH_R7, kvm_vcpu_arch, gprs[7]);
-	OFFSET(KVM_VCPU_ARCH_R8, kvm_vcpu_arch, gprs[8]);
-	OFFSET(KVM_VCPU_ARCH_R9, kvm_vcpu_arch, gprs[9]);
-	OFFSET(KVM_VCPU_ARCH_R10, kvm_vcpu_arch, gprs[10]);
-	OFFSET(KVM_VCPU_ARCH_R11, kvm_vcpu_arch, gprs[11]);
-	OFFSET(KVM_VCPU_ARCH_R12, kvm_vcpu_arch, gprs[12]);
-	OFFSET(KVM_VCPU_ARCH_R13, kvm_vcpu_arch, gprs[13]);
-	OFFSET(KVM_VCPU_ARCH_R14, kvm_vcpu_arch, gprs[14]);
-	OFFSET(KVM_VCPU_ARCH_R15, kvm_vcpu_arch, gprs[15]);
-	OFFSET(KVM_VCPU_ARCH_R16, kvm_vcpu_arch, gprs[16]);
-	OFFSET(KVM_VCPU_ARCH_R17, kvm_vcpu_arch, gprs[17]);
-	OFFSET(KVM_VCPU_ARCH_R18, kvm_vcpu_arch, gprs[18]);
-	OFFSET(KVM_VCPU_ARCH_R19, kvm_vcpu_arch, gprs[19]);
-	OFFSET(KVM_VCPU_ARCH_R20, kvm_vcpu_arch, gprs[20]);
-	OFFSET(KVM_VCPU_ARCH_R21, kvm_vcpu_arch, gprs[21]);
-	OFFSET(KVM_VCPU_ARCH_R22, kvm_vcpu_arch, gprs[22]);
-	OFFSET(KVM_VCPU_ARCH_R23, kvm_vcpu_arch, gprs[23]);
-	OFFSET(KVM_VCPU_ARCH_R24, kvm_vcpu_arch, gprs[24]);
-	OFFSET(KVM_VCPU_ARCH_R25, kvm_vcpu_arch, gprs[25]);
-	OFFSET(KVM_VCPU_ARCH_R26, kvm_vcpu_arch, gprs[26]);
-	OFFSET(KVM_VCPU_ARCH_R27, kvm_vcpu_arch, gprs[27]);
-	OFFSET(KVM_VCPU_ARCH_R28, kvm_vcpu_arch, gprs[28]);
-	OFFSET(KVM_VCPU_ARCH_R29, kvm_vcpu_arch, gprs[29]);
-	OFFSET(KVM_VCPU_ARCH_R30, kvm_vcpu_arch, gprs[30]);
-	OFFSET(KVM_VCPU_ARCH_R31, kvm_vcpu_arch, gprs[31]);
-	OFFSET(KVM_VCPU_ARCH_LO, kvm_vcpu_arch, lo);
-	OFFSET(KVM_VCPU_ARCH_HI, kvm_vcpu_arch, hi);
-	OFFSET(KVM_VCPU_ARCH_EPC, kvm_vcpu_arch, epc);
-	OFFSET(VCPU_COP0, kvm_vcpu_arch, cop0);
-	OFFSET(VCPU_GUEST_KERNEL_ASID, kvm_vcpu_arch, guest_kernel_asid);
-	OFFSET(VCPU_GUEST_USER_ASID, kvm_vcpu_arch, guest_user_asid);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_EBASE, kvm_mips_vcpu_te, host_ebase);
+	OFFSET(KVM_MIPS_VCPU_TE_GUEST_EBASE, kvm_mips_vcpu_te, guest_ebase);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_STACK, kvm_mips_vcpu_te, host_stack);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_GP, kvm_mips_vcpu_te, host_gp);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_CP0_BADVADDR, kvm_mips_vcpu_te, host_cp0_badvaddr);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_CP0_CAUSE, kvm_mips_vcpu_te, host_cp0_cause);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_EPC, kvm_mips_vcpu_te, host_cp0_epc);
+	OFFSET(KVM_MIPS_VCPU_TE_HOST_ENTRYHI, kvm_mips_vcpu_te, host_cp0_entryhi);
+	OFFSET(KVM_MIPS_VCPU_TE_GUEST_INST, kvm_mips_vcpu_te, guest_inst);
+	OFFSET(KVM_MIPS_VCPU_TE_COP0, kvm_mips_vcpu_te, cop0);
+	OFFSET(KVM_MIPS_VCPU_TE_GUEST_KERNEL_ASID, kvm_mips_vcpu_te, guest_kernel_asid);
+	OFFSET(KVM_MIPS_VCPU_TE_GUEST_USER_ASID, kvm_mips_vcpu_te, guest_user_asid);
 
 	OFFSET(COP0_TLB_HI, mips_coproc, reg[MIPS_CP0_TLB_HI][0]);
 	OFFSET(COP0_STATUS, mips_coproc, reg[MIPS_CP0_STATUS][0]);
 	BLANK();
 }
+#endif
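As elsewhere in asm-offsets.c, each OFFSET() entry becomes an
assembler-visible constant when this file is compiled, so kvm_locore.S
can use the symbolic names as literal displacements, e.g.
(conceptually):

	/* generated: KVM_VCPU_ARCH_IMPL == offsetof(struct kvm_vcpu, arch.impl) */
	LONG_L	t1, KVM_VCPU_ARCH_IMPL(k1)	# t1 = vcpu->arch.impl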
diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index 7c2933a..efcd366 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -122,17 +122,17 @@ FEXPORT(__kvm_mips_vcpu_run)
 	/* DDATA_LO has pointer to vcpu */
 	mtc0	a1, CP0_DDATA_LO
 
-	/* Offset into vcpu->arch */
-	addiu	k1, a1, VCPU_HOST_ARCH
+	move	k1, a1
+	LONG_L	t1, KVM_VCPU_ARCH_IMPL(k1)
 
 	/*
 	 * Save the host stack to VCPU, used for exception processing
 	 * when we exit from the Guest
 	 */
-	LONG_S	sp, VCPU_HOST_STACK(k1)
+	LONG_S	sp, KVM_MIPS_VCPU_TE_HOST_STACK(t1)
 
 	/* Save the kernel gp as well */
-	LONG_S	gp, VCPU_HOST_GP(k1)
+	LONG_S	gp, KVM_MIPS_VCPU_TE_HOST_GP(t1)
 
 	/* Setup status register for running the guest in UM, interrupts are disabled */
 	li	k0, (ST0_EXL | KSU_USER| ST0_BEV)
@@ -140,7 +140,7 @@ FEXPORT(__kvm_mips_vcpu_run)
 	ehb
 
 	/* load up the new EBASE */
-	LONG_L	k0, VCPU_GUEST_EBASE(k1)
+	LONG_L	k0, KVM_MIPS_VCPU_TE_GUEST_EBASE(t1)
 	mtc0	k0, CP0_EBASE
 
 	/*
@@ -159,13 +159,13 @@ FEXPORT(__kvm_mips_vcpu_run)
 	LONG_L	t0, KVM_VCPU_ARCH_EPC(k1)
 	mtc0	t0, CP0_EPC
 
-FEXPORT(__kvm_mips_load_asid)
 	/* Set the ASID for the Guest Kernel */
 	sll	t0, t0, 1	/* with kseg0 @ 0x40000000, kernel */
 			        /* addresses shift to 0x80000000 */
+	LONG_L	v0, KVM_VCPU_ARCH_IMPL(k1)
 	bltz	t0, 1f		/* If kernel */
-	 addiu	t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-	addiu	t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	 addiu	t1, v0, KVM_MIPS_VCPU_TE_GUEST_KERNEL_ASID	/* (BD)  */
+	addiu	t1, v0, KVM_MIPS_VCPU_TE_GUEST_USER_ASID	/* else user */
 1:
 	     /* t1: contains the base of the ASID array, need to get the cpu id  */
 	LONG_L	t2, TI_CPU($28)             /* smp_processor_id */
@@ -222,7 +222,6 @@ FEXPORT(__kvm_mips_load_asid)
 	LONG_L	k0, KVM_VCPU_ARCH_HI(k1)
 	mthi	k0
 
-FEXPORT(__kvm_mips_load_k0k1)
 	/* Restore the guest's k0/k1 registers */
 	LONG_L	k0, KVM_VCPU_ARCH_R26(k1)
 	LONG_L	k1, KVM_VCPU_ARCH_R27(k1)
@@ -264,7 +263,6 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 
 	/* Get the VCPU pointer from DDTATA_LO */
 	mfc0	k1, CP0_DDATA_LO
-	addiu	k1, k1, VCPU_HOST_ARCH
 
 	/* Start saving Guest context to VCPU */
 	LONG_S	$0, KVM_VCPU_ARCH_R0(k1)
@@ -334,17 +332,18 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 
 	/* Save Host level EPC, BadVaddr and Cause to VCPU, useful to
 	 * process the exception */
+	LONG_L	t1, KVM_VCPU_ARCH_IMPL(k1)
 	mfc0	k0,CP0_EPC
 	LONG_S	k0, KVM_VCPU_ARCH_EPC(k1)
 
 	mfc0	k0, CP0_BADVADDR
-	LONG_S	k0, VCPU_HOST_CP0_BADVADDR(k1)
+	LONG_S	k0, KVM_MIPS_VCPU_TE_HOST_CP0_BADVADDR(t1)
 
 	mfc0	k0, CP0_CAUSE
-	LONG_S	k0, VCPU_HOST_CP0_CAUSE(k1)
+	LONG_S	k0, KVM_MIPS_VCPU_TE_HOST_CP0_CAUSE(t1)
 
 	mfc0	k0, CP0_ENTRYHI
-	LONG_S	k0, VCPU_HOST_ENTRYHI(k1)
+	LONG_S	k0, KVM_MIPS_VCPU_TE_HOST_ENTRYHI(t1)
 
 	/* Now restore the host state just enough to run the handlers */
 
@@ -359,8 +358,8 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	mtc0	k0, CP0_STATUS
 	ehb
 
-	LONG_L	k0, VCPU_HOST_EBASE(k1)
-	mtc0	k0,CP0_EBASE
+	LONG_L	k0, KVM_MIPS_VCPU_TE_HOST_EBASE(t1)
+	mtc0	k0, CP0_EBASE
 
 
 	/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
@@ -372,10 +371,10 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	ehb
 
 	/* Load up host GP */
-	LONG_L	gp, VCPU_HOST_GP(k1)
+	LONG_L	gp, KVM_MIPS_VCPU_TE_HOST_GP(t1)
 
 	/* Need a stack before we can jump to "C" */
-	LONG_L	sp, VCPU_HOST_STACK(k1)
+	LONG_L	sp, KVM_MIPS_VCPU_TE_HOST_STACK(t1)
 
 	/* Saved host state */
 	addiu	sp,sp, -PT_SIZE
@@ -397,7 +396,7 @@ FEXPORT(__kvm_mips_jump_to_handler)
 	/* XXXKYMA: not sure if this is safe, how large is the stack??
 	 * Now jump to the kvm_mips_handle_exit() to see if we can deal
 	 * with this in the kernel */
-	PTR_LA	t9, kvm_mips_handle_exit
+	PTR_LA	t9, kvm_mips_te_handle_exit
 	jalr.hb	t9
 	 addiu	sp,sp, -CALLFRAME_SIZ           /* BD Slot */
 
@@ -411,7 +410,6 @@ FEXPORT(__kvm_mips_jump_to_handler)
 	 */
 
 	move	k1, s1
-	addiu	k1, k1, VCPU_HOST_ARCH
 
 	/* Check return value, should tell us if we are returning to the
 	 * host (handle I/O etc)or resuming the guest
@@ -420,12 +418,12 @@ FEXPORT(__kvm_mips_jump_to_handler)
 	bnez	t0, __kvm_mips_return_to_host
 	 nop
 
-__kvm_mips_return_to_guest:
+	LONG_L	v0, KVM_VCPU_ARCH_IMPL(k1)
    	 /* Put the saved pointer to vcpu (s1) back into the DDATA_LO Register */
 	mtc0	s1, CP0_DDATA_LO
 
 	/* Load up the Guest EBASE to minimize the window where BEV is set */
-	LONG_L	t0, VCPU_GUEST_EBASE(k1)
+	LONG_L	t0, KVM_MIPS_VCPU_TE_GUEST_EBASE(v0)
 
 	/* Switch EBASE back to the one used by KVM */
 	mfc0	v1, CP0_STATUS
@@ -452,8 +450,8 @@ __kvm_mips_return_to_guest:
 	sll	t0, t0, 1	/* with kseg0 @ 0x40000000, kernel */
 				/* addresses shift to 0x80000000 */
 	bltz	t0, 1f		/* If kernel */
-	 addiu	t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-	addiu	t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	 addiu	t1, v0, KVM_MIPS_VCPU_TE_GUEST_KERNEL_ASID	/* (BD)  */
+	addiu	t1, v0, KVM_MIPS_VCPU_TE_GUEST_USER_ASID	/* else user */
 1:
 	/* t1: contains the base of the ASID array, need to get the cpu id  */
 	LONG_L	t2, TI_CPU($28)		/* smp_processor_id */
@@ -514,8 +512,9 @@ FEXPORT(__kvm_mips_skip_guest_restore)
 	eret
 
 __kvm_mips_return_to_host:
+	LONG_L	t1, KVM_VCPU_ARCH_IMPL(k1)
 	/* EBASE is already pointing to Linux */
-	LONG_L	k1, VCPU_HOST_STACK(k1)
+	LONG_L	k1, KVM_MIPS_VCPU_TE_HOST_STACK(t1)
 	addiu	k1,k1, -PT_SIZE
 
 	/* Restore host DDATA_LO */
diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index 4ac5ab4..041caad 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -15,21 +15,12 @@
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
 #include <linux/bootmem.h>
+#include <linux/kvm_host.h>
+
 #include <asm/page.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
-
-#include <linux/kvm_host.h>
-
-#include "kvm_mips_int.h"
-#include "kvm_mips_comm.h"
-
-#define CREATE_TRACE_POINTS
-#include "trace.h"
-
-#ifndef VECTORSPACING
-#define VECTORSPACING 0x100	/* for EI/VI mode */
-#endif
+#include <asm/kvm_mips_te.h>
 
 #define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
 struct kvm_stats_debugfs_item debugfs_entries[] = {
@@ -51,16 +42,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{NULL}
 };
 
-static int kvm_mips_reset_vcpu(struct kvm_vcpu *vcpu)
-{
-	int i;
-	for_each_possible_cpu(i) {
-		vcpu->arch.guest_kernel_asid[i] = 0;
-		vcpu->arch.guest_user_asid[i] = 0;
-	}
-	return 0;
-}
-
 gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
 {
 	return gfn;
@@ -71,7 +52,7 @@ gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
  */
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
-	return !!(vcpu->arch.pending_exceptions);
+	return vcpu->kvm->arch.ops->vcpu_runnable(vcpu);
 }
 
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
@@ -104,92 +85,22 @@ void kvm_arch_check_processor_compat(void *rtn)
 	return;
 }
 
-static void kvm_mips_init_tlbs(struct kvm *kvm)
-{
-	unsigned long wired;
-
-	/* Add a wired entry to the TLB, it is used to map the commpage to the Guest kernel */
-	wired = read_c0_wired();
-	write_c0_wired(wired + 1);
-	mtc0_tlbw_hazard();
-	kvm->arch.commpage_tlb = wired;
-
-	kvm_debug("[%d] commpage TLB: %d\n", smp_processor_id(),
-		  kvm->arch.commpage_tlb);
-}
-
-static void kvm_mips_init_vm_percpu(void *arg)
-{
-	struct kvm *kvm = (struct kvm *)arg;
-
-	kvm_mips_init_tlbs(kvm);
-	kvm_mips_callbacks->vm_init(kvm);
-
-}
+int kvm_mips_te_init_vm(struct kvm *kvm, unsigned long type);
 
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
-	if (atomic_inc_return(&kvm_mips_instance) == 1) {
-		kvm_info("%s: 1st KVM instance, setup host TLB parameters\n",
-			 __func__);
-		on_each_cpu(kvm_mips_init_vm_percpu, kvm, 1);
-	}
-
-
-	return 0;
-}
-
-void kvm_mips_free_vcpus(struct kvm *kvm)
-{
-	unsigned int i;
-	struct kvm_vcpu *vcpu;
-
-	/* Put the pages we reserved for the guest pmap */
-	for (i = 0; i < kvm->arch.guest_pmap_npages; i++) {
-		if (kvm->arch.guest_pmap[i] != KVM_INVALID_PAGE)
-			kvm_mips_release_pfn_clean(kvm->arch.guest_pmap[i]);
-	}
-
-	if (kvm->arch.guest_pmap)
-		kfree(kvm->arch.guest_pmap);
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		kvm_arch_vcpu_free(vcpu);
-	}
-
-	mutex_lock(&kvm->lock);
-
-	for (i = 0; i < atomic_read(&kvm->online_vcpus); i++)
-		kvm->vcpus[i] = NULL;
-
-	atomic_set(&kvm->online_vcpus, 0);
-
-	mutex_unlock(&kvm->lock);
+	if (type == 0)
+		return kvm_mips_te_init_vm(kvm, type);
+	return -EINVAL;
 }
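VM type 0 selects the trap-and-emulate backend; the MIPS-VZ patches
later in the series presumably extend this dispatch with their own
type. A sketch of the eventual shape (the VZ type value and function
name are assumptions, not taken from this patch):

int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{
	switch (type) {
	case 0:
		return kvm_mips_te_init_vm(kvm, type);
	case 1:	/* hypothetical MIPS-VZ VM type */
		return kvm_mips_vz_init_vm(kvm, type);
	default:
		return -EINVAL;
	}
}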
 
 void kvm_arch_sync_events(struct kvm *kvm)
 {
 }
 
-static void kvm_mips_uninit_tlbs(void *arg)
-{
-	/* Restore wired count */
-	write_c0_wired(0);
-	mtc0_tlbw_hazard();
-	/* Clear out all the TLBs */
-	kvm_local_flush_tlb_all();
-}
-
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
-	kvm_mips_free_vcpus(kvm);
-
-	/* If this is the last instance, restore wired count */
-	if (atomic_dec_return(&kvm_mips_instance) == 0) {
-		kvm_info("%s: last KVM instance, restoring TLB parameters\n",
-			 __func__);
-		on_each_cpu(kvm_mips_uninit_tlbs, NULL, 1);
-	}
+	kvm->arch.ops->destroy_vm(kvm);
 }
 
 long
@@ -221,41 +132,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                                 const struct kvm_memory_slot *old,
                                 enum kvm_mr_change change)
 {
-	unsigned long npages = 0;
-	int i, err = 0;
-
-	kvm_debug("%s: kvm: %p slot: %d, GPA: %llx, size: %llx, QVA: %llx\n",
-		  __func__, kvm, mem->slot, mem->guest_phys_addr,
-		  mem->memory_size, mem->userspace_addr);
-
-	/* Setup Guest PMAP table */
-	if (!kvm->arch.guest_pmap) {
-		if (mem->slot == 0)
-			npages = mem->memory_size >> PAGE_SHIFT;
-
-		if (npages) {
-			kvm->arch.guest_pmap_npages = npages;
-			kvm->arch.guest_pmap =
-			    kzalloc(npages * sizeof(unsigned long), GFP_KERNEL);
-
-			if (!kvm->arch.guest_pmap) {
-				kvm_err("Failed to allocate guest PMAP");
-				err = -ENOMEM;
-				goto out;
-			}
-
-			kvm_info
-			    ("Allocated space for Guest PMAP Table (%ld pages) @ %p\n",
-			     npages, kvm->arch.guest_pmap);
-
-			/* Now setup the page table */
-			for (i = 0; i < npages; i++) {
-				kvm->arch.guest_pmap[i] = KVM_INVALID_PAGE;
-			}
-		}
-	}
-out:
-	return;
+	kvm->arch.ops->commit_memory_region(kvm, mem, old, change);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
@@ -273,123 +150,12 @@ void kvm_arch_flush_shadow(struct kvm *kvm)
 
 struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 {
-	extern char mips32_exception[], mips32_exceptionEnd[];
-	extern char mips32_GuestException[], mips32_GuestExceptionEnd[];
-	int err, size, offset;
-	void *gebase;
-	int i;
-
-	struct kvm_vcpu *vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
-
-	if (!vcpu) {
-		err = -ENOMEM;
-		goto out;
-	}
-
-	err = kvm_vcpu_init(vcpu, kvm, id);
-
-	if (err)
-		goto out_free_cpu;
-
-	kvm_info("kvm @ %p: create cpu %d at %p\n", kvm, id, vcpu);
-
-	/* Allocate space for host mode exception handlers that handle
-	 * guest mode exits
-	 */
-	if (cpu_has_veic || cpu_has_vint) {
-		size = 0x200 + VECTORSPACING * 64;
-	} else {
-		size = 0x200;
-	}
-
-	/* Save Linux EBASE */
-	vcpu->arch.host_ebase = (void *)(long)(read_c0_ebase() & 0x3ff);
-
-	gebase = kzalloc(ALIGN(size, PAGE_SIZE), GFP_KERNEL);
-
-	if (!gebase) {
-		err = -ENOMEM;
-		goto out_free_cpu;
-	}
-	kvm_info("Allocated %d bytes for KVM Exception Handlers @ %p\n",
-		 ALIGN(size, PAGE_SIZE), gebase);
-
-	/* Save new ebase */
-	vcpu->arch.guest_ebase = gebase;
-
-	/* Copy L1 Guest Exception handler to correct offset */
-
-	/* TLB Refill, EXL = 0 */
-	memcpy(gebase, mips32_exception,
-	       mips32_exceptionEnd - mips32_exception);
-
-	/* General Exception Entry point */
-	memcpy(gebase + 0x180, mips32_exception,
-	       mips32_exceptionEnd - mips32_exception);
-
-	/* For vectored interrupts poke the exception code @ all offsets 0-7 */
-	for (i = 0; i < 8; i++) {
-		kvm_debug("L1 Vectored handler @ %p\n",
-			  gebase + 0x200 + (i * VECTORSPACING));
-		memcpy(gebase + 0x200 + (i * VECTORSPACING), mips32_exception,
-		       mips32_exceptionEnd - mips32_exception);
-	}
-
-	/* General handler, relocate to unmapped space for sanity's sake */
-	offset = 0x2000;
-	kvm_info("Installing KVM Exception handlers @ %p, %#x bytes\n",
-		 gebase + offset,
-		 (unsigned)(mips32_GuestExceptionEnd - mips32_GuestException));
-
-	memcpy(gebase + offset, mips32_GuestException,
-	       mips32_GuestExceptionEnd - mips32_GuestException);
-
-	/* Invalidate the icache for these ranges */
-	mips32_SyncICache((unsigned long) gebase, ALIGN(size, PAGE_SIZE));
-
-	/* Allocate comm page for guest kernel, a TLB will be reserved for mapping GVA @ 0xFFFF8000 to this page */
-	vcpu->arch.kseg0_commpage = kzalloc(PAGE_SIZE << 1, GFP_KERNEL);
-
-	if (!vcpu->arch.kseg0_commpage) {
-		err = -ENOMEM;
-		goto out_free_gebase;
-	}
-
-	kvm_info("Allocated COMM page @ %p\n", vcpu->arch.kseg0_commpage);
-	kvm_mips_commpage_init(vcpu);
-
-	/* Init */
-	vcpu->arch.last_sched_cpu = -1;
-
-	/* Start off the timer */
-	kvm_mips_emulate_count(vcpu);
-
-	return vcpu;
-
-out_free_gebase:
-	kfree(gebase);
-
-out_free_cpu:
-	kfree(vcpu);
-
-out:
-	return ERR_PTR(err);
+	return kvm->arch.ops->vcpu_create(kvm, id);
 }
 
 void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 {
-	hrtimer_cancel(&vcpu->arch.comparecount_timer);
-
-	kvm_vcpu_uninit(vcpu);
-
-	kvm_mips_dump_stats(vcpu);
-
-	if (vcpu->arch.guest_ebase)
-		kfree(vcpu->arch.guest_ebase);
-
-	if (vcpu->arch.kseg0_commpage)
-		kfree(vcpu->arch.kseg0_commpage);
-
+	vcpu->kvm->arch.ops->vcpu_free(vcpu);
 }
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
@@ -406,70 +172,9 @@ kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	int r = 0;
-	sigset_t sigsaved;
-
-	if (vcpu->sigset_active)
-		sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
-
-	if (vcpu->mmio_needed) {
-		if (!vcpu->mmio_is_write)
-			kvm_mips_complete_mmio_load(vcpu, run);
-		vcpu->mmio_needed = 0;
-	}
-
-	/* Check if we have any exceptions/interrupts pending */
-	kvm_mips_deliver_interrupts(vcpu,
-				    kvm_read_c0_guest_cause(vcpu->arch.cop0));
-
-	local_irq_disable();
-	kvm_guest_enter();
-
-	r = __kvm_mips_vcpu_run(run, vcpu);
-
-	kvm_guest_exit();
-	local_irq_enable();
-
-	if (vcpu->sigset_active)
-		sigprocmask(SIG_SETMASK, &sigsaved, NULL);
-
-	return r;
+	return vcpu->kvm->arch.ops->vcpu_run(vcpu, run);
 }
 
-int
-kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq)
-{
-	int intr = (int)irq->irq;
-	struct kvm_vcpu *dvcpu = NULL;
-
-	if (intr == 3 || intr == -3 || intr == 4 || intr == -4)
-		kvm_debug("%s: CPU: %d, INTR: %d\n", __func__, irq->cpu,
-			  (int)intr);
-
-	if (irq->cpu == -1)
-		dvcpu = vcpu;
-	else
-		dvcpu = vcpu->kvm->vcpus[irq->cpu];
-
-	if (intr == 2 || intr == 3 || intr == 4) {
-		kvm_mips_callbacks->queue_io_int(dvcpu, irq);
-
-	} else if (intr == -2 || intr == -3 || intr == -4) {
-		kvm_mips_callbacks->dequeue_io_int(dvcpu, irq);
-	} else {
-		kvm_err("%s: invalid interrupt ioctl (%d:%d)\n", __func__,
-			irq->cpu, irq->irq);
-		return -EINVAL;
-	}
-
-	dvcpu->arch.wait = 0;
-
-	if (waitqueue_active(&dvcpu->wq)) {
-		wake_up_interruptible(&dvcpu->wq);
-	}
-
-	return 0;
-}
 
 int
 kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
@@ -568,8 +273,6 @@ static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
 			    const struct kvm_one_reg *reg)
 {
 	u64 __user *uaddr = (u64 __user *)(long)reg->addr;
-
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	s64 v;
 
 	switch (reg->id) {
@@ -585,51 +288,9 @@ static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
 	case KVM_REG_MIPS_PC:
 		v = (long)vcpu->arch.epc;
 		break;
-
-	case KVM_REG_MIPS_CP0_INDEX:
-		v = (long)kvm_read_c0_guest_index(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONTEXT:
-		v = (long)kvm_read_c0_guest_context(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_PAGEMASK:
-		v = (long)kvm_read_c0_guest_pagemask(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_WIRED:
-		v = (long)kvm_read_c0_guest_wired(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_BADVADDR:
-		v = (long)kvm_read_c0_guest_badvaddr(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_ENTRYHI:
-		v = (long)kvm_read_c0_guest_entryhi(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_STATUS:
-		v = (long)kvm_read_c0_guest_status(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CAUSE:
-		v = (long)kvm_read_c0_guest_cause(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_ERROREPC:
-		v = (long)kvm_read_c0_guest_errorepc(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONFIG:
-		v = (long)kvm_read_c0_guest_config(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONFIG1:
-		v = (long)kvm_read_c0_guest_config1(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONFIG2:
-		v = (long)kvm_read_c0_guest_config2(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONFIG3:
-		v = (long)kvm_read_c0_guest_config3(cop0);
-		break;
-	case KVM_REG_MIPS_CP0_CONFIG7:
-		v = (long)kvm_read_c0_guest_config7(cop0);
-		break;
 	default:
-		return -EINVAL;
+		return vcpu->kvm->arch.ops->get_reg(vcpu, reg);
 	}
 	return put_user(v, uaddr);
 }
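Registers shared by every backend (GPRs, HI/LO, PC) stay in the generic
switch; anything else now falls through to ops->get_reg. A sketch of
the backend half of that split, reconstructed from the CP0 cases
deleted above (illustrative only):

static int te_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
	u64 __user *uaddr = (u64 __user *)(long)reg->addr;
	struct kvm_mips_vcpu_te *te = vcpu->arch.impl;
	s64 v;

	switch (reg->id) {
	case KVM_REG_MIPS_CP0_STATUS:
		v = (long)kvm_read_c0_guest_status(te->cop0);
		break;
	default:
		return -EINVAL;
	}
	return put_user(v, uaddr);
}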
@@ -638,7 +299,6 @@ static int kvm_mips_set_reg(struct kvm_vcpu *vcpu,
 			    const struct kvm_one_reg *reg)
 {
 	u64 __user *uaddr = (u64 __user *)(long)reg->addr;
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	u64 v;
 
 	if (get_user(v, uaddr) != 0)
@@ -661,41 +321,14 @@ static int kvm_mips_set_reg(struct kvm_vcpu *vcpu,
 		vcpu->arch.epc = v;
 		break;
 
-	case KVM_REG_MIPS_CP0_INDEX:
-		kvm_write_c0_guest_index(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_CONTEXT:
-		kvm_write_c0_guest_context(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_PAGEMASK:
-		kvm_write_c0_guest_pagemask(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_WIRED:
-		kvm_write_c0_guest_wired(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_BADVADDR:
-		kvm_write_c0_guest_badvaddr(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_ENTRYHI:
-		kvm_write_c0_guest_entryhi(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_STATUS:
-		kvm_write_c0_guest_status(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_CAUSE:
-		kvm_write_c0_guest_cause(cop0, v);
-		break;
-	case KVM_REG_MIPS_CP0_ERROREPC:
-		kvm_write_c0_guest_errorepc(cop0, v);
-		break;
 	default:
-		return -EINVAL;
+		return vcpu->kvm->arch.ops->set_reg(vcpu, reg, v);
 	}
 	return 0;
 }
 
-long
-kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
+long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl,
+			 unsigned long arg)
 {
 	struct kvm_vcpu *vcpu = filp->private_data;
 	void __user *argp = (void __user *)arg;
@@ -732,28 +365,9 @@ kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 			return -EFAULT;
 		return 0;
 	}
-	case KVM_NMI:
-		/* Treat the NMI as a CPU reset */
-		r = kvm_mips_reset_vcpu(vcpu);
-		break;
-	case KVM_INTERRUPT:
-		{
-			struct kvm_mips_interrupt irq;
-			r = -EFAULT;
-			if (copy_from_user(&irq, argp, sizeof(irq)))
-				goto out;
-
-			kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__,
-				  irq.irq);
-
-			r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
-			break;
-		}
 	default:
-		r = -ENOIOCTLCMD;
+		r = vcpu->kvm->arch.ops->vcpu_ioctl(vcpu, ioctl, arg);
 	}
-
-out:
 	return r;
 }
 
@@ -807,24 +421,30 @@ long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 	return r;
 }
 
+int kvm_mips_te_arch_init(void *opaque);
+void kvm_mips_te_arch_exit(void);
+
 int kvm_arch_init(void *opaque)
 {
-	int ret;
-
-	if (kvm_mips_callbacks) {
-		kvm_err("kvm: module already exists\n");
-		return -EEXIST;
-	}
+	return kvm_mips_te_arch_init(opaque);
+}
 
-	ret = kvm_mips_emulation_init(&kvm_mips_callbacks);
+void kvm_arch_exit(void)
+{
+	kvm_mips_te_arch_exit();
+}
 
-	return ret;
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	vcpu->kvm->arch.ops->vcpu_load(vcpu, cpu);
 }
+EXPORT_SYMBOL(kvm_arch_vcpu_load);
 
-void kvm_arch_exit(void)
+void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	kvm_mips_callbacks = NULL;
+	vcpu->kvm->arch.ops->vcpu_put(vcpu);
 }
+EXPORT_SYMBOL(kvm_arch_vcpu_put);
 
 int
 kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
@@ -878,37 +498,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return kvm_mips_pending_timer(vcpu);
-}
-
-int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
-{
-	int i;
-	struct mips_coproc *cop0;
-
-	if (!vcpu)
-		return -1;
-
-	printk("VCPU Register Dump:\n");
-	printk("\tepc = 0x%08lx\n", vcpu->arch.epc);;
-	printk("\texceptions: %08lx\n", vcpu->arch.pending_exceptions);
-
-	for (i = 0; i < 32; i += 4) {
-		printk("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i,
-		       vcpu->arch.gprs[i],
-		       vcpu->arch.gprs[i + 1],
-		       vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]);
-	}
-	printk("\thi: 0x%08lx\n", vcpu->arch.hi);
-	printk("\tlo: 0x%08lx\n", vcpu->arch.lo);
-
-	cop0 = vcpu->arch.cop0;
-	printk("\tStatus: 0x%08lx, Cause: 0x%08lx\n",
-	       kvm_read_c0_guest_status(cop0), kvm_read_c0_guest_cause(cop0));
-
-	printk("\tEPC: 0x%08lx\n", kvm_read_c0_guest_epc(cop0));
-
-	return 0;
+	return vcpu->kvm->arch.ops->cpu_has_pending_timer(vcpu);
 }
 
 int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
@@ -939,40 +529,9 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
-void kvm_mips_comparecount_func(unsigned long data)
-{
-	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
-
-	kvm_mips_callbacks->queue_timer_int(vcpu);
-
-	vcpu->arch.wait = 0;
-	if (waitqueue_active(&vcpu->wq)) {
-		wake_up_interruptible(&vcpu->wq);
-	}
-}
-
-/*
- * low level hrtimer wake routine.
- */
-enum hrtimer_restart kvm_mips_comparecount_wakeup(struct hrtimer *timer)
-{
-	struct kvm_vcpu *vcpu;
-
-	vcpu = container_of(timer, struct kvm_vcpu, arch.comparecount_timer);
-	kvm_mips_comparecount_func((unsigned long) vcpu);
-	hrtimer_forward_now(&vcpu->arch.comparecount_timer,
-			    ktime_set(0, MS_TO_NS(10)));
-	return HRTIMER_RESTART;
-}
-
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
-	kvm_mips_callbacks->vcpu_init(vcpu);
-	hrtimer_init(&vcpu->arch.comparecount_timer, CLOCK_MONOTONIC,
-		     HRTIMER_MODE_REL);
-	vcpu->arch.comparecount_timer.function = kvm_mips_comparecount_wakeup;
-	kvm_mips_init_shadow_tlb(vcpu);
-	return 0;
+	return vcpu->kvm->arch.ops->vcpu_init(vcpu);
 }
 
 void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
@@ -989,208 +548,18 @@ kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, struct kvm_translation *tr)
 /* Initial guest state */
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
-	return kvm_mips_callbacks->vcpu_setup(vcpu);
-}
-
-static
-void kvm_mips_set_c0_status(void)
-{
-	uint32_t status = read_c0_status();
-
-	if (cpu_has_fpu)
-		status |= (ST0_CU1);
-
-	if (cpu_has_dsp)
-		status |= (ST0_MX);
-
-	write_c0_status(status);
-	ehb();
-}
-
-/*
- * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
- */
-int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
-{
-	uint32_t cause = vcpu->arch.host_cp0_cause;
-	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	enum emulation_result er = EMULATE_DONE;
-	int ret = RESUME_GUEST;
-
-	/* Set a default exit reason */
-	run->exit_reason = KVM_EXIT_UNKNOWN;
-	run->ready_for_interrupt_injection = 1;
-
-	/* Set the appropriate status bits based on host CPU features, before we hit the scheduler */
-	kvm_mips_set_c0_status();
-
-	local_irq_enable();
-
-	kvm_debug("kvm_mips_handle_exit: cause: %#x, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
-			cause, opc, run, vcpu);
-
-	/* Do a privilege check, if in UM most of these exit conditions end up
-	 * causing an exception to be delivered to the Guest Kernel
-	 */
-	er = kvm_mips_check_privilege(cause, opc, run, vcpu);
-	if (er == EMULATE_PRIV_FAIL) {
-		goto skip_emul;
-	} else if (er == EMULATE_FAIL) {
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		ret = RESUME_HOST;
-		goto skip_emul;
-	}
-
-	switch (exccode) {
-	case T_INT:
-		kvm_debug("[%d]T_INT @ %p\n", vcpu->vcpu_id, opc);
-
-		++vcpu->stat.int_exits;
-		trace_kvm_exit(vcpu, INT_EXITS);
-
-		if (need_resched()) {
-			cond_resched();
-		}
-
-		ret = RESUME_GUEST;
-		break;
-
-	case T_COP_UNUSABLE:
-		kvm_debug("T_COP_UNUSABLE: @ PC: %p\n", opc);
-
-		++vcpu->stat.cop_unusable_exits;
-		trace_kvm_exit(vcpu, COP_UNUSABLE_EXITS);
-		ret = kvm_mips_callbacks->handle_cop_unusable(vcpu);
-		/* XXXKYMA: Might need to return to user space */
-		if (run->exit_reason == KVM_EXIT_IRQ_WINDOW_OPEN) {
-			ret = RESUME_HOST;
-		}
-		break;
-
-	case T_TLB_MOD:
-		++vcpu->stat.tlbmod_exits;
-		trace_kvm_exit(vcpu, TLBMOD_EXITS);
-		ret = kvm_mips_callbacks->handle_tlb_mod(vcpu);
-		break;
-
-	case T_TLB_ST_MISS:
-		kvm_debug
-		    ("TLB ST fault:  cause %#x, status %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, kvm_read_c0_guest_status(vcpu->arch.cop0), opc,
-		     badvaddr);
-
-		++vcpu->stat.tlbmiss_st_exits;
-		trace_kvm_exit(vcpu, TLBMISS_ST_EXITS);
-		ret = kvm_mips_callbacks->handle_tlb_st_miss(vcpu);
-		break;
-
-	case T_TLB_LD_MISS:
-		kvm_debug("TLB LD fault: cause %#x, PC: %p, BadVaddr: %#lx\n",
-			  cause, opc, badvaddr);
-
-		++vcpu->stat.tlbmiss_ld_exits;
-		trace_kvm_exit(vcpu, TLBMISS_LD_EXITS);
-		ret = kvm_mips_callbacks->handle_tlb_ld_miss(vcpu);
-		break;
-
-	case T_ADDR_ERR_ST:
-		++vcpu->stat.addrerr_st_exits;
-		trace_kvm_exit(vcpu, ADDRERR_ST_EXITS);
-		ret = kvm_mips_callbacks->handle_addr_err_st(vcpu);
-		break;
-
-	case T_ADDR_ERR_LD:
-		++vcpu->stat.addrerr_ld_exits;
-		trace_kvm_exit(vcpu, ADDRERR_LD_EXITS);
-		ret = kvm_mips_callbacks->handle_addr_err_ld(vcpu);
-		break;
-
-	case T_SYSCALL:
-		++vcpu->stat.syscall_exits;
-		trace_kvm_exit(vcpu, SYSCALL_EXITS);
-		ret = kvm_mips_callbacks->handle_syscall(vcpu);
-		break;
-
-	case T_RES_INST:
-		++vcpu->stat.resvd_inst_exits;
-		trace_kvm_exit(vcpu, RESVD_INST_EXITS);
-		ret = kvm_mips_callbacks->handle_res_inst(vcpu);
-		break;
-
-	case T_BREAK:
-		++vcpu->stat.break_inst_exits;
-		trace_kvm_exit(vcpu, BREAK_INST_EXITS);
-		ret = kvm_mips_callbacks->handle_break(vcpu);
-		break;
-
-	default:
-		kvm_err
-		    ("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
-		     exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
-		     kvm_read_c0_guest_status(vcpu->arch.cop0));
-		kvm_arch_vcpu_dump_regs(vcpu);
-		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		ret = RESUME_HOST;
-		break;
-
-	}
-
-skip_emul:
-	local_irq_disable();
-
-	if (er == EMULATE_DONE && !(ret & RESUME_HOST))
-		kvm_mips_deliver_interrupts(vcpu, cause);
-
-	if (!(ret & RESUME_HOST)) {
-		/* Only check for signals if not already exiting to userspace  */
-		if (signal_pending(current)) {
-			run->exit_reason = KVM_EXIT_INTR;
-			ret = (-EINTR << 2) | RESUME_HOST;
-			++vcpu->stat.signal_exits;
-			trace_kvm_exit(vcpu, SIGNAL_EXITS);
-		}
-	}
-
-	return ret;
+	return vcpu->kvm->arch.ops->vcpu_setup(vcpu);
 }
 
 int __init kvm_mips_init(void)
 {
-	int ret;
-
-	ret = kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
-
-	if (ret)
-		return ret;
-
-	/* On MIPS, kernel modules are executed from "mapped space", which requires TLBs.
-	 * The TLB handling code is statically linked with the rest of the kernel (kvm_tlb.c)
-	 * to avoid the possibility of double faulting. The issue is that the TLB code
-	 * references routines that are part of the the KVM module,
-	 * which are only available once the module is loaded.
-	 */
-	kvm_mips_gfn_to_pfn = gfn_to_pfn;
-	kvm_mips_release_pfn_clean = kvm_release_pfn_clean;
-	kvm_mips_is_error_pfn = is_error_pfn;
-
-	pr_info("KVM/MIPS Initialized\n");
-	return 0;
+	return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 }
 
 void __exit kvm_mips_exit(void)
 {
 	kvm_exit();
-
-	kvm_mips_gfn_to_pfn = NULL;
-	kvm_mips_release_pfn_clean = NULL;
-	kvm_mips_is_error_pfn = NULL;
-
-	pr_info("KVM/MIPS unloaded\n");
 }
 
 module_init(kvm_mips_init);
 module_exit(kvm_mips_exit);
-
-EXPORT_TRACEPOINT_SYMBOL(kvm_exit);
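
The net effect of the kvm_mips.c changes above is that every arch hook
now dispatches through a per-VM ops table instead of calling the
trap-and-emulate code directly. As a rough standalone sketch of the
pattern (the struct and function names below are illustrative
stand-ins, not the kernel's actual definitions):

	#include <stdio.h>

	struct kvm_vcpu;

	/* One backend (trap-and-emulate or MIPS-VZ) fills this in. */
	struct mips_kvm_ops_model {
		int (*vcpu_init)(struct kvm_vcpu *vcpu);
		int (*vcpu_run)(struct kvm_vcpu *vcpu);
	};

	struct kvm_vcpu {
		const struct mips_kvm_ops_model *ops;
	};

	static int te_vcpu_init(struct kvm_vcpu *vcpu)
	{
		(void)vcpu;
		return 0;
	}

	static int te_vcpu_run(struct kvm_vcpu *vcpu)
	{
		(void)vcpu;
		printf("running via trap-and-emulate backend\n");
		return 0;
	}

	static const struct mips_kvm_ops_model te_ops = {
		.vcpu_init = te_vcpu_init,
		.vcpu_run  = te_vcpu_run,
	};

	int main(void)
	{
		struct kvm_vcpu vcpu = { .ops = &te_ops };

		/* Generic code dispatches without knowing the backend. */
		vcpu.ops->vcpu_init(&vcpu);
		return vcpu.ops->vcpu_run(&vcpu);
	}
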
diff --git a/arch/mips/kvm/kvm_mips_comm.h b/arch/mips/kvm/kvm_mips_comm.h
index a4a8c85..d087c5c 100644
--- a/arch/mips/kvm/kvm_mips_comm.h
+++ b/arch/mips/kvm/kvm_mips_comm.h
@@ -11,6 +11,7 @@
 
 #ifndef __KVM_MIPS_COMMPAGE_H__
 #define __KVM_MIPS_COMMPAGE_H__
+#include <asm/kvm_mips_te.h>
 
 struct kvm_mips_commpage {
 	struct mips_coproc cop0;	/* COP0 state is mapped into Guest kernel via commpage */
diff --git a/arch/mips/kvm/kvm_mips_commpage.c b/arch/mips/kvm/kvm_mips_commpage.c
index 3873b1e..2734348 100644
--- a/arch/mips/kvm/kvm_mips_commpage.c
+++ b/arch/mips/kvm/kvm_mips_commpage.c
@@ -22,16 +22,19 @@
 
 #include <linux/kvm_host.h>
 
+#include <asm/kvm_mips_te.h>
+
 #include "kvm_mips_comm.h"
 
 void kvm_mips_commpage_init(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mips_commpage *page = vcpu->arch.kseg0_commpage;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct kvm_mips_commpage *page = vcpu_te->kseg0_commpage;
 	memset(page, 0, sizeof(struct kvm_mips_commpage));
 
 	/* Specific init values for fields */
-	vcpu->arch.cop0 = &page->cop0;
-	memset(vcpu->arch.cop0, 0, sizeof(struct mips_coproc));
+	vcpu_te->cop0 = &page->cop0;
+	memset(vcpu_te->cop0, 0, sizeof(struct mips_coproc));
 
 	return;
 }
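
The commpage change shows the other half of the refactoring:
trap-and-emulate state that used to live directly in kvm_vcpu_arch is
now reached through an opaque impl pointer. A minimal user-space model
of that pattern, with simplified stand-in types:

	#include <stdlib.h>

	struct kvm_vcpu_arch_model {
		void *impl;	/* backend-private, opaque to common code */
	};

	struct vcpu_te_model {	/* trap-and-emulate private fields */
		unsigned long pending_exceptions;
		int wait;
	};

	static struct vcpu_te_model *te_create(struct kvm_vcpu_arch_model *arch)
	{
		struct vcpu_te_model *te = calloc(1, sizeof(*te));

		arch->impl = te;	/* backend recovers it later */
		return te;
	}

	int main(void)
	{
		struct kvm_vcpu_arch_model arch;
		struct vcpu_te_model *te = te_create(&arch);

		/* Backend code downcasts exactly as the patch does. */
		struct vcpu_te_model *again = arch.impl;

		again->wait = 1;
		free(te);
		return 0;
	}
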
diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
index 7cbc3bc..90db96a 100644
--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -28,6 +28,8 @@
 #include <asm/r4kcache.h>
 #define CONFIG_MIPS_MT
 
+#include <asm/kvm_mips_te.h>
+
 #include "kvm_mips_opcode.h"
 #include "kvm_mips_int.h"
 #include "kvm_mips_comm.h"
@@ -234,16 +236,17 @@ enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause)
  */
 enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_DONE;
 
 	/* If COUNT is enabled */
 	if (!(kvm_read_c0_guest_cause(cop0) & CAUSEF_DC)) {
-		hrtimer_try_to_cancel(&vcpu->arch.comparecount_timer);
-		hrtimer_start(&vcpu->arch.comparecount_timer,
+		hrtimer_try_to_cancel(&vcpu_te->comparecount_timer);
+		hrtimer_start(&vcpu_te->comparecount_timer,
 			      ktime_set(0, MS_TO_NS(10)), HRTIMER_MODE_REL);
 	} else {
-		hrtimer_try_to_cancel(&vcpu->arch.comparecount_timer);
+		hrtimer_try_to_cancel(&vcpu_te->comparecount_timer);
 	}
 
 	return er;
@@ -251,7 +254,8 @@ enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_DONE;
 
 	if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
@@ -274,15 +278,16 @@ enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	enum emulation_result er = EMULATE_DONE;
 
 	kvm_debug("[%#lx] !!!WAIT!!! (%#lx)\n", vcpu->arch.epc,
-		  vcpu->arch.pending_exceptions);
+		  vcpu_te->pending_exceptions);
 
 	++vcpu->stat.wait_exits;
 	trace_kvm_exit(vcpu, WAIT_EXITS);
-	if (!vcpu->arch.pending_exceptions) {
-		vcpu->arch.wait = 1;
+	if (!vcpu_te->pending_exceptions) {
+		vcpu_te->wait = 1;
 		kvm_vcpu_block(vcpu);
 
 		/* When we are runnable, then definitely go off to user space to check if any
@@ -302,7 +307,8 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
  */
 enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_FAIL;
 	uint32_t pc = vcpu->arch.epc;
 
@@ -313,7 +319,8 @@ enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
 /* Write Guest TLB Entry @ Index */
 enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	int index = kvm_read_c0_guest_index(cop0);
 	enum emulation_result er = EMULATE_DONE;
 	struct kvm_mips_tlb *tlb = NULL;
@@ -330,7 +337,7 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 		index = (index & ~0x80000000) % KVM_MIPS_GUEST_TLB_SIZE;
 	}
 
-	tlb = &vcpu->arch.guest_tlb[index];
+	tlb = &vcpu_te->guest_tlb[index];
 #if 1
 	/* Probe the shadow host TLB for the entry being overwritten, if one matches, invalidate it */
 	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
@@ -353,7 +360,8 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 /* Write Guest TLB Entry @ Random Index */
 enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_DONE;
 	struct kvm_mips_tlb *tlb = NULL;
 	uint32_t pc = vcpu->arch.epc;
@@ -371,7 +379,7 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 		return EMULATE_FAIL;
 	}
 
-	tlb = &vcpu->arch.guest_tlb[index];
+	tlb = &vcpu_te->guest_tlb[index];
 
 #if 1
 	/* Probe the shadow host TLB for the entry being overwritten, if one matches, invalidate it */
@@ -394,7 +402,8 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emul_tlbp(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	long entryhi = kvm_read_c0_guest_entryhi(cop0);
 	enum emulation_result er = EMULATE_DONE;
 	uint32_t pc = vcpu->arch.epc;
@@ -414,7 +423,8 @@ enum emulation_result
 kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 		     struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_DONE;
 	int32_t rt, rd, copz, sel, co_bit, op;
 	uint32_t pc = vcpu->arch.epc;
@@ -657,6 +667,7 @@ enum emulation_result
 kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		       struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	enum emulation_result er = EMULATE_DO_MMIO;
 	int32_t op, base, rt, offset;
 	uint32_t bytes;
@@ -685,8 +696,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 			       run->mmio.len);
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -697,7 +707,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		vcpu->mmio_is_write = 1;
 		*(u8 *) data = vcpu->arch.gprs[rt];
 		kvm_debug("OP_SB: eaddr: %#lx, gpr: %#lx, data: %#x\n",
-			  vcpu->arch.host_cp0_badvaddr, vcpu->arch.gprs[rt],
+			  vcpu_te->host_cp0_badvaddr, vcpu->arch.gprs[rt],
 			  *(uint8_t *) data);
 
 		break;
@@ -709,8 +719,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 			       run->mmio.len);
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -723,7 +732,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		*(uint32_t *) data = vcpu->arch.gprs[rt];
 
 		kvm_debug("[%#lx] OP_SW: eaddr: %#lx, gpr: %#lx, data: %#x\n",
-			  vcpu->arch.epc, vcpu->arch.host_cp0_badvaddr,
+			  vcpu->arch.epc, vcpu_te->host_cp0_badvaddr,
 			  vcpu->arch.gprs[rt], *(uint32_t *) data);
 		break;
 
@@ -734,8 +743,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 			       run->mmio.len);
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -748,7 +756,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 		*(uint16_t *) data = vcpu->arch.gprs[rt];
 
 		kvm_debug("[%#lx] OP_SH: eaddr: %#lx, gpr: %#lx, data: %#x\n",
-			  vcpu->arch.epc, vcpu->arch.host_cp0_badvaddr,
+			  vcpu->arch.epc, vcpu_te->host_cp0_badvaddr,
 			  vcpu->arch.gprs[rt], *(uint32_t *) data);
 		break;
 
@@ -772,6 +780,7 @@ enum emulation_result
 kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 		      struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	enum emulation_result er = EMULATE_DO_MMIO;
 	int32_t op, base, rt, offset;
 	uint32_t bytes;
@@ -781,8 +790,8 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 	offset = inst & 0xffff;
 	op = (inst >> 26) & 0x3f;
 
-	vcpu->arch.pending_load_cause = cause;
-	vcpu->arch.io_gpr = rt;
+	vcpu_te->pending_load_cause = cause;
+	vcpu_te->io_gpr = rt;
 
 	switch (op) {
 	case lw_op:
@@ -794,8 +803,7 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 			break;
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -817,8 +825,7 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 			break;
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -846,8 +853,7 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 			break;
 		}
 		run->mmio.phys_addr =
-		    kvm_mips_callbacks->gva_to_gpa(vcpu->arch.
-						   host_cp0_badvaddr);
+		    kvm_mips_callbacks->gva_to_gpa(vcpu_te->host_cp0_badvaddr);
 		if (run->mmio.phys_addr == KVM_INVALID_ADDR) {
 			er = EMULATE_FAIL;
 			break;
@@ -877,22 +883,24 @@ int kvm_mips_sync_icache(unsigned long va, struct kvm_vcpu *vcpu)
 {
 	unsigned long offset = (va & ~PAGE_MASK);
 	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
 	unsigned long pa;
 	gfn_t gfn;
 	pfn_t pfn;
 
 	gfn = va >> PAGE_SHIFT;
 
-	if (gfn >= kvm->arch.guest_pmap_npages) {
+	if (gfn >= kvm_mips_te->guest_pmap_npages) {
 		printk("%s: Invalid gfn: %#llx\n", __func__, gfn);
 		kvm_mips_dump_host_tlbs();
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		return -1;
 	}
-	pfn = kvm->arch.guest_pmap[gfn];
+	pfn = kvm_mips_te->guest_pmap[gfn];
 	pa = (pfn << PAGE_SHIFT) | offset;
 
-	printk("%s: va: %#lx, unmapped: %#lx\n", __func__, va, CKSEG0ADDR(pa));
+	printk("%s: va: %#lx, unmapped: %#lx\n", __func__, va,
+	       (unsigned long)CKSEG0ADDR(pa));
 
 	mips32_SyncICache(CKSEG0ADDR(pa), 32);
 	return 0;
@@ -915,7 +923,8 @@ enum emulation_result
 kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 		       struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	enum emulation_result er = EMULATE_DONE;
 	int32_t offset, cache, op_inst, op, base;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
@@ -990,14 +999,14 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 						   (cop0) & ASID_MASK));
 
 		if (index < 0) {
-			vcpu->arch.host_cp0_entryhi = (va & VPN2_MASK);
-			vcpu->arch.host_cp0_badvaddr = va;
+			vcpu_te->host_cp0_entryhi = (va & VPN2_MASK);
+			vcpu_te->host_cp0_badvaddr = va;
 			er = kvm_mips_emulate_tlbmiss_ld(cause, NULL, run,
 							 vcpu);
 			preempt_enable();
 			goto dont_update_pc;
 		} else {
-			struct kvm_mips_tlb *tlb = &vcpu->arch.guest_tlb[index];
+			struct kvm_mips_tlb *tlb = &vcpu_te->guest_tlb[index];
 			/* Check if the entry is valid, if not then setup a TLB invalid exception to the guest */
 			if (!TLB_IS_VALID(*tlb, va)) {
 				er = kvm_mips_emulate_tlbinv_ld(cause, NULL,
@@ -1102,7 +1111,7 @@ kvm_mips_emulate_inst(unsigned long cause, uint32_t *opc,
 	default:
 		printk("Instruction emulation not supported (%p/%#x)\n", opc,
 		       inst);
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		er = EMULATE_FAIL;
 		break;
 	}
@@ -1114,7 +1123,8 @@ enum emulation_result
 kvm_mips_emulate_syscall(unsigned long cause, uint32_t *opc,
 			 struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 
@@ -1148,10 +1158,11 @@ enum emulation_result
 kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
 			    struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.  host_cp0_badvaddr & VPN2_MASK) |
+	unsigned long entryhi = (vcpu_te->host_cp0_badvaddr & VPN2_MASK) |
 				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -1181,7 +1192,7 @@ kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
 				  (T_TLB_LD_MISS << CAUSEB_EXCCODE));
 
 	/* setup badvaddr, context and entryhi registers for the guest */
-	kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+	kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 	/* XXXKYMA: is the context register used by linux??? */
 	kvm_write_c0_guest_entryhi(cop0, entryhi);
 	/* Blow away the shadow host TLBs */
@@ -1194,11 +1205,12 @@ enum emulation_result
 kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
 			   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 	unsigned long entryhi =
-		(vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+		(vcpu_te->host_cp0_badvaddr & VPN2_MASK) |
 		(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -1227,7 +1239,7 @@ kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
 				  (T_TLB_LD_MISS << CAUSEB_EXCCODE));
 
 	/* setup badvaddr, context and entryhi registers for the guest */
-	kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+	kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 	/* XXXKYMA: is the context register used by linux??? */
 	kvm_write_c0_guest_entryhi(cop0, entryhi);
 	/* Blow away the shadow host TLBs */
@@ -1240,10 +1252,11 @@ enum emulation_result
 kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
 			    struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	unsigned long entryhi = (vcpu_te->host_cp0_badvaddr & VPN2_MASK) |
 				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -1271,7 +1284,7 @@ kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
 				  (T_TLB_ST_MISS << CAUSEB_EXCCODE));
 
 	/* setup badvaddr, context and entryhi registers for the guest */
-	kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+	kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 	/* XXXKYMA: is the context register used by linux??? */
 	kvm_write_c0_guest_entryhi(cop0, entryhi);
 	/* Blow away the shadow host TLBs */
@@ -1284,10 +1297,11 @@ enum emulation_result
 kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
 			   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	unsigned long entryhi = (vcpu_te->host_cp0_badvaddr & VPN2_MASK) |
 		(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -1315,7 +1329,7 @@ kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
 				  (T_TLB_ST_MISS << CAUSEB_EXCCODE));
 
 	/* setup badvaddr, context and entryhi registers for the guest */
-	kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+	kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 	/* XXXKYMA: is the context register used by linux??? */
 	kvm_write_c0_guest_entryhi(cop0, entryhi);
 	/* Blow away the shadow host TLBs */
@@ -1337,8 +1351,9 @@ kvm_mips_handle_tlbmod(unsigned long cause, uint32_t *opc,
 	 */
 	index = kvm_mips_guest_tlb_lookup(vcpu, entryhi);
 	if (index < 0) {
+		struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 		/* XXXKYMA Invalidate and retry */
-		kvm_mips_host_tlb_inv(vcpu, vcpu->arch.host_cp0_badvaddr);
+		kvm_mips_host_tlb_inv(vcpu, vcpu_te->host_cp0_badvaddr);
 		kvm_err("%s: host got TLBMOD for %#lx but entry not present in Guest TLB\n",
 		     __func__, entryhi);
 		kvm_mips_dump_guest_tlbs(vcpu);
@@ -1355,8 +1370,9 @@ enum emulation_result
 kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
+	unsigned long entryhi = (vcpu_te->host_cp0_badvaddr & VPN2_MASK) |
 				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
@@ -1384,7 +1400,7 @@ kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
 	kvm_change_c0_guest_cause(cop0, (0xff), (T_TLB_MOD << CAUSEB_EXCCODE));
 
 	/* setup badvaddr, context and entryhi registers for the guest */
-	kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+	kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 	/* XXXKYMA: is the context register used by linux??? */
 	kvm_write_c0_guest_entryhi(cop0, entryhi);
 	/* Blow away the shadow host TLBs */
@@ -1397,7 +1413,8 @@ enum emulation_result
 kvm_mips_emulate_fpu_exc(unsigned long cause, uint32_t *opc,
 			 struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 
@@ -1426,7 +1443,8 @@ enum emulation_result
 kvm_mips_emulate_ri_exc(unsigned long cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 
@@ -1460,7 +1478,8 @@ enum emulation_result
 kvm_mips_emulate_bp_exc(unsigned long cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 
@@ -1511,7 +1530,8 @@ enum emulation_result
 kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 		   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 	unsigned long curr_pc;
@@ -1595,7 +1615,8 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 enum emulation_result
 kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu_te->io_gpr];
 	enum emulation_result er = EMULATE_DONE;
 	unsigned long curr_pc;
 
@@ -1610,7 +1631,7 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	 * an error and we want to rollback the PC
 	 */
 	curr_pc = vcpu->arch.epc;
-	er = update_pc(vcpu, vcpu->arch.pending_load_cause);
+	er = update_pc(vcpu, vcpu_te->pending_load_cause);
 	if (er == EMULATE_FAIL)
 		return er;
 
@@ -1634,10 +1655,10 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		break;
 	}
 
-	if (vcpu->arch.pending_load_cause & CAUSEF_BD)
+	if (vcpu_te->pending_load_cause & CAUSEF_BD)
 		kvm_debug
 		    ("[%#lx] Completing %d byte BD Load to gpr %d (0x%08lx) type %d\n",
-		     vcpu->arch.epc, run->mmio.len, vcpu->arch.io_gpr, *gpr,
+		     vcpu->arch.epc, run->mmio.len, vcpu_te->io_gpr, *gpr,
 		     vcpu->mmio_needed);
 
 done:
@@ -1649,7 +1670,8 @@ kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
 		     struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
 
@@ -1668,7 +1690,7 @@ kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
 
 		/* Set PC to the exception entry point */
 		arch->epc = KVM_GUEST_KSEG0 + 0x180;
-		kvm_write_c0_guest_badvaddr(cop0, vcpu->arch.host_cp0_badvaddr);
+		kvm_write_c0_guest_badvaddr(cop0, vcpu_te->host_cp0_badvaddr);
 
 		kvm_debug("Delivering EXC %d @ pc %#lx, badVaddr: %#lx\n",
 			  exccode, kvm_read_c0_guest_epc(cop0),
@@ -1687,7 +1709,8 @@ kvm_mips_check_privilege(unsigned long cause, uint32_t *opc,
 {
 	enum emulation_result er = EMULATE_DONE;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
 
 	int usermode = !KVM_GUEST_KERNEL_MODE(vcpu);
 
@@ -1771,11 +1794,12 @@ kvm_mips_handle_tlbmiss(unsigned long cause, uint32_t *opc,
 {
 	enum emulation_result er = EMULATE_DONE;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	unsigned long va = vcpu->arch.host_cp0_badvaddr;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	unsigned long va = vcpu_te->host_cp0_badvaddr;
 	int index;
 
 	kvm_debug("kvm_mips_handle_tlbmiss: badvaddr: %#lx, entryhi: %#lx\n",
-		  vcpu->arch.host_cp0_badvaddr, vcpu->arch.host_cp0_entryhi);
+		  vcpu_te->host_cp0_badvaddr, vcpu_te->host_cp0_entryhi);
 
 	/* KVM would not have got the exception if this entry was valid in the shadow host TLB
 	 * Check the Guest TLB, if the entry is not there then send the guest an
@@ -1784,8 +1808,8 @@ kvm_mips_handle_tlbmiss(unsigned long cause, uint32_t *opc,
 	 */
 	index = kvm_mips_guest_tlb_lookup(vcpu,
 					  (va & VPN2_MASK) |
-					  (kvm_read_c0_guest_entryhi
-					   (vcpu->arch.cop0) & ASID_MASK));
+					  (kvm_read_c0_guest_entryhi(vcpu_te->cop0) &
+					   ASID_MASK));
 	if (index < 0) {
 		if (exccode == T_TLB_LD_MISS) {
 			er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
@@ -1796,7 +1820,7 @@ kvm_mips_handle_tlbmiss(unsigned long cause, uint32_t *opc,
 			er = EMULATE_FAIL;
 		}
 	} else {
-		struct kvm_mips_tlb *tlb = &vcpu->arch.guest_tlb[index];
+		struct kvm_mips_tlb *tlb = &vcpu_te->guest_tlb[index];
 
 		/* Check if the entry is valid, if not then setup a TLB invalid exception to the guest */
 		if (!TLB_IS_VALID(*tlb, va)) {
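
The load/store emulation above extracts the base register, target
register and signed offset from the faulting instruction before
building the MMIO request. For reference, MIPS I-type instructions
encode those fields as shown below; this standalone decoder is only an
illustration of the bit layout:

	#include <stdint.h>
	#include <stdio.h>

	/* Decode the I-type fields of a MIPS load/store. */
	static void decode_itype(uint32_t inst)
	{
		int op     = (inst >> 26) & 0x3f;	/* major opcode     */
		int base   = (inst >> 21) & 0x1f;	/* base address GPR */
		int rt     = (inst >> 16) & 0x1f;	/* source/dest GPR  */
		int offset = (int16_t)(inst & 0xffff);	/* signed 16 bits   */

		printf("op=%#x base=$%d rt=$%d offset=%d\n",
		       op, base, rt, offset);
	}

	int main(void)
	{
		decode_itype(0xafa20010);	/* sw $v0, 16($sp) */
		return 0;
	}
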
diff --git a/arch/mips/kvm/kvm_mips_int.c b/arch/mips/kvm/kvm_mips_int.c
index c1ba08b..c7ff6be 100644
--- a/arch/mips/kvm/kvm_mips_int.c
+++ b/arch/mips/kvm/kvm_mips_int.c
@@ -20,25 +20,30 @@
 
 #include <linux/kvm_host.h>
 
+#include <asm/kvm_mips_te.h>
+
 #include "kvm_mips_int.h"
 
-void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+static void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
 {
-	set_bit(priority, &vcpu->arch.pending_exceptions);
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	set_bit(priority, &vcpu_te->pending_exceptions);
 }
 
-void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+static void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
 {
-	clear_bit(priority, &vcpu->arch.pending_exceptions);
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	clear_bit(priority, &vcpu_te->pending_exceptions);
 }
 
 void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	/* Cause bits to reflect the pending timer interrupt,
 	 * the EXC code will be set when we are actually
 	 * delivering the interrupt:
 	 */
-	kvm_set_c0_guest_cause(vcpu->arch.cop0, (C_IRQ5 | C_TI));
+	kvm_set_c0_guest_cause(vcpu_te->cop0, (C_IRQ5 | C_TI));
 
 	/* Queue up an INT exception for the core */
 	kvm_mips_queue_irq(vcpu, MIPS_EXC_INT_TIMER);
@@ -47,13 +52,15 @@ void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu)
 
 void kvm_mips_dequeue_timer_int_cb(struct kvm_vcpu *vcpu)
 {
-	kvm_clear_c0_guest_cause(vcpu->arch.cop0, (C_IRQ5 | C_TI));
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	kvm_clear_c0_guest_cause(vcpu_te->cop0, (C_IRQ5 | C_TI));
 	kvm_mips_dequeue_irq(vcpu, MIPS_EXC_INT_TIMER);
 }
 
 void
 kvm_mips_queue_io_int_cb(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	int intr = (int)irq->irq;
 
 	/* Cause bits to reflect the pending IO interrupt,
@@ -62,18 +69,18 @@ kvm_mips_queue_io_int_cb(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq)
 	 */
 	switch (intr) {
 	case 2:
-		kvm_set_c0_guest_cause(vcpu->arch.cop0, (C_IRQ0));
+		kvm_set_c0_guest_cause(vcpu_te->cop0, (C_IRQ0));
 		/* Queue up an INT exception for the core */
 		kvm_mips_queue_irq(vcpu, MIPS_EXC_INT_IO);
 		break;
 
 	case 3:
-		kvm_set_c0_guest_cause(vcpu->arch.cop0, (C_IRQ1));
+		kvm_set_c0_guest_cause(vcpu_te->cop0, (C_IRQ1));
 		kvm_mips_queue_irq(vcpu, MIPS_EXC_INT_IPI_1);
 		break;
 
 	case 4:
-		kvm_set_c0_guest_cause(vcpu->arch.cop0, (C_IRQ2));
+		kvm_set_c0_guest_cause(vcpu_te->cop0, (C_IRQ2));
 		kvm_mips_queue_irq(vcpu, MIPS_EXC_INT_IPI_2);
 		break;
 
@@ -87,20 +94,21 @@ void
 kvm_mips_dequeue_io_int_cb(struct kvm_vcpu *vcpu,
 			   struct kvm_mips_interrupt *irq)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	int intr = (int)irq->irq;
 	switch (intr) {
 	case -2:
-		kvm_clear_c0_guest_cause(vcpu->arch.cop0, (C_IRQ0));
+		kvm_clear_c0_guest_cause(vcpu_te->cop0, (C_IRQ0));
 		kvm_mips_dequeue_irq(vcpu, MIPS_EXC_INT_IO);
 		break;
 
 	case -3:
-		kvm_clear_c0_guest_cause(vcpu->arch.cop0, (C_IRQ1));
+		kvm_clear_c0_guest_cause(vcpu_te->cop0, (C_IRQ1));
 		kvm_mips_dequeue_irq(vcpu, MIPS_EXC_INT_IPI_1);
 		break;
 
 	case -4:
-		kvm_clear_c0_guest_cause(vcpu->arch.cop0, (C_IRQ2));
+		kvm_clear_c0_guest_cause(vcpu_te->cop0, (C_IRQ2));
 		kvm_mips_dequeue_irq(vcpu, MIPS_EXC_INT_IPI_2);
 		break;
 
@@ -119,7 +127,8 @@ kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 	uint32_t exccode;
 
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 
 	switch (priority) {
 	case MIPS_EXC_INT_TIMER:
@@ -189,7 +198,7 @@ kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 		else
 			arch->epc = KVM_GUEST_KSEG0 + 0x180;
 
-		clear_bit(priority, &vcpu->arch.pending_exceptions);
+		clear_bit(priority, &vcpu_te->pending_exceptions);
 	}
 
 	return allowed;
@@ -204,8 +213,9 @@ kvm_mips_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
 
 void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause)
 {
-	unsigned long *pending = &vcpu->arch.pending_exceptions;
-	unsigned long *pending_clr = &vcpu->arch.pending_exceptions_clr;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	unsigned long *pending = &vcpu_te->pending_exceptions;
+	unsigned long *pending_clr = &vcpu_te->pending_exceptions_clr;
 	unsigned int priority;
 
 	if (!(*pending) && !(*pending_clr))
@@ -239,5 +249,6 @@ void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause)
 
 int kvm_mips_pending_timer(struct kvm_vcpu *vcpu)
 {
-	return test_bit(MIPS_EXC_INT_TIMER, &vcpu->arch.pending_exceptions);
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	return test_bit(MIPS_EXC_INT_TIMER, &vcpu_te->pending_exceptions);
 }
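
kvm_mips_int.c keeps pending interrupts as a per-priority bitmask and
manipulates it with set_bit()/clear_bit()/test_bit(). A simplified,
non-atomic model of that bookkeeping (the kernel versions are atomic
bitops on vcpu_te->pending_exceptions):

	#include <stdio.h>

	/* Stand-in priorities for the kernel's MIPS_EXC_INT_* values. */
	enum { EXC_INT_TIMER = 0, EXC_INT_IO = 1 };

	static unsigned long pending_exceptions;

	static void queue_irq(int priority)
	{
		pending_exceptions |= 1UL << priority;	  /* set_bit()   */
	}

	static void dequeue_irq(int priority)
	{
		pending_exceptions &= ~(1UL << priority); /* clear_bit() */
	}

	static int pending_timer(void)
	{
		return !!(pending_exceptions & (1UL << EXC_INT_TIMER));
	}

	int main(void)
	{
		queue_irq(EXC_INT_TIMER);
		printf("timer pending: %d\n", pending_timer());
		dequeue_irq(EXC_INT_TIMER);
		printf("timer pending: %d\n", pending_timer());
		return 0;
	}
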
diff --git a/arch/mips/kvm/kvm_mips_int.h b/arch/mips/kvm/kvm_mips_int.h
index 20da7d2..08cf7d1 100644
--- a/arch/mips/kvm/kvm_mips_int.h
+++ b/arch/mips/kvm/kvm_mips_int.h
@@ -32,8 +32,6 @@
 #define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (0)
 #define KVM_MIPS_IRQ_CLEAR_ALL_AT_ONCE   (0)
 
-void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
-void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
 int kvm_mips_pending_timer(struct kvm_vcpu *vcpu);
 
 void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu);
diff --git a/arch/mips/kvm/kvm_mips_stats.c b/arch/mips/kvm/kvm_mips_stats.c
index 075904b..5d0c9ef 100644
--- a/arch/mips/kvm/kvm_mips_stats.c
+++ b/arch/mips/kvm/kvm_mips_stats.c
@@ -10,6 +10,7 @@
 */
 
 #include <linux/kvm_host.h>
+#include <asm/kvm_mips_te.h>
 
 char *kvm_mips_exit_types_str[MAX_KVM_MIPS_EXIT_TYPES] = {
 	"WAIT",
@@ -66,14 +67,15 @@ char *kvm_cop0_str[N_MIPS_COPROC_REGS] = {
 int kvm_mips_dump_stats(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_KVM_MIPS_DEBUG_COP0_COUNTERS
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	int i, j;
 
 	printk("\nKVM VCPU[%d] COP0 Access Profile:\n", vcpu->vcpu_id);
 	for (i = 0; i < N_MIPS_COPROC_REGS; i++) {
 		for (j = 0; j < N_MIPS_COPROC_SEL; j++) {
-			if (vcpu->arch.cop0->stat[i][j])
+			if (vcpu_te->cop0->stat[i][j])
 				printk("%s[%d]: %lu\n", kvm_cop0_str[i], j,
-				       vcpu->arch.cop0->stat[i][j]);
+				       vcpu_te->cop0->stat[i][j]);
 		}
 	}
 #endif
diff --git a/arch/mips/kvm/kvm_tlb.c b/arch/mips/kvm/kvm_tlb.c
index 5e189be..ffe6088 100644
--- a/arch/mips/kvm/kvm_tlb.c
+++ b/arch/mips/kvm/kvm_tlb.c
@@ -30,6 +30,8 @@
 #include <asm/r4kcache.h>
 #define CONFIG_MIPS_MT
 
+#include <asm/kvm_mips_te.h>
+
 #define KVM_GUEST_PC_TLB    0
 #define KVM_GUEST_SP_TLB    1
 
@@ -53,18 +55,22 @@ EXPORT_SYMBOL(kvm_mips_is_error_pfn);
 
 uint32_t kvm_mips_get_kernel_asid(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.guest_kernel_asid[smp_processor_id()] & ASID_MASK;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+
+	return vcpu_te->guest_kernel_asid[smp_processor_id()] & ASID_MASK;
 }
 
 
 uint32_t kvm_mips_get_user_asid(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.guest_user_asid[smp_processor_id()] & ASID_MASK;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	return vcpu_te->guest_user_asid[smp_processor_id()] & ASID_MASK;
 }
 
 inline uint32_t kvm_mips_get_commpage_asid (struct kvm_vcpu *vcpu)
 {
-	return vcpu->kvm->arch.commpage_tlb;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	return vcpu_te->kvm_mips_te->commpage_tlb;
 }
 
 
@@ -122,7 +128,9 @@ void kvm_mips_dump_host_tlbs(void)
 
 void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
+
 	struct kvm_mips_tlb tlb;
 	int i;
 
@@ -130,7 +138,7 @@ void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu)
 	printk("Guest EntryHi: %#lx\n", kvm_read_c0_guest_entryhi(cop0));
 
 	for (i = 0; i < KVM_MIPS_GUEST_TLB_SIZE; i++) {
-		tlb = vcpu->arch.guest_tlb[i];
+		tlb = vcpu_te->guest_tlb[i];
 		printk("TLB%c%3d Hi 0x%08lx ",
 		       (tlb.tlb_lo0 | tlb.tlb_lo1) & MIPS3_PG_V ? ' ' : '*',
 		       i, tlb.tlb_hi);
@@ -150,11 +158,12 @@ void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu)
 void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu)
 {
 	int i;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	volatile struct kvm_mips_tlb tlb;
 
 	printk("Shadow TLBs:\n");
 	for (i = 0; i < KVM_MIPS_GUEST_TLB_SIZE; i++) {
-		tlb = vcpu->arch.shadow_tlb[smp_processor_id()][i];
+		tlb = vcpu_te->shadow_tlb[smp_processor_id()][i];
 		printk("TLB%c%3d Hi 0x%08lx ",
 		       (tlb.tlb_lo0 | tlb.tlb_lo1) & MIPS3_PG_V ? ' ' : '*',
 		       i, tlb.tlb_hi);
@@ -173,10 +182,11 @@ void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu)
 
 static int kvm_mips_map_page(struct kvm *kvm, gfn_t gfn)
 {
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
 	int srcu_idx, err = 0;
 	pfn_t pfn;
 
-	if (kvm->arch.guest_pmap[gfn] != KVM_INVALID_PAGE)
+	if (kvm_mips_te->guest_pmap[gfn] != KVM_INVALID_PAGE)
 		return 0;
 
         srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -188,7 +198,7 @@ static int kvm_mips_map_page(struct kvm *kvm, gfn_t gfn)
 		goto out;
 	}
 
-	kvm->arch.guest_pmap[gfn] = pfn;
+	kvm_mips_te->guest_pmap[gfn] = pfn;
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return err;
@@ -200,7 +210,7 @@ unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
 {
 	gfn_t gfn;
 	uint32_t offset = gva & ~PAGE_MASK;
-	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 	if (KVM_GUEST_KSEGX(gva) != KVM_GUEST_KSEG0) {
 		kvm_err("%s/%p: Invalid gva: %#lx\n", __func__,
@@ -210,7 +220,7 @@ unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
 
 	gfn = (KVM_GUEST_CPHYSADDR(gva) >> PAGE_SHIFT);
 
-	if (gfn >= kvm->arch.guest_pmap_npages) {
+	if (gfn >= vcpu_te->kvm_mips_te->guest_pmap_npages) {
 		kvm_err("%s: Invalid gfn: %#llx, GVA: %#lx\n", __func__, gfn,
 			gva);
 		return KVM_INVALID_PAGE;
@@ -219,7 +229,7 @@ unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
 	if (kvm_mips_map_page(vcpu->kvm, gfn) < 0)
 		return KVM_INVALID_ADDR;
 
-	return (kvm->arch.guest_pmap[gfn] << PAGE_SHIFT) + offset;
+	return (vcpu_te->kvm_mips_te->guest_pmap[gfn] << PAGE_SHIFT) + offset;
 }
 
 /* XXXKYMA: Must be called with interrupts disabled */
@@ -301,7 +311,8 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
 	unsigned long vaddr = 0;
 	unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
 	int even;
-	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct kvm_mips_te *kvm_mips_te = vcpu_te->kvm_mips_te;
 	const int flush_dcache_mask = 0;
 
 
@@ -312,7 +323,7 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
 	}
 
 	gfn = (KVM_GUEST_CPHYSADDR(badvaddr) >> PAGE_SHIFT);
-	if (gfn >= kvm->arch.guest_pmap_npages) {
+	if (gfn >= kvm_mips_te->guest_pmap_npages) {
 		kvm_err("%s: Invalid gfn: %#llx, BadVaddr: %#lx\n", __func__,
 			gfn, badvaddr);
 		kvm_mips_dump_host_tlbs();
@@ -328,11 +339,11 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
 		return -1;
 
 	if (even) {
-		pfn0 = kvm->arch.guest_pmap[gfn];
-		pfn1 = kvm->arch.guest_pmap[gfn ^ 0x1];
+		pfn0 = kvm_mips_te->guest_pmap[gfn];
+		pfn1 = kvm_mips_te->guest_pmap[gfn ^ 0x1];
 	} else {
-		pfn0 = kvm->arch.guest_pmap[gfn ^ 0x1];
-		pfn1 = kvm->arch.guest_pmap[gfn];
+		pfn0 = kvm_mips_te->guest_pmap[gfn ^ 0x1];
+		pfn1 = kvm_mips_te->guest_pmap[gfn];
 	}
 
 	entryhi = (vaddr | kvm_mips_get_kernel_asid(vcpu));
@@ -351,9 +362,10 @@ int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
 	pfn_t pfn0, pfn1;
 	unsigned long flags, old_entryhi = 0, vaddr = 0;
 	unsigned long entrylo0 = 0, entrylo1 = 0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 
-	pfn0 = CPHYSADDR((unsigned long)vcpu->arch.kseg0_commpage) >> PAGE_SHIFT;
+	pfn0 = CPHYSADDR((unsigned long)vcpu_te->kseg0_commpage) >> PAGE_SHIFT;
 	pfn1 = 0;
 	entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) | (0x3 << 3) | (1 << 2) |
 			(0x1 << 1);
@@ -395,6 +407,8 @@ kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 	struct kvm_mips_tlb *tlb, unsigned long *hpa0, unsigned long *hpa1)
 {
 	unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct kvm_mips_te *kvm_mips_te = vcpu_te->kvm_mips_te;
 	struct kvm *kvm = vcpu->kvm;
 	pfn_t pfn0, pfn1;
 
@@ -409,8 +423,8 @@ kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 		if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT) < 0)
 			return -1;
 
-		pfn0 = kvm->arch.guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo0) >> PAGE_SHIFT];
-		pfn1 = kvm->arch.guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT];
+		pfn0 = kvm_mips_te->guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo0) >> PAGE_SHIFT];
+		pfn1 = kvm_mips_te->guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT];
 	}
 
 	if (hpa0)
@@ -440,7 +454,8 @@ int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long entryhi)
 {
 	int i;
 	int index = -1;
-	struct kvm_mips_tlb *tlb = vcpu->arch.guest_tlb;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct kvm_mips_tlb *tlb = vcpu_te->guest_tlb;
 
 
 	for (i = 0; i < KVM_MIPS_GUEST_TLB_SIZE; i++) {
@@ -664,6 +679,7 @@ void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu)
 	unsigned long old_pagemask;
 	int entry = 0;
 	int cpu = smp_processor_id();
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 	local_irq_save(flags);
 
@@ -676,10 +692,10 @@ void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu)
 		tlb_read();
 		tlbw_use_hazard();
 
-		vcpu->arch.shadow_tlb[cpu][entry].tlb_hi = read_c0_entryhi();
-		vcpu->arch.shadow_tlb[cpu][entry].tlb_lo0 = read_c0_entrylo0();
-		vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1 = read_c0_entrylo1();
-		vcpu->arch.shadow_tlb[cpu][entry].tlb_mask = read_c0_pagemask();
+		vcpu_te->shadow_tlb[cpu][entry].tlb_hi = read_c0_entryhi();
+		vcpu_te->shadow_tlb[cpu][entry].tlb_lo0 = read_c0_entrylo0();
+		vcpu_te->shadow_tlb[cpu][entry].tlb_lo1 = read_c0_entrylo1();
+		vcpu_te->shadow_tlb[cpu][entry].tlb_mask = read_c0_pagemask();
 	}
 
 	write_c0_entryhi(old_entryhi);
@@ -696,16 +712,17 @@ void kvm_shadow_tlb_load(struct kvm_vcpu *vcpu)
 	unsigned long old_ctx;
 	int entry;
 	int cpu = smp_processor_id();
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 	local_irq_save(flags);
 
 	old_ctx = read_c0_entryhi();
 
 	for (entry = 0; entry < current_cpu_data.tlbsize; entry++) {
-		write_c0_entryhi(vcpu->arch.shadow_tlb[cpu][entry].tlb_hi);
+		write_c0_entryhi(vcpu_te->shadow_tlb[cpu][entry].tlb_hi);
 		mtc0_tlbw_hazard();
-		write_c0_entrylo0(vcpu->arch.shadow_tlb[cpu][entry].tlb_lo0);
-		write_c0_entrylo1(vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1);
+		write_c0_entrylo0(vcpu_te->shadow_tlb[cpu][entry].tlb_lo0);
+		write_c0_entrylo1(vcpu_te->shadow_tlb[cpu][entry].tlb_lo1);
 
 		write_c0_index(entry);
 		mtc0_tlbw_hazard();
@@ -752,32 +769,34 @@ void kvm_local_flush_tlb_all(void)
 void kvm_mips_init_shadow_tlb(struct kvm_vcpu *vcpu)
 {
 	int cpu, entry;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 	for_each_possible_cpu(cpu) {
 		for (entry = 0; entry < current_cpu_data.tlbsize; entry++) {
-			vcpu->arch.shadow_tlb[cpu][entry].tlb_hi =
+			vcpu_te->shadow_tlb[cpu][entry].tlb_hi =
 			    UNIQUE_ENTRYHI(entry);
-			vcpu->arch.shadow_tlb[cpu][entry].tlb_lo0 = 0x0;
-			vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1 = 0x0;
-			vcpu->arch.shadow_tlb[cpu][entry].tlb_mask =
+			vcpu_te->shadow_tlb[cpu][entry].tlb_lo0 = 0x0;
+			vcpu_te->shadow_tlb[cpu][entry].tlb_lo1 = 0x0;
+			vcpu_te->shadow_tlb[cpu][entry].tlb_mask =
 			    read_c0_pagemask();
 #ifdef DEBUG
 			kvm_debug
 			    ("shadow_tlb[%d][%d]: tlb_hi: %#lx, lo0: %#lx, lo1: %#lx\n",
 			     cpu, entry,
-			     vcpu->arch.shadow_tlb[cpu][entry].tlb_hi,
-			     vcpu->arch.shadow_tlb[cpu][entry].tlb_lo0,
-			     vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1);
+			     vcpu_te->shadow_tlb[cpu][entry].tlb_hi,
+			     vcpu_te->shadow_tlb[cpu][entry].tlb_lo0,
+			     vcpu_te->shadow_tlb[cpu][entry].tlb_lo1);
 #endif
 		}
 	}
 }
 
 /* Restore ASID once we are scheduled back after preemption */
-void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+void kvm_mips_te_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	unsigned long flags;
 	int newasid = 0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 
 #ifdef DEBUG
 	kvm_debug("%s: vcpu %p, cpu: %d\n", __func__, vcpu, cpu);
@@ -787,27 +806,24 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	local_irq_save(flags);
 
-	if (((vcpu->arch.
-	      guest_kernel_asid[cpu] ^ asid_cache(cpu)) & ASID_VERSION_MASK)) {
-		kvm_get_new_mmu_context(&vcpu->arch.guest_kernel_mm, cpu, vcpu);
-		vcpu->arch.guest_kernel_asid[cpu] =
-		    vcpu->arch.guest_kernel_mm.context.asid[cpu];
-		kvm_get_new_mmu_context(&vcpu->arch.guest_user_mm, cpu, vcpu);
-		vcpu->arch.guest_user_asid[cpu] =
-		    vcpu->arch.guest_user_mm.context.asid[cpu];
+	if (((vcpu_te->guest_kernel_asid[cpu] ^ asid_cache(cpu)) & ASID_VERSION_MASK)) {
+		kvm_get_new_mmu_context(&vcpu_te->guest_kernel_mm, cpu, vcpu);
+		vcpu_te->guest_kernel_asid[cpu] = vcpu_te->guest_kernel_mm.context.asid[cpu];
+		kvm_get_new_mmu_context(&vcpu_te->guest_user_mm, cpu, vcpu);
+		vcpu_te->guest_user_asid[cpu] = vcpu_te->guest_user_mm.context.asid[cpu];
 		newasid++;
 
 		kvm_info("[%d]: cpu_context: %#lx\n", cpu,
 			 cpu_context(cpu, current->mm));
 		kvm_info("[%d]: Allocated new ASID for Guest Kernel: %#x\n",
-			 cpu, vcpu->arch.guest_kernel_asid[cpu]);
+			 cpu, vcpu_te->guest_kernel_asid[cpu]);
 		kvm_info("[%d]: Allocated new ASID for Guest User: %#x\n", cpu,
-			 vcpu->arch.guest_user_asid[cpu]);
+			 vcpu_te->guest_user_asid[cpu]);
 	}
 
-	if (vcpu->arch.last_sched_cpu != cpu) {
+	if (vcpu_te->last_sched_cpu != cpu) {
 		kvm_info("[%d->%d]KVM VCPU[%d] switch\n",
-			 vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+			 vcpu_te->last_sched_cpu, cpu, vcpu->vcpu_id);
 	}
 
 	/* Only reload shadow host TLB if new ASIDs haven't been allocated */
@@ -821,8 +837,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (!newasid) {
 		/* If we preempted while the guest was executing, then reload the pre-empted ASID */
 		if (current->flags & PF_VCPU) {
-			write_c0_entryhi(vcpu->arch.
-					 preempt_entryhi & ASID_MASK);
+			write_c0_entryhi(vcpu_te->preempt_entryhi & ASID_MASK);
 			ehb();
 		}
 	} else {
@@ -834,13 +849,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		 */
 		if (current->flags & PF_VCPU) {
 			if (KVM_GUEST_KERNEL_MODE(vcpu))
-				write_c0_entryhi(vcpu->arch.
-						 guest_kernel_asid[cpu] &
-						 ASID_MASK);
+				write_c0_entryhi(vcpu_te->guest_kernel_asid[cpu] & ASID_MASK);
 			else
-				write_c0_entryhi(vcpu->arch.
-						 guest_user_asid[cpu] &
-						 ASID_MASK);
+				write_c0_entryhi(vcpu_te->guest_user_asid[cpu] & ASID_MASK);
 			ehb();
 		}
 	}
@@ -850,8 +861,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 }
 
 /* ASID can change if another task is scheduled during preemption */
-void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+void kvm_mips_te_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	unsigned long flags;
 	uint32_t cpu;
 
@@ -860,8 +872,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	cpu = smp_processor_id();
 
 
-	vcpu->arch.preempt_entryhi = read_c0_entryhi();
-	vcpu->arch.last_sched_cpu = cpu;
+	vcpu_te->preempt_entryhi = read_c0_entryhi();
+	vcpu_te->last_sched_cpu = cpu;
 
 #if 0
 	if ((atomic_read(&kvm_mips_instance) > 1)) {
@@ -883,7 +895,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	unsigned long paddr, flags;
 	uint32_t inst;
 	int index;
@@ -910,8 +923,7 @@ uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 				return KVM_INVALID_INST;
 			}
 			kvm_mips_handle_mapped_seg_tlb_fault(vcpu,
-							     &vcpu->arch.
-							     guest_tlb[index],
+							     &vcpu_te->guest_tlb[index],
 							     NULL, NULL);
 			inst = *(opc);
 		}
@@ -945,5 +957,3 @@ EXPORT_SYMBOL(kvm_shadow_tlb_load);
 EXPORT_SYMBOL(kvm_mips_dump_shadow_tlbs);
 EXPORT_SYMBOL(kvm_mips_dump_guest_tlbs);
 EXPORT_SYMBOL(kvm_get_inst);
-EXPORT_SYMBOL(kvm_arch_vcpu_load);
-EXPORT_SYMBOL(kvm_arch_vcpu_put);
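
kvm_mips_guest_tlb_lookup(), used throughout kvm_tlb.c, is a linear
probe of the software guest TLB keyed on VPN2 and ASID. A simplified
sketch of the matching logic follows; the mask values are illustrative,
and the real code also honours the global bit:

	#include <stdio.h>

	#define GUEST_TLB_SIZE	64
	#define VPN2_MASK_M	0xffffe000UL	/* illustrative, 4K pages */
	#define ASID_MASK_M	0xffUL

	struct guest_tlb_model {
		unsigned long tlb_hi;	/* VPN2 | ASID */
		int valid;
	};

	/* Return the matching index or -1, like the kernel helper. */
	static int guest_tlb_lookup(const struct guest_tlb_model *tlb,
				    unsigned long entryhi)
	{
		int i;

		for (i = 0; i < GUEST_TLB_SIZE; i++) {
			if (tlb[i].valid &&
			    (tlb[i].tlb_hi & VPN2_MASK_M) ==
			    (entryhi & VPN2_MASK_M) &&
			    (tlb[i].tlb_hi & ASID_MASK_M) ==
			    (entryhi & ASID_MASK_M))
				return i;
		}
		return -1;
	}

	int main(void)
	{
		struct guest_tlb_model tlb[GUEST_TLB_SIZE] = {
			[3] = { .tlb_hi = 0x00400000UL | 0x5, .valid = 1 },
		};

		printf("index = %d\n",
		       guest_tlb_lookup(tlb, 0x00400000UL | 0x5));
		return 0;
	}
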
diff --git a/arch/mips/kvm/kvm_trap_emul.c b/arch/mips/kvm/kvm_trap_emul.c
index 8d0ab12..80c4ffa 100644
--- a/arch/mips/kvm/kvm_trap_emul.c
+++ b/arch/mips/kvm/kvm_trap_emul.c
@@ -16,8 +16,786 @@
 
 #include <linux/kvm_host.h>
 
+#include <asm/kvm_mips_te.h>
+
 #include "kvm_mips_opcode.h"
 #include "kvm_mips_int.h"
+#include "kvm_mips_comm.h"
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+#ifndef VECTORSPACING
+#define VECTORSPACING 0x100	/* for EI/VI mode */
+#endif
+
+static int kvm_mips_te_reset_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_te *te = vcpu->arch.impl;
+	int i;
+
+	for_each_possible_cpu(i) {
+		te->guest_kernel_asid[i] = 0;
+		te->guest_user_asid[i] = 0;
+	}
+	return 0;
+}
+
+static int kvm_mips_te_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_te *te = vcpu->arch.impl;
+	return !!(te->pending_exceptions);
+}
+
+static void kvm_mips_te_init_tlbs(struct kvm *kvm)
+{
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
+	unsigned long wired;
+
+	/*
+	 * Add a wired entry to the TLB; it is used to map the
+	 * commpage to the Guest kernel.
+	 */
+	wired = read_c0_wired();
+	write_c0_wired(wired + 1);
+	mtc0_tlbw_hazard();
+	kvm_mips_te->commpage_tlb = wired;
+
+	kvm_debug("[%d] commpage TLB: %d\n", smp_processor_id(),
+		  kvm_mips_te->commpage_tlb);
+}
+
+static void kvm_mips_te_init_vm_percpu(void *arg)
+{
+	struct kvm *kvm = (struct kvm *)arg;
+
+	kvm_mips_te_init_tlbs(kvm);
+	kvm_mips_callbacks->vm_init(kvm);
+}
+
+static void kvm_mips_te_free_vcpus(struct kvm *kvm)
+{
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
+	unsigned int i;
+	struct kvm_vcpu *vcpu;
+
+	/* Put the pages we reserved for the guest pmap */
+	for (i = 0; i < kvm_mips_te->guest_pmap_npages; i++) {
+		if (kvm_mips_te->guest_pmap[i] != KVM_INVALID_PAGE)
+			kvm_mips_release_pfn_clean(kvm_mips_te->guest_pmap[i]);
+	}
+
+	kfree(kvm_mips_te->guest_pmap);
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		kvm_arch_vcpu_free(vcpu);
+	}
+
+	mutex_lock(&kvm->lock);
+
+	for (i = 0; i < atomic_read(&kvm->online_vcpus); i++)
+		kvm->vcpus[i] = NULL;
+
+	atomic_set(&kvm->online_vcpus, 0);
+
+	mutex_unlock(&kvm->lock);
+}
+
+static void kvm_mips_te_uninit_tlbs(void *arg)
+{
+	/* Restore wired count */
+	write_c0_wired(0);
+	mtc0_tlbw_hazard();
+	/* Clear out all the TLBs */
+	kvm_local_flush_tlb_all();
+}
+
+static void kvm_mips_te_destroy_vm(struct kvm *kvm)
+{
+	kvm_mips_te_free_vcpus(kvm);
+
+	/* If this is the last instance, restore wired count */
+	if (atomic_dec_return(&kvm_mips_instance) == 0) {
+		kvm_info("%s: last KVM instance, restoring TLB parameters\n",
+			 __func__);
+		on_each_cpu(kvm_mips_te_uninit_tlbs, NULL, 1);
+	}
+	kfree(kvm->arch.impl);
+}
+
+static void kvm_mips_te_commit_memory_region(struct kvm *kvm,
+					     struct kvm_userspace_memory_region *mem,
+					     const struct kvm_memory_slot *old,
+					     enum kvm_mr_change change)
+{
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
+	unsigned long npages = 0;
+	int i;
+
+	kvm_debug("%s: kvm: %p slot: %d, GPA: %llx, size: %llx, QVA: %llx\n",
+		  __func__, kvm, mem->slot, mem->guest_phys_addr,
+		  mem->memory_size, mem->userspace_addr);
+
+	/* Setup Guest PMAP table */
+	if (!kvm_mips_te->guest_pmap) {
+		if (mem->slot == 0)
+			npages = mem->memory_size >> PAGE_SHIFT;
+
+		if (npages) {
+			kvm_mips_te->guest_pmap_npages = npages;
+			kvm_mips_te->guest_pmap =
+			    kzalloc(npages * sizeof(unsigned long), GFP_KERNEL);
+
+			if (!kvm_mips_te->guest_pmap) {
+				kvm_err("Failed to allocate guest PMAP\n");
+				goto out;
+			}
+
+			kvm_info("Allocated space for Guest PMAP Table (%ld pages) @ %p\n",
+				 npages, kvm_mips_te->guest_pmap);
+
+			/* Now setup the page table */
+			for (i = 0; i < npages; i++)
+				kvm_mips_te->guest_pmap[i] = KVM_INVALID_PAGE;
+		}
+	}
+out:
+	return;
+}
+
+static struct kvm_vcpu *kvm_mips_te_vcpu_create(struct kvm *kvm, unsigned int id)
+{
+	extern char mips32_exception[], mips32_exceptionEnd[];
+	extern char mips32_GuestException[], mips32_GuestExceptionEnd[];
+	int err, size, offset;
+	void *gebase;
+	int i;
+	struct kvm_mips_te *kvm_mips_te = kvm->arch.impl;
+	struct kvm_mips_vcpu_te *vcpu_te;
+	struct kvm_vcpu *vcpu;
+
+	vcpu_te = kzalloc(sizeof(struct kvm_mips_vcpu_te), GFP_KERNEL);
+	if (!vcpu_te) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
+	if (!vcpu) {
+		err = -ENOMEM;
+		goto out;
+	}
+	vcpu->arch.impl = vcpu_te;
+	vcpu_te->vcpu = vcpu;
+	vcpu_te->kvm_mips_te = kvm_mips_te;
+
+	err = kvm_vcpu_init(vcpu, kvm, id);
+
+	if (err)
+		goto out_free_cpu;
+
+	kvm_info("kvm @ %p: create cpu %d at %p\n", kvm, id, vcpu);
+
+	/* Allocate space for host mode exception handlers that handle
+	 * guest mode exits
+	 */
+	if (cpu_has_veic || cpu_has_vint)
+		size = 0x200 + VECTORSPACING * 64;
+	else
+		size = 0x200;
+
+	/* Save Linux EBASE */
+	vcpu_te->host_ebase = (void *)(long)(read_c0_ebase() & 0x3ff);
+
+	gebase = kzalloc(ALIGN(size, PAGE_SIZE), GFP_KERNEL);
+
+	if (!gebase) {
+		err = -ENOMEM;
+		goto out_free_cpu;
+	}
+	kvm_info("Allocated %d bytes for KVM Exception Handlers @ %p\n",
+		 ALIGN(size, PAGE_SIZE), gebase);
+
+	/* Save new ebase */
+	vcpu_te->guest_ebase = gebase;
+
+	/* Copy L1 Guest Exception handler to correct offset */
+
+	/* TLB Refill, EXL = 0 */
+	memcpy(gebase, mips32_exception,
+	       mips32_exceptionEnd - mips32_exception);
+
+	/* General Exception Entry point */
+	memcpy(gebase + 0x180, mips32_exception,
+	       mips32_exceptionEnd - mips32_exception);
+
+	/* For vectored interrupts poke the exception code @ all offsets 0-7 */
+	for (i = 0; i < 8; i++) {
+		kvm_debug("L1 Vectored handler @ %p\n",
+			  gebase + 0x200 + (i * VECTORSPACING));
+		memcpy(gebase + 0x200 + (i * VECTORSPACING), mips32_exception,
+		       mips32_exceptionEnd - mips32_exception);
+	}
+
+	/* General handler, relocate to unmapped space for sanity's sake */
+	offset = 0x2000;
+	kvm_info("Installing KVM Exception handlers @ %p, %#x bytes\n",
+		 gebase + offset,
+		 (unsigned)(mips32_GuestExceptionEnd - mips32_GuestException));
+
+	memcpy(gebase + offset, mips32_GuestException,
+	       mips32_GuestExceptionEnd - mips32_GuestException);
+
+	/* Invalidate the icache for these ranges */
+	mips32_SyncICache((unsigned long) gebase, ALIGN(size, PAGE_SIZE));
+
+	/*
+	 * Allocate comm page for the guest kernel; a TLB entry will be
+	 * reserved to map GVA 0xFFFF8000 to this page.
+	 */
+	vcpu_te->kseg0_commpage = kzalloc(PAGE_SIZE << 1, GFP_KERNEL);
+
+	if (!vcpu_te->kseg0_commpage) {
+		err = -ENOMEM;
+		goto out_free_gebase;
+	}
+
+	kvm_info("Allocated COMM page @ %p\n", vcpu_te->kseg0_commpage);
+	kvm_mips_commpage_init(vcpu);
+
+	/* Init */
+	vcpu_te->last_sched_cpu = -1;
+
+	/* Start off the timer */
+	kvm_mips_emulate_count(vcpu);
+
+	return vcpu;
+
+out_free_gebase:
+	kfree(gebase);
+
+out_free_cpu:
+	kfree(vcpu);
+
+out:
+	kfree(vcpu_te);
+	return ERR_PTR(err);
+}
+
+static void kvm_mips_te_vcpu_free(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	hrtimer_cancel(&vcpu_te->comparecount_timer);
+
+	kvm_vcpu_uninit(vcpu);
+
+	kvm_mips_dump_stats(vcpu);
+
+	kfree(vcpu_te->guest_ebase);
+	kfree(vcpu_te->kseg0_commpage);
+	kfree(vcpu_te);
+}
+
+static int kvm_mips_te_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	int r = 0;
+	sigset_t sigsaved;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+
+	if (vcpu->sigset_active)
+		sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
+
+	if (vcpu->mmio_needed) {
+		if (!vcpu->mmio_is_write)
+			kvm_mips_complete_mmio_load(vcpu, run);
+		vcpu->mmio_needed = 0;
+	}
+
+	/* Check if we have any exceptions/interrupts pending */
+	kvm_mips_deliver_interrupts(vcpu,
+				    kvm_read_c0_guest_cause(vcpu_te->cop0));
+
+	local_irq_disable();
+	kvm_guest_enter();
+
+	r = __kvm_mips_vcpu_run(run, vcpu);
+
+	kvm_guest_exit();
+	local_irq_enable();
+
+	if (vcpu->sigset_active)
+		sigprocmask(SIG_SETMASK, &sigsaved, NULL);
+
+	return r;
+}
+
+static int kvm_mips_te_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+					    struct kvm_mips_interrupt *irq)
+{
+	struct kvm_mips_vcpu_te *dvcpu_te;
+	int intr = (int)irq->irq;
+	struct kvm_vcpu *dvcpu = NULL;
+
+	if (intr == 3 || intr == -3 || intr == 4 || intr == -4)
+		kvm_debug("%s: CPU: %d, INTR: %d\n", __func__, irq->cpu,
+			  (int)intr);
+
+	/* Reject out-of-range vcpu indices from userspace. */
+	if (irq->cpu == -1)
+		dvcpu = vcpu;
+	else if (irq->cpu >= 0 &&
+		 irq->cpu < atomic_read(&vcpu->kvm->online_vcpus))
+		dvcpu = vcpu->kvm->vcpus[irq->cpu];
+	else
+		return -EINVAL;
+
+	if (intr == 2 || intr == 3 || intr == 4) {
+		kvm_mips_callbacks->queue_io_int(dvcpu, irq);
+
+	} else if (intr == -2 || intr == -3 || intr == -4) {
+		kvm_mips_callbacks->dequeue_io_int(dvcpu, irq);
+	} else {
+		kvm_err("%s: invalid interrupt ioctl (%d:%d)\n", __func__,
+			irq->cpu, irq->irq);
+		return -EINVAL;
+	}
+	dvcpu_te = dvcpu->arch.impl;
+	dvcpu_te->wait = 0;
+
+	if (waitqueue_active(&dvcpu->wq))
+		wake_up_interruptible(&dvcpu->wq);
+
+	return 0;
+}
+
+static long kvm_mips_te_vcpu_ioctl(struct kvm_vcpu *vcpu,
+				   unsigned int ioctl,
+				   unsigned long arg)
+{
+	void __user *argp = (void __user *)arg;
+	long r;
+
+	switch (ioctl) {
+	case KVM_NMI:
+		/* Treat the NMI as a CPU reset */
+		r = kvm_mips_te_reset_vcpu(vcpu);
+		break;
+	case KVM_INTERRUPT:
+		{
+			struct kvm_mips_interrupt irq;
+			r = -EFAULT;
+			if (copy_from_user(&irq, argp, sizeof(irq)))
+				goto out;
+
+			kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__,
+				  irq.irq);
+
+			r = kvm_mips_te_vcpu_ioctl_interrupt(vcpu, &irq);
+			break;
+		}
+	default:
+		r = -ENOIOCTLCMD;
+		break;
+	}
+out:
+	return r;
+}
+
+#define KVM_REG_MIPS_CP0_INDEX (0x10000 + 8 * 0 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYLO0 (0x10000 + 8 * 2 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYLO1 (0x10000 + 8 * 3 + 0)
+#define KVM_REG_MIPS_CP0_CONTEXT (0x10000 + 8 * 4 + 0)
+#define KVM_REG_MIPS_CP0_USERLOCAL (0x10000 + 8 * 4 + 2)
+#define KVM_REG_MIPS_CP0_PAGEMASK (0x10000 + 8 * 5 + 0)
+#define KVM_REG_MIPS_CP0_PAGEGRAIN (0x10000 + 8 * 5 + 1)
+#define KVM_REG_MIPS_CP0_WIRED (0x10000 + 8 * 6 + 0)
+#define KVM_REG_MIPS_CP0_HWRENA (0x10000 + 8 * 7 + 0)
+#define KVM_REG_MIPS_CP0_BADVADDR (0x10000 + 8 * 8 + 0)
+#define KVM_REG_MIPS_CP0_COUNT (0x10000 + 8 * 9 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYHI (0x10000 + 8 * 10 + 0)
+#define KVM_REG_MIPS_CP0_COMPARE (0x10000 + 8 * 11 + 0)
+#define KVM_REG_MIPS_CP0_STATUS (0x10000 + 8 * 12 + 0)
+#define KVM_REG_MIPS_CP0_CAUSE (0x10000 + 8 * 13 + 0)
+#define KVM_REG_MIPS_CP0_EBASE (0x10000 + 8 * 15 + 1)
+#define KVM_REG_MIPS_CP0_CONFIG (0x10000 + 8 * 16 + 0)
+#define KVM_REG_MIPS_CP0_CONFIG1 (0x10000 + 8 * 16 + 1)
+#define KVM_REG_MIPS_CP0_CONFIG2 (0x10000 + 8 * 16 + 2)
+#define KVM_REG_MIPS_CP0_CONFIG3 (0x10000 + 8 * 16 + 3)
+#define KVM_REG_MIPS_CP0_CONFIG7 (0x10000 + 8 * 16 + 7)
+#define KVM_REG_MIPS_CP0_XCONTEXT (0x10000 + 8 * 20 + 0)
+#define KVM_REG_MIPS_CP0_ERROREPC (0x10000 + 8 * 30 + 0)
+
+static int kvm_mips_te_get_reg(struct kvm_vcpu *vcpu,
+			       const struct kvm_one_reg *reg)
+{
+	u64 __user *uaddr = (u64 __user *)(long)reg->addr;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
+	s64 v;
+
+	switch (reg->id) {
+	case KVM_REG_MIPS_CP0_INDEX:
+		v = (long)kvm_read_c0_guest_index(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONTEXT:
+		v = (long)kvm_read_c0_guest_context(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_PAGEMASK:
+		v = (long)kvm_read_c0_guest_pagemask(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_WIRED:
+		v = (long)kvm_read_c0_guest_wired(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_BADVADDR:
+		v = (long)kvm_read_c0_guest_badvaddr(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYHI:
+		v = (long)kvm_read_c0_guest_entryhi(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_STATUS:
+		v = (long)kvm_read_c0_guest_status(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CAUSE:
+		v = (long)kvm_read_c0_guest_cause(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_ERROREPC:
+		v = (long)kvm_read_c0_guest_errorepc(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONFIG:
+		v = (long)kvm_read_c0_guest_config(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONFIG1:
+		v = (long)kvm_read_c0_guest_config1(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONFIG2:
+		v = (long)kvm_read_c0_guest_config2(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONFIG3:
+		v = (long)kvm_read_c0_guest_config3(cop0);
+		break;
+	case KVM_REG_MIPS_CP0_CONFIG7:
+		v = (long)kvm_read_c0_guest_config7(cop0);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return put_user(v, uaddr);
+}
+
+static int kvm_mips_te_set_reg(struct kvm_vcpu *vcpu,
+			       const struct kvm_one_reg *reg,
+			       u64 v)
+{
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
+
+	switch (reg->id) {
+	case KVM_REG_MIPS_CP0_INDEX:
+		kvm_write_c0_guest_index(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_CONTEXT:
+		kvm_write_c0_guest_context(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_PAGEMASK:
+		kvm_write_c0_guest_pagemask(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_WIRED:
+		kvm_write_c0_guest_wired(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_BADVADDR:
+		kvm_write_c0_guest_badvaddr(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYHI:
+		kvm_write_c0_guest_entryhi(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_STATUS:
+		kvm_write_c0_guest_status(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_CAUSE:
+		kvm_write_c0_guest_cause(cop0, v);
+		break;
+	case KVM_REG_MIPS_CP0_ERROREPC:
+		kvm_write_c0_guest_errorepc(cop0, v);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int kvm_mips_te_arch_init(void *opaque)
+{
+	int ret;
+
+	if (kvm_mips_callbacks) {
+		kvm_err("kvm: module already exists\n");
+		return -EEXIST;
+	}
+
+	/*
+	 * On MIPS, kernel modules are executed from "mapped space",
+	 * which requires TLBs.  The TLB handling code is statically
+	 * linked with the rest of the kernel (kvm_tlb.c) to avoid the
+	 * possibility of double faulting. The issue is that the TLB
+	 * code references routines that are part of the KVM
+	 * module, which are only available once the module is loaded.
+	 */
+	kvm_mips_gfn_to_pfn = gfn_to_pfn;
+	kvm_mips_release_pfn_clean = kvm_release_pfn_clean;
+	kvm_mips_is_error_pfn = is_error_pfn;
+
+	ret = kvm_mips_emulation_init(&kvm_mips_callbacks);
+
+	pr_info("KVM/MIPS Initialized\n");
+
+	return ret;
+}
+
+void kvm_mips_te_arch_exit(void)
+{
+	kvm_mips_callbacks = NULL;
+
+	kvm_mips_gfn_to_pfn = NULL;
+	kvm_mips_release_pfn_clean = NULL;
+	kvm_mips_is_error_pfn = NULL;
+
+	pr_info("KVM/MIPS unloaded\n");
+}
+
+int kvm_mips_te_vcpu_dump_regs(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_mips_vcpu_te *vcpu_te;
+	struct mips_coproc *cop0;
+
+	if (!vcpu)
+		return -1;
+	vcpu_te = vcpu->arch.impl;
+	cop0 = vcpu_te->cop0;
+
+	printk("VCPU Register Dump:\n");
+	printk("\tepc = 0x%08lx\n", vcpu->arch.epc);
+	printk("\texceptions: %08lx\n", vcpu_te->pending_exceptions);
+
+	for (i = 0; i < 32; i += 4) {
+		printk("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i,
+		       vcpu->arch.gprs[i],
+		       vcpu->arch.gprs[i + 1],
+		       vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]);
+	}
+	printk("\thi: 0x%08lx\n", vcpu->arch.hi);
+	printk("\tlo: 0x%08lx\n", vcpu->arch.lo);
+
+	printk("\tStatus: 0x%08lx, Cause: 0x%08lx\n",
+	       kvm_read_c0_guest_status(cop0), kvm_read_c0_guest_cause(cop0));
+
+	printk("\tEPC: 0x%08lx\n", kvm_read_c0_guest_epc(cop0));
+
+	return 0;
+}
+
+static void kvm_mips_te_comparecount_func(unsigned long data)
+{
+	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+
+	kvm_mips_callbacks->queue_timer_int(vcpu);
+
+	vcpu_te->wait = 0;
+	if (waitqueue_active(&vcpu->wq))
+		wake_up_interruptible(&vcpu->wq);
+}
+
+/*
+ * low level hrtimer wake routine.
+ */
+static enum hrtimer_restart kvm_mips_te_comparecount_wakeup(struct hrtimer *timer)
+{
+	struct kvm_mips_vcpu_te *vcpu_te;
+
+	vcpu_te = container_of(timer,
+			       struct kvm_mips_vcpu_te,
+			       comparecount_timer);
+	kvm_mips_te_comparecount_func((unsigned long)vcpu_te->vcpu);
+	hrtimer_forward_now(&vcpu_te->comparecount_timer,
+			    ktime_set(0, MS_TO_NS(10)));
+	return HRTIMER_RESTART;
+}
+
+static int kvm_mips_te_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+
+	kvm_mips_callbacks->vcpu_init(vcpu);
+	hrtimer_init(&vcpu_te->comparecount_timer, CLOCK_MONOTONIC,
+		     HRTIMER_MODE_REL);
+	vcpu_te->comparecount_timer.function = kvm_mips_te_comparecount_wakeup;
+	kvm_mips_init_shadow_tlb(vcpu);
+	return 0;
+}
+
+static int kvm_mips_te_vcpu_setup(struct kvm_vcpu *vcpu)
+{
+	return kvm_mips_callbacks->vcpu_setup(vcpu);
+}
+
+static void kvm_mips_set_c0_status(void)
+{
+	uint32_t status = read_c0_status();
+
+	if (cpu_has_fpu)
+		status |= (ST0_CU1);
+
+	if (cpu_has_dsp)
+		status |= (ST0_MX);
+
+	write_c0_status(status);
+	ehb();
+}
+
+/*
+ * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
+ */
+int kvm_mips_te_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	uint32_t cause = vcpu_te->host_cp0_cause;
+	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
+	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	run->ready_for_interrupt_injection = 1;
+
+	/*
+	 * Set the appropriate status bits based on host CPU features,
+	 * before we hit the scheduler
+	 */
+	kvm_mips_set_c0_status();
+
+	local_irq_enable();
+
+	kvm_debug("%s: cause: %#x, PC: %p, kvm_run: %p, kvm_vcpu: %p\n",
+		  __func__, cause, opc, run, vcpu);
+
+	/* Do a privilege check; if in UM, most of these exit conditions
+	 * end up causing an exception to be delivered to the guest kernel.
+	 */
+	er = kvm_mips_check_privilege(cause, opc, run, vcpu);
+	if (er == EMULATE_PRIV_FAIL) {
+		goto skip_emul;
+	} else if (er == EMULATE_FAIL) {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+		goto skip_emul;
+	}
+
+	switch (exccode) {
+	case T_INT:
+		kvm_debug("[%d]T_INT @ %p\n", vcpu->vcpu_id, opc);
+
+		++vcpu->stat.int_exits;
+		trace_kvm_exit(vcpu, INT_EXITS);
+
+		if (need_resched())
+			cond_resched();
+
+		ret = RESUME_GUEST;
+		break;
+
+	case T_COP_UNUSABLE:
+		kvm_debug("T_COP_UNUSABLE: @ PC: %p\n", opc);
+
+		++vcpu->stat.cop_unusable_exits;
+		trace_kvm_exit(vcpu, COP_UNUSABLE_EXITS);
+		ret = kvm_mips_callbacks->handle_cop_unusable(vcpu);
+		/* XXXKYMA: Might need to return to user space */
+		if (run->exit_reason == KVM_EXIT_IRQ_WINDOW_OPEN)
+			ret = RESUME_HOST;
+		break;
+
+	case T_TLB_MOD:
+		++vcpu->stat.tlbmod_exits;
+		trace_kvm_exit(vcpu, TLBMOD_EXITS);
+		ret = kvm_mips_callbacks->handle_tlb_mod(vcpu);
+		break;
+
+	case T_TLB_ST_MISS:
+		kvm_debug("TLB ST fault:  cause %#x, status %#lx, PC: %p, BadVaddr: %#lx\n",
+		     cause, kvm_read_c0_guest_status(vcpu_te->cop0), opc,
+		     badvaddr);
+
+		++vcpu->stat.tlbmiss_st_exits;
+		trace_kvm_exit(vcpu, TLBMISS_ST_EXITS);
+		ret = kvm_mips_callbacks->handle_tlb_st_miss(vcpu);
+		break;
+
+	case T_TLB_LD_MISS:
+		kvm_debug("TLB LD fault: cause %#x, PC: %p, BadVaddr: %#lx\n",
+			  cause, opc, badvaddr);
+
+		++vcpu->stat.tlbmiss_ld_exits;
+		trace_kvm_exit(vcpu, TLBMISS_LD_EXITS);
+		ret = kvm_mips_callbacks->handle_tlb_ld_miss(vcpu);
+		break;
+
+	case T_ADDR_ERR_ST:
+		++vcpu->stat.addrerr_st_exits;
+		trace_kvm_exit(vcpu, ADDRERR_ST_EXITS);
+		ret = kvm_mips_callbacks->handle_addr_err_st(vcpu);
+		break;
+
+	case T_ADDR_ERR_LD:
+		++vcpu->stat.addrerr_ld_exits;
+		trace_kvm_exit(vcpu, ADDRERR_LD_EXITS);
+		ret = kvm_mips_callbacks->handle_addr_err_ld(vcpu);
+		break;
+
+	case T_SYSCALL:
+		++vcpu->stat.syscall_exits;
+		trace_kvm_exit(vcpu, SYSCALL_EXITS);
+		ret = kvm_mips_callbacks->handle_syscall(vcpu);
+		break;
+
+	case T_RES_INST:
+		++vcpu->stat.resvd_inst_exits;
+		trace_kvm_exit(vcpu, RESVD_INST_EXITS);
+		ret = kvm_mips_callbacks->handle_res_inst(vcpu);
+		break;
+
+	case T_BREAK:
+		++vcpu->stat.break_inst_exits;
+		trace_kvm_exit(vcpu, BREAK_INST_EXITS);
+		ret = kvm_mips_callbacks->handle_break(vcpu);
+		break;
+
+	default:
+		kvm_err("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
+		     exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
+		     kvm_read_c0_guest_status(vcpu_te->cop0));
+		kvm_mips_te_vcpu_dump_regs(vcpu);
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+		break;
+
+	}
+
+skip_emul:
+	local_irq_disable();
+
+	if (er == EMULATE_DONE && !(ret & RESUME_HOST))
+		kvm_mips_deliver_interrupts(vcpu, cause);
+
+	if (!(ret & RESUME_HOST)) {
+		/* Only check for signals if not already exiting to userspace */
+		if (signal_pending(current)) {
+			run->exit_reason = KVM_EXIT_INTR;
+			ret = (-EINTR << 2) | RESUME_HOST;
+			++vcpu->stat.signal_exits;
+			trace_kvm_exit(vcpu, SIGNAL_EXITS);
+		}
+	}
+
+	return ret;
+}
 
 static gpa_t kvm_trap_emul_gva_to_gpa_cb(gva_t gva)
 {
@@ -42,9 +820,10 @@ static gpa_t kvm_trap_emul_gva_to_gpa_cb(gva_t gva)
 
 static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -76,19 +855,19 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
 	if (KVM_GUEST_KSEGX(badvaddr) < KVM_GUEST_KSEG0
 	    || KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG23) {
 #ifdef DEBUG
-		kvm_debug
-		    ("USER/KSEG23 ADDR TLB MOD fault: cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		kvm_debug("USER/KSEG23 ADDR TLB MOD fault: cause %#x, PC: %p, BadVaddr: %#lx\n",
+			  cause, opc, badvaddr);
 #endif
 		er = kvm_mips_handle_tlbmod(cause, opc, run, vcpu);
 
@@ -102,19 +881,17 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 		/* XXXKYMA: The guest kernel does not expect to get this fault when we are not
 		 * using HIGHMEM. Need to address this in a HIGHMEM kernel
 		 */
-		printk
-		    ("TLB MOD fault not handled, cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		printk("TLB MOD fault not handled, cause %#x, PC: %p, BadVaddr: %#lx\n",
+		       cause, opc, badvaddr);
 		kvm_mips_dump_host_tlbs();
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	} else {
-		printk
-		    ("Illegal TLB Mod fault address , cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		printk("Illegal TLB Mod fault address, cause %#x, PC: %p, BadVaddr: %#lx\n",
+		       cause, opc, badvaddr);
 		kvm_mips_dump_host_tlbs();
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
@@ -123,10 +900,11 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -139,8 +917,7 @@ static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 	} else if (KVM_GUEST_KSEGX(badvaddr) < KVM_GUEST_KSEG0
 		   || KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG23) {
 #ifdef DEBUG
-		kvm_debug
-		    ("USER ADDR TLB LD fault: cause %#lx, PC: %p, BadVaddr: %#lx\n",
+		kvm_debug("USER ADDR TLB ST fault: cause %#x, PC: %p, BadVaddr: %#lx\n",
 		     cause, opc, badvaddr);
 #endif
 		er = kvm_mips_handle_tlbmiss(cause, opc, run, vcpu);
@@ -154,17 +931,15 @@ static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 		/* All KSEG0 faults are handled by KVM, as the guest kernel does not
 		 * expect to ever get them
 		 */
-		if (kvm_mips_handle_kseg0_tlb_fault
-		    (vcpu->arch.host_cp0_badvaddr, vcpu) < 0) {
+		if (kvm_mips_handle_kseg0_tlb_fault(vcpu_te->host_cp0_badvaddr, vcpu) < 0) {
 			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			ret = RESUME_HOST;
 		}
 	} else {
-		kvm_err
-		    ("Illegal TLB LD fault address , cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		kvm_err("Illegal TLB ST fault address, cause %#x, PC: %p, BadVaddr: %#lx\n",
+			cause, opc, badvaddr);
 		kvm_mips_dump_host_tlbs();
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
@@ -173,10 +948,11 @@ static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -208,17 +984,15 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 			ret = RESUME_HOST;
 		}
 	} else if (KVM_GUEST_KSEGX(badvaddr) == KVM_GUEST_KSEG0) {
-		if (kvm_mips_handle_kseg0_tlb_fault
-		    (vcpu->arch.host_cp0_badvaddr, vcpu) < 0) {
+		if (kvm_mips_handle_kseg0_tlb_fault(vcpu_te->host_cp0_badvaddr, vcpu) < 0) {
 			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 			ret = RESUME_HOST;
 		}
 	} else {
-		printk
-		    ("Illegal TLB ST fault address , cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		printk("Illegal TLB LD fault address, cause %#x, PC: %p, BadVaddr: %#lx\n",
+		       cause, opc, badvaddr);
 		kvm_mips_dump_host_tlbs();
-		kvm_arch_vcpu_dump_regs(vcpu);
+		kvm_mips_te_vcpu_dump_regs(vcpu);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
@@ -227,10 +1001,11 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -249,9 +1024,8 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 			ret = RESUME_HOST;
 		}
 	} else {
-		printk
-		    ("Address Error (STORE): cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		printk("Address Error (STORE): cause %#x, PC: %p, BadVaddr: %#lx\n",
+		       cause, opc, badvaddr);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 	}
@@ -260,10 +1034,11 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	unsigned long badvaddr = vcpu_te->host_cp0_badvaddr;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -281,9 +1056,8 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 			ret = RESUME_HOST;
 		}
 	} else {
-		printk
-		    ("Address Error (LOAD): cause %#lx, PC: %p, BadVaddr: %#lx\n",
-		     cause, opc, badvaddr);
+		printk("Address Error (LOAD): cause %#x, PC: %p, BadVaddr: %#lx\n",
+		       cause, opc, badvaddr);
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		ret = RESUME_HOST;
 		er = EMULATE_FAIL;
@@ -293,9 +1067,10 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -311,9 +1086,10 @@ static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -329,9 +1105,10 @@ static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
 	struct kvm_run *run = vcpu->run;
 	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.epc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	u32 cause = vcpu_te->host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -357,7 +1134,8 @@ static int kvm_trap_emul_vcpu_init(struct kvm_vcpu *vcpu)
 
 static int kvm_trap_emul_vcpu_setup(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	struct kvm_mips_vcpu_te *vcpu_te = vcpu->arch.impl;
+	struct mips_coproc *cop0 = vcpu_te->cop0;
 	uint32_t config1;
 	int vcpu_id = vcpu->vcpu_id;
 
@@ -430,3 +1208,45 @@ int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
 	*install_callbacks = &kvm_trap_emul_callbacks;
 	return 0;
 }
+
+static long kvm_mips_te_vm_ioctl(struct kvm *kvm, unsigned int ioctl,
+				 unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+static const struct kvm_mips_ops kvm_mips_te_ops = {
+	.vcpu_runnable = kvm_mips_te_vcpu_runnable,
+	.destroy_vm = kvm_mips_te_destroy_vm,
+	.commit_memory_region = kvm_mips_te_commit_memory_region,
+	.vcpu_create = kvm_mips_te_vcpu_create,
+	.vcpu_free = kvm_mips_te_vcpu_free,
+	.vcpu_run = kvm_mips_te_vcpu_run,
+	.vm_ioctl = kvm_mips_te_vm_ioctl,
+	.vcpu_ioctl = kvm_mips_te_vcpu_ioctl,
+	.get_reg = kvm_mips_te_get_reg,
+	.set_reg = kvm_mips_te_set_reg,
+	.cpu_has_pending_timer = kvm_mips_pending_timer,
+	.vcpu_init = kvm_mips_te_vcpu_init,
+	.vcpu_setup = kvm_mips_te_vcpu_setup,
+	.vcpu_load = kvm_mips_te_vcpu_load,
+	.vcpu_put = kvm_mips_te_vcpu_put,
+};
+
+int kvm_mips_te_init_vm(struct kvm *kvm, unsigned long type)
+{
+	kvm->arch.ops = &kvm_mips_te_ops;
+	kvm->arch.impl = kzalloc(sizeof(struct kvm_mips_te), GFP_KERNEL);
+	if (!kvm->arch.impl)
+		return -ENOMEM;
+
+	if (atomic_inc_return(&kvm_mips_instance) == 1) {
+		kvm_info("%s: 1st KVM instance, setup host TLB parameters\n",
+			 __func__);
+		on_each_cpu(kvm_mips_te_init_vm_percpu, kvm, 1);
+	}
+	return 0;
+}
+
+EXPORT_TRACEPOINT_SYMBOL(kvm_exit);
-- 
1.7.11.7

* [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (8 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 16:11   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code David Daney
                   ` (24 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The current implementation does nothing with them, but future MIPSVZ
work needs them.  Also add the asm-offsets accessors for the fields.
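
For reference, a minimal userspace sketch of how the new ioctls could
be exercised (vcpu_fd and the FCSR bit twiddling are illustrative
assumptions; the kvm_fpu field names follow the MIPS layout used by
this series):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Round-trip the guest FPU state through the new ioctls.
	 * vcpu_fd is assumed to be an open KVM vCPU descriptor. */
	static int set_flush_to_zero(int vcpu_fd)
	{
		struct kvm_fpu fpu;

		if (ioctl(vcpu_fd, KVM_GET_FPU, &fpu) < 0) {
			perror("KVM_GET_FPU");
			return -1;
		}

		fpu.fcsr |= 1u << 24;	/* FCSR.FS: flush denormals to zero */

		if (ioctl(vcpu_fd, KVM_SET_FPU, &fpu) < 0) {
			perror("KVM_SET_FPU");
			return -1;
		}
		return 0;
	}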

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_host.h |  8 ++++++++
 arch/mips/kernel/asm-offsets.c   |  8 ++++++++
 arch/mips/kvm/kvm_mips.c         | 26 ++++++++++++++++++++++++--
 3 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 16013c7..505b804 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -102,6 +102,14 @@ struct kvm_vcpu_arch {
 	unsigned long lo;
 	unsigned long epc;
 
+	/* FPU state */
+	u64 fpr[32];
+	u32 fir;
+	u32 fccr;
+	u32 fexr;
+	u32 fenr;
+	u32 fcsr;
+
 	void *impl;
 };
 
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 5a9222e..03bf363 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -377,6 +377,14 @@ void output_kvm_defines(void)
 	OFFSET(KVM_VCPU_ARCH_HI, kvm_vcpu, arch.hi);
 	OFFSET(KVM_VCPU_ARCH_EPC, kvm_vcpu, arch.epc);
 	OFFSET(KVM_VCPU_ARCH_IMPL, kvm_vcpu, arch.impl);
+	BLANK();
+	OFFSET(KVM_VCPU_ARCH_FPR,	kvm_vcpu, arch.fpr);
+	OFFSET(KVM_VCPU_ARCH_FIR,	kvm_vcpu, arch.fir);
+	OFFSET(KVM_VCPU_ARCH_FCCR,	kvm_vcpu, arch.fccr);
+	OFFSET(KVM_VCPU_ARCH_FEXR,	kvm_vcpu, arch.fexr);
+	OFFSET(KVM_VCPU_ARCH_FENR,	kvm_vcpu, arch.fenr);
+	OFFSET(KVM_VCPU_ARCH_FCSR,	kvm_vcpu, arch.fcsr);
+	BLANK();
 
 	OFFSET(KVM_MIPS_VCPU_TE_HOST_EBASE, kvm_mips_vcpu_te, host_ebase);
 	OFFSET(KVM_MIPS_VCPU_TE_GUEST_EBASE, kvm_mips_vcpu_te, guest_ebase);
diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index 041caad..18c8dc8 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -465,12 +465,34 @@ int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
-	return -ENOIOCTLCMD;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++)
+		fpu->fpr[i] = vcpu->arch.fpr[i];
+
+	fpu->fir = vcpu->arch.fir;
+	fpu->fccr = vcpu->arch.fccr;
+	fpu->fexr = vcpu->arch.fexr;
+	fpu->fenr = vcpu->arch.fenr;
+	fpu->fcsr = vcpu->arch.fcsr;
+
+	return 0;
 }
 
 int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
-	return -ENOIOCTLCMD;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(vcpu->arch.fpr); i++)
+		vcpu->arch.fpr[i] = fpu->fpr[i];
+
+	vcpu->arch.fir = fpu->fir;
+	vcpu->arch.fccr = fpu->fccr;
+	vcpu->arch.fexr = fpu->fexr;
+	vcpu->arch.fenr = fpu->fenr;
+	vcpu->arch.fcsr = fpu->fcsr;
+
+	return 0;
 }
 
 int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
-- 
1.7.11.7

* [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (9 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 22:03   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al David Daney
                   ` (23 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Introduce __compute_return_epc_for_insn0() entry point.
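
As a sketch of the intended use (illustrative only: guest_fcsr and
kvm_fcr31() are assumptions, not part of this patch), a hypervisor
caller can now supply its own FCSR accessor instead of having the
host thread's FPU state read:

	#include <asm/branch.h>
	#include <asm/inst.h>
	#include <asm/ptrace.h>

	/* Guest FCSR value tracked elsewhere by the hypervisor
	 * (assumption, for illustration only). */
	static unsigned int guest_fcsr;

	static unsigned int kvm_fcr31(void)
	{
		return guest_fcsr;
	}

	/* Compute the guest's post-branch EPC without touching the
	 * host FPU; unlike the wrapper, no SIGBUS is forced on error. */
	static int emulate_guest_branch(struct pt_regs *regs,
					union mips_instruction insn)
	{
		return __compute_return_epc_for_insn0(regs, insn,
						      kvm_fcr31);
	}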

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/branch.h |  7 +++++
 arch/mips/kernel/branch.c      | 63 +++++++++++++++++++++++++++++++-----------
 2 files changed, 54 insertions(+), 16 deletions(-)

diff --git a/arch/mips/include/asm/branch.h b/arch/mips/include/asm/branch.h
index e28a3e0..b3de685 100644
--- a/arch/mips/include/asm/branch.h
+++ b/arch/mips/include/asm/branch.h
@@ -37,6 +37,13 @@ static inline unsigned long exception_epc(struct pt_regs *regs)
 
 #define BRANCH_LIKELY_TAKEN 0x0001
 
+extern int __compute_return_epc(struct pt_regs *regs);
+extern int __compute_return_epc_for_insn(struct pt_regs *regs,
+					 union mips_instruction insn);
+extern int __compute_return_epc_for_insn0(struct pt_regs *regs,
+					  union mips_instruction insn,
+					  unsigned int (*get_fcr31)(void));
+
 static inline int compute_return_epc(struct pt_regs *regs)
 {
 	if (get_isa16_mode(regs->cp0_epc)) {
diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c
index 46c2ad0..e47145b 100644
--- a/arch/mips/kernel/branch.c
+++ b/arch/mips/kernel/branch.c
@@ -195,17 +195,18 @@ int __MIPS16e_compute_return_epc(struct pt_regs *regs)
 }
 
 /**
- * __compute_return_epc_for_insn - Computes the return address and do emulate
+ * __compute_return_epc_for_insn0 - Computes the return address and do emulate
  *				    branch simulation, if required.
  *
  * @regs:	Pointer to pt_regs
  * @insn:	branch instruction to decode
- * @returns:	-EFAULT on error and forces SIGBUS, and on success
+ * @returns:	-EFAULT on error, and on success
  *		returns 0 or BRANCH_LIKELY_TAKEN as appropriate after
  *		evaluating the branch.
  */
-int __compute_return_epc_for_insn(struct pt_regs *regs,
-				   union mips_instruction insn)
+int __compute_return_epc_for_insn0(struct pt_regs *regs,
+				   union mips_instruction insn,
+				   unsigned int (*get_fcr31)(void))
 {
 	unsigned int bit, fcr31, dspcontrol;
 	long epc = regs->cp0_epc;
@@ -281,7 +282,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 
 		case bposge32_op:
 			if (!cpu_has_dsp)
-				goto sigill;
+				return -EFAULT;
 
 			dspcontrol = rddsp(0x01);
 
@@ -364,13 +365,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	 * And now the FPA/cp1 branch instructions.
 	 */
 	case cop1_op:
-		preempt_disable();
-		if (is_fpu_owner())
-			asm volatile("cfc1\t%0,$31" : "=r" (fcr31));
-		else
-			fcr31 = current->thread.fpu.fcr31;
-		preempt_enable();
-
+		fcr31 = get_fcr31();
 		bit = (insn.i_format.rt >> 2);
 		bit += (bit != 0);
 		bit += 23;
@@ -434,11 +429,47 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
 	}
 
 	return ret;
+}
+EXPORT_SYMBOL_GPL(__compute_return_epc_for_insn0);
 
-sigill:
-	printk("%s: DSP branch but not DSP ASE - sending SIGBUS.\n", current->comm);
-	force_sig(SIGBUS, current);
-	return -EFAULT;
+static unsigned int __get_fcr31(void)
+{
+	unsigned int fcr31;
+
+	preempt_disable();
+	if (is_fpu_owner())
+		asm volatile(
+			".set push\n"
+			"\t.set mips1\n"
+			"\tcfc1\t%0,$31\n"
+			"\t.set pop" : "=r" (fcr31));
+	else
+		fcr31 = current->thread.fpu.fcr31;
+	preempt_enable();
+	return fcr31;
+}
+
+/**
+ * __compute_return_epc_for_insn - Compute the return address and emulate
+ *				    branch simulation, if required.
+ *
+ * @regs:	Pointer to pt_regs
+ * @insn:	branch instruction to decode
+ * @returns:	-EFAULT on error and forces SIGBUS, and on success
+ *		returns 0 or BRANCH_LIKELY_TAKEN as appropriate after
+ *		evaluating the branch.
+ */
+int __compute_return_epc_for_insn(struct pt_regs *regs,
+				   union mips_instruction insn)
+{
+	int r = __compute_return_epc_for_insn0(regs, insn, __get_fcr31);
+
+	if (r < 0) {
+		printk("%s: DSP branch but not DSP ASE - sending SIGBUS.\n", current->comm);
+		force_sig(SIGBUS, current);
+	}
+
+	return r;
 }
 EXPORT_SYMBOL_GPL(__compute_return_epc_for_insn);
 
-- 
1.7.11.7

* [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (10 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 22:07   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers David Daney
                   ` (22 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

---
 arch/mips/include/uapi/asm/inst.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h
index 0f4aec2..133abc1 100644
--- a/arch/mips/include/uapi/asm/inst.h
+++ b/arch/mips/include/uapi/asm/inst.h
@@ -117,7 +117,8 @@ enum bcop_op {
 enum cop0_coi_func {
 	tlbr_op	      = 0x01, tlbwi_op	    = 0x02,
 	tlbwr_op      = 0x06, tlbp_op	    = 0x08,
-	rfe_op	      = 0x10, eret_op	    = 0x18
+	rfe_op	      = 0x10, eret_op	    = 0x18,
+	wait_op	      = 0x20
 };
 
 /*
@@ -567,6 +568,24 @@ struct b_format {			/* BREAK and SYSCALL */
 	;)))
 };
 
+struct c0_format {			/* WAIT, TLB?? */
+	BITFIELD_FIELD(unsigned int opcode : 6,
+	BITFIELD_FIELD(unsigned int co : 1,
+	BITFIELD_FIELD(unsigned int code : 19,
+	BITFIELD_FIELD(unsigned int func : 6,
+	;))))
+};
+
+struct c0m_format {			/* MTC0, MFC0, ... */
+	BITFIELD_FIELD(unsigned int opcode : 6,
+	BITFIELD_FIELD(unsigned int func : 5,
+	BITFIELD_FIELD(unsigned int rt : 5,
+	BITFIELD_FIELD(unsigned int rd : 5,
+	BITFIELD_FIELD(unsigned int code : 8,
+	BITFIELD_FIELD(unsigned int sel : 3,
+	;))))))
+};
+
 struct ps_format {			/* MIPS-3D / paired single format */
 	BITFIELD_FIELD(unsigned int opcode : 6,
 	BITFIELD_FIELD(unsigned int rs : 5,
@@ -857,6 +876,8 @@ union mips_instruction {
 	struct f_format f_format;
 	struct ma_format ma_format;
 	struct b_format b_format;
+	struct c0_format c0_format;
+	struct c0m_format c0m_format;
 	struct ps_format ps_format;
 	struct v_format v_format;
 	struct fb_format fb_format;
-- 
1.7.11.7

* [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (11 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 22:10   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode David Daney
                   ` (21 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

There are accessors for both the guest control registers and the
guest CP0 context.
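
As a quick illustration (a sketch only; the ordering relative to the
rest of the guest-entry state is not shown), reading guest CP0 context
and setting the root GuestCtl0[GM] bit looks like this:

	#include <asm/mipsregs.h>

	static void guest_entry_sketch(void)
	{
		/* Guest CP0 context, read from root mode via mfgc0. */
		unsigned int gstatus = read_gc0_status();

		/* Root-mode guest control register: set GM so the
		 * next eret resumes in guest mode. */
		write_c0_guestctl0(read_c0_guestctl0() |
				   MIPS_GUESTCTL0F_GM);

		(void)gstatus;
	}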

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/mipsregs.h | 260 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 260 insertions(+)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 6f03c72..0addfec 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -50,10 +50,13 @@
 #define CP0_WIRED $6
 #define CP0_INFO $7
 #define CP0_BADVADDR $8
+#define CP0_BADINSTR $8, 1
+#define CP0_BADINSTRP $8, 2
 #define CP0_COUNT $9
 #define CP0_ENTRYHI $10
 #define CP0_COMPARE $11
 #define CP0_STATUS $12
+#define CP0_GUESTCTL0 $12, 6
 #define CP0_CAUSE $13
 #define CP0_EPC $14
 #define CP0_PRID $15
@@ -623,6 +626,10 @@
 #define MIPS_FPIR_L		(_ULCAST_(1) << 21)
 #define MIPS_FPIR_F64		(_ULCAST_(1) << 22)
 
+/* Bits in the MIPS VZ GuestCtl0 Register */
+#define MIPS_GUESTCTL0B_GM	31
+#define MIPS_GUESTCTL0F_GM	(_ULCAST_(1) << MIPS_GUESTCTL0B_GM)
+
 #ifndef __ASSEMBLY__
 
 /*
@@ -851,6 +858,144 @@ do {									\
 	local_irq_restore(__flags);					\
 } while (0)
 
+/*
+ * Macros to access the VZ Guest system control coprocessor
+ */
+
+#define __read_32bit_gc0_register(source, sel)				\
+	({ int __res;							\
+		__asm__ __volatile__(					\
+		".set mips64r2\n\t"					\
+		".set\tvirt\n\t"					\
+		".ifeq 0-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 0\n\t"				\
+		".endif\n\t"						\
+		".ifeq 1-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 1\n\t"				\
+		".endif\n\t"						\
+		".ifeq 2-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 2\n\t"				\
+		".endif\n\t"						\
+		".ifeq 3-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 3\n\t"				\
+		".endif\n\t"						\
+		".ifeq 4-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 4\n\t"				\
+		".endif\n\t"						\
+		".ifeq 5-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 5\n\t"				\
+		".endif\n\t"						\
+		".ifeq 6-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 6\n\t"				\
+		".endif\n\t"						\
+		".ifeq 7-" #sel "\n\t"					\
+		"mfgc0\t%0, " #source ", 7\n\t"				\
+		".endif\n\t"						\
+		".set\tmips0"						\
+		: "=r" (__res));					\
+	__res;								\
+})
+
+#define __read_64bit_gc0_register(source, sel)				\
+	({ unsigned long long __res;					\
+		__asm__ __volatile__(					\
+		".set mips64r2\n\t"					\
+		".set\tvirt\n\t"					\
+		".ifeq 0-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 0\n\t"			\
+		".endif\n\t"						\
+		".ifeq 1-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 1\n\t"			\
+		".endif\n\t"						\
+		".ifeq 2-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 2\n\t"			\
+		".endif\n\t"						\
+		".ifeq 3-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 3\n\t"			\
+		".endif\n\t"						\
+		".ifeq 4-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 4\n\t"			\
+		".endif\n\t"						\
+		".ifeq 5-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 5\n\t"			\
+		".endif\n\t"						\
+		".ifeq 6-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 6\n\t"			\
+		".endif\n\t"						\
+		".ifeq 7-" #sel "\n\t"					\
+		"dmfgc0\t%0, " #source ", 7\n\t"			\
+		".endif\n\t"						\
+		".set\tmips0"						\
+		: "=r" (__res));					\
+	__res;								\
+})
+
+#define __write_32bit_gc0_register(source, sel, value)			\
+do {									\
+	__asm__ __volatile__(						\
+		".set mips64r2\n\t"					\
+		".set\tvirt\n\t"					\
+		".ifeq 0-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 0\n\t"			\
+		".endif\n\t"						\
+		".ifeq 1-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 1\n\t"			\
+		".endif\n\t"						\
+		".ifeq 2-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 2\n\t"			\
+		".endif\n\t"						\
+		".ifeq 3-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 3\n\t"			\
+		".endif\n\t"						\
+		".ifeq 4-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 4\n\t"			\
+		".endif\n\t"						\
+		".ifeq 5-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 5\n\t"			\
+		".endif\n\t"						\
+		".ifeq 6-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 6\n\t"			\
+		".endif\n\t"						\
+		".ifeq 7-" #sel "\n\t"					\
+		"mtgc0\t%z0, " #source ", 7\n\t"			\
+		".endif\n\t"						\
+		".set\tmips0"						\
+		: : "Jr" ((unsigned int)(value)));			\
+} while (0)
+
+#define __write_64bit_gc0_register(source, sel, value)			\
+do {									\
+	__asm__ __volatile__(						\
+		".set mips64r2\n\t"					\
+		".set\tvirt\n\t"					\
+		".ifeq 0-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 0\n\t"			\
+		".endif\n\t"						\
+		".ifeq 1-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 1\n\t"			\
+		".endif\n\t"						\
+		".ifeq 2-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 2\n\t"			\
+		".endif\n\t"						\
+		".ifeq 3-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 3\n\t"			\
+		".endif\n\t"						\
+		".ifeq 4-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 4\n\t"			\
+		".endif\n\t"						\
+		".ifeq 5-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 5\n\t"			\
+		".endif\n\t"						\
+		".ifeq 6-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 6\n\t"			\
+		".endif\n\t"						\
+		".ifeq 7-" #sel "\n\t"					\
+		"dmtgc0\t%z0, " #source ", 7\n\t"			\
+		".endif\n\t"						\
+		".set\tmips0"						\
+		: : "Jr" (value));					\
+} while (0)
+
 #define read_c0_index()		__read_32bit_c0_register($0, 0)
 #define write_c0_index(val)	__write_32bit_c0_register($0, 0, val)
 
@@ -889,6 +1034,12 @@ do {									\
 #define read_c0_badvaddr()	__read_ulong_c0_register($8, 0)
 #define write_c0_badvaddr(val)	__write_ulong_c0_register($8, 0, val)
 
+#define read_c0_badinstr()	__read_32bit_c0_register($8, 1)
+#define write_c0_badinstr(val)	__write_32bit_c0_register($8, 1, val)
+
+#define read_c0_badinstrp()	__read_32bit_c0_register($8, 2)
+#define write_c0_badinstrp(val)	__write_32bit_c0_register($8, 2, val)
+
 #define read_c0_count()		__read_32bit_c0_register($9, 0)
 #define write_c0_count(val)	__write_32bit_c0_register($9, 0, val)
 
@@ -1162,6 +1313,93 @@ do {									\
 #define read_c0_brcm_sleepcount()	__read_32bit_c0_register($22, 7)
 #define write_c0_brcm_sleepcount(val)	__write_32bit_c0_register($22, 7, val)
 
+/* MIPS VZ */
+#define read_c0_guestctl0()		__read_32bit_c0_register($12, 6)
+#define write_c0_guestctl0(val)		__write_32bit_c0_register($12, 6, val)
+
+#define read_c0_guestctl1()		__read_32bit_c0_register($10, 4)
+#define write_c0_guestctl1(val)		__write_32bit_c0_register($10, 4, val)
+
+#define read_c0_guestctl2()		__read_32bit_c0_register($10, 5)
+#define write_c0_guestctl2(val)		__write_32bit_c0_register($10, 5, val)
+
+#define read_c0_guestctl3()		__read_32bit_c0_register($10, 6)
+#define write_c0_guestctl3(val)		__write_32bit_c0_register($10, 6, val)
+
+#define read_c0_gtoffset()		__read_32bit_c0_register($12, 7)
+#define write_c0_gtoffset(val)		__write_32bit_c0_register($12, 7, val)
+
+#define read_gc0_index()		__read_32bit_gc0_register($0, 0)
+#define write_gc0_index(val)		__write_32bit_gc0_register($0, 0, val)
+
+#define read_gc0_entrylo0()		__read_64bit_gc0_register($2, 0)
+#define write_gc0_entrylo0(val)		__write_64bit_gc0_register($2, 0, val)
+
+#define read_gc0_entrylo1()		__read_64bit_gc0_register($3, 0)
+#define write_gc0_entrylo1(val)		__write_64bit_gc0_register($3, 0, val)
+
+#define read_gc0_context()		__read_64bit_gc0_register($4, 0)
+#define write_gc0_context(val)		__write_64bit_gc0_register($4, 0, val)
+
+#define read_gc0_userlocal()		__read_64bit_gc0_register($4, 2)
+#define write_gc0_userlocal(val)	__write_64bit_gc0_register($4, 2, val)
+
+#define read_gc0_pagemask()		__read_32bit_gc0_register($5, 0)
+#define write_gc0_pagemask(val)		__write_32bit_gc0_register($5, 0, val)
+
+#define read_gc0_pagegrain()		__read_32bit_gc0_register($5, 1)
+#define write_gc0_pagegrain(val)	__write_32bit_gc0_register($5, 1, val)
+
+#define read_gc0_wired()		__read_32bit_gc0_register($6, 0)
+#define write_gc0_wired(val)		__write_32bit_gc0_register($6, 0, val)
+
+#define read_gc0_hwrena()		__read_32bit_gc0_register($7, 0)
+#define write_gc0_hwrena(val)		__write_32bit_gc0_register($7, 0, val)
+
+#define read_gc0_badvaddr()		__read_64bit_gc0_register($8, 0)
+#define write_gc0_badvaddr(val)		__write_64bit_gc0_register($8, 0, val)
+
+#define read_gc0_count()		__read_32bit_gc0_register($9, 0)
+/* Not possible to write gc0_count. */
+
+#define read_gc0_entryhi()		__read_64bit_gc0_register($10, 0)
+#define write_gc0_entryhi(val)		__write_64bit_gc0_register($10, 0, val)
+
+#define read_gc0_compare()		__read_32bit_gc0_register($11, 0)
+#define write_gc0_compare(val)		__write_32bit_gc0_register($11, 0, val)
+
+#define read_gc0_status()		__read_32bit_gc0_register($12, 0)
+#define write_gc0_status(val)		__write_32bit_gc0_register($12, 0, val)
+
+#define read_gc0_cause()		__read_32bit_gc0_register($13, 0)
+#define write_gc0_cause(val)		__write_32bit_gc0_register($13, 0, val)
+
+#define read_gc0_ebase()		__read_64bit_gc0_register($15, 1)
+#define write_gc0_ebase(val)		__write_64bit_gc0_register($15, 1, val)
+
+#define read_gc0_config()		__read_32bit_gc0_register($16, 0)
+#define read_gc0_config1()		__read_32bit_gc0_register($16, 1)
+#define read_gc0_config2()		__read_32bit_gc0_register($16, 2)
+#define read_gc0_config3()		__read_32bit_gc0_register($16, 3)
+#define read_gc0_config4()		__read_32bit_gc0_register($16, 4)
+#define read_gc0_config5()		__read_32bit_gc0_register($16, 5)
+#define read_gc0_config6()		__read_32bit_gc0_register($16, 6)
+#define read_gc0_config7()		__read_32bit_gc0_register($16, 7)
+#define write_gc0_config(val)		__write_32bit_gc0_register($16, 0, val)
+#define write_gc0_config1(val)		__write_32bit_gc0_register($16, 1, val)
+#define write_gc0_config2(val)		__write_32bit_gc0_register($16, 2, val)
+#define write_gc0_config3(val)		__write_32bit_gc0_register($16, 3, val)
+#define write_gc0_config4(val)		__write_32bit_gc0_register($16, 4, val)
+#define write_gc0_config5(val)		__write_32bit_gc0_register($16, 5, val)
+#define write_gc0_config6(val)		__write_32bit_gc0_register($16, 6, val)
+#define write_gc0_config7(val)		__write_32bit_gc0_register($16, 7, val)
+
+#define read_gc0_xcontext()		__read_64bit_gc0_register($20, 0)
+#define write_gc0_xcontext(val)		__write_64bit_gc0_register($20, 0, val)
+
+#define read_gc0_kscratch(idx)		__read_64bit_gc0_register($31, (idx))
+#define write_gc0_kscratch(idx, val)	__write_64bit_gc0_register($31, (idx), val)
+
 /*
  * Macros to access the floating point coprocessor control registers
  */
@@ -1633,6 +1871,28 @@ static inline void tlb_write_random(void)
 		".set reorder");
 }
 
+static inline void guest_tlb_write_indexed(void)
+{
+	__asm__ __volatile__(
+		".set push\n\t"
+		".set mips64r2\n\t"
+		".set virt\n\t"
+		".set noreorder\n\t"
+		"tlbgwi\n\t"
+		".set pop");
+}
+
+static inline void guest_tlb_read(void)
+{
+	__asm__ __volatile__(
+		".set push\n\t"
+		".set mips64r2\n\t"
+		".set virt\n\t"
+		".set noreorder\n\t"
+		"tlbgr\n\t"
+		".set pop");
+}
+
 /*
  * Manipulate bits in a c0 register.
  */
-- 
1.7.11.7

* [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (12 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 22:10   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode David Daney
                   ` (20 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/thread_info.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/mips/include/asm/thread_info.h b/arch/mips/include/asm/thread_info.h
index 895320e..a7a894a 100644
--- a/arch/mips/include/asm/thread_info.h
+++ b/arch/mips/include/asm/thread_info.h
@@ -109,6 +109,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_RESTORE_SIGMASK	9	/* restore signal mask in do_signal() */
 #define TIF_USEDFPU		16	/* FPU was used by this task this quantum (SMP) */
 #define TIF_MEMDIE		18	/* is terminating due to OOM killer */
+#define TIF_GUESTMODE		19	/* If set, running in VZ Guest mode. */
 #define TIF_FIXADE		20	/* Fix address errors in software */
 #define TIF_LOGADE		21	/* Log address errors to syslog */
 #define TIF_32BIT_REGS		22	/* also implies 16/32 fprs */
@@ -124,6 +125,7 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SECCOMP		(1<<TIF_SECCOMP)
 #define _TIF_NOTIFY_RESUME	(1<<TIF_NOTIFY_RESUME)
 #define _TIF_USEDFPU		(1<<TIF_USEDFPU)
+#define _TIF_GUESTMODE		(1<<TIF_GUESTMODE)
 #define _TIF_FIXADE		(1<<TIF_FIXADE)
 #define _TIF_LOGADE		(1<<TIF_LOGADE)
 #define _TIF_32BIT_REGS		(1<<TIF_32BIT_REGS)
-- 
1.7.11.7

* [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (13 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-15 10:00   ` Ralf Baechle
  2013-10-11 12:51     ` James Hogan
  2013-06-07 23:03 ` [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions David Daney
                   ` (19 subsequent siblings)
  34 siblings, 2 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Currently this is a little complex; here are the facts about how it works:

o When running in Guest mode we set the high bit of CP0_XCONTEXT.  If
  this bit is clear, we don't do anything special on an exception.

o If we are in guest mode, upon an exception we:

  1) load the stack pointer from the mips_kvm_rootsp array instead of
     kernelsp.

  2) Clear GuestCtl[GM] and high bit of CP0_XCONTEXT.

  3) Restore host ASID and PGD pointer.

o Upon restarting from an exception, we test the task's TIF_GUESTMODE
  flag; if it is clear, nothing special is done.

o If Guest mode is active for the thread we:

  1) Compare the stack pointer to mips_kvm_rootsp; if it doesn't
     match, we are not reentering guest mode, so no more special
     processing is done.

  2) If reentering guest mode:

  2a) Set high bit of CP0_XCONTEXT and GuestCtl[GM].

  2b) Set Guest mode ASID and PGD pointer.

This allows a single set of exception handlers to be used for both
host and guest mode operation.
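
As an editorial aid, the following C-like sketch (not part of the
patch; the helper names are purely illustrative) summarizes the
entry-side decision implemented in assembly by SAVE_SOME:

	if (cp0_xcontext_high_bit_set()) {	/* a vCPU was running */
		sp = mips_kvm_rootsp[cpu];	/* root stack, not kernelsp */
		clear_guestctl0_gm();		/* leave guest mode */
		clear_cp0_xcontext_high_bit();
		restore_host_asid_and_pgd();
	}
	/* otherwise: the usual kernelsp handling, nothing special */

The reentry test in RESTORE_ALL mirrors this: TIF_GUESTMODE set plus a
stack pointer equal to the saved mips_kvm_rootsp value means we are
returning to the guest, so GuestCtl0[GM], the XCONTEXT high bit and
the guest ASID/PGD are reinstated.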

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/stackframe.h | 135 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 3 deletions(-)

diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index 20627b2..bf2ec48 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -17,6 +17,7 @@
 #include <asm/asmmacro.h>
 #include <asm/mipsregs.h>
 #include <asm/asm-offsets.h>
+#include <asm/thread_info.h>
 
 /*
  * For SMTC kernel, global IE should be left set, and interrupts
@@ -98,7 +99,9 @@
 #define CPU_ID_REG CP0_CONTEXT
 #define CPU_ID_MFC0 MFC0
 #endif
-		.macro	get_saved_sp	/* SMP variation */
+#define CPU_ID_MASK ((1 << 13) - 1)
+
+		.macro	get_saved_sp_for_save_some	/* SMP variation */
 		CPU_ID_MFC0	k0, CPU_ID_REG
 #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
 		lui	k1, %hi(kernelsp)
@@ -110,15 +113,49 @@
 		dsll	k1, 16
 #endif
 		LONG_SRL	k0, PTEBASE_SHIFT
+#ifdef CONFIG_KVM_MIPSVZ
+		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */
+#endif
 		LONG_ADDU	k1, k0
 		LONG_L	k1, %lo(kernelsp)(k1)
 		.endm
 
+		.macro get_saved_sp
+		CPU_ID_MFC0	k0, CPU_ID_REG
+		get_saved_sp_for_save_some
+		.endm
+
+		.macro	get_mips_kvm_rootsp	/* SMP variation */
+#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
+		lui	k1, %hi(mips_kvm_rootsp)
+#else
+		lui	k1, %highest(mips_kvm_rootsp)
+		daddiu	k1, %higher(mips_kvm_rootsp)
+		dsll	k1, 16
+		daddiu	k1, %hi(mips_kvm_rootsp)
+		dsll	k1, 16
+#endif
+		LONG_SRL	k0, PTEBASE_SHIFT
+		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */
+		LONG_ADDU	k1, k0
+		LONG_L	k1, %lo(mips_kvm_rootsp)(k1)
+		.endm
+
 		.macro	set_saved_sp stackp temp temp2
 		CPU_ID_MFC0	\temp, CPU_ID_REG
 		LONG_SRL	\temp, PTEBASE_SHIFT
+#ifdef CONFIG_KVM_MIPSVZ
+		andi	\temp, CPU_ID_MASK /* high bits indicate guest mode. */
+#endif
 		LONG_S	\stackp, kernelsp(\temp)
 		.endm
+
+		.macro	set_mips_kvm_rootsp stackp temp
+		CPU_ID_MFC0	\temp, CPU_ID_REG
+		LONG_SRL	\temp, PTEBASE_SHIFT
+		andi	\temp, CPU_ID_MASK /* high bits indicate guest mode. */
+		LONG_S	\stackp, mips_kvm_rootsp(\temp)
+		.endm
 #else
 		.macro	get_saved_sp	/* Uniprocessor variation */
 #ifdef CONFIG_CPU_JUMP_WORKAROUNDS
@@ -152,9 +189,27 @@
 		LONG_L	k1, %lo(kernelsp)(k1)
 		.endm
 
+		.macro	get_mips_kvm_rootsp	/* Uniprocessor variation */
+#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
+		lui	k1, %hi(mips_kvm_rootsp)
+#else
+		lui	k1, %highest(mips_kvm_rootsp)
+		daddiu	k1, %higher(mips_kvm_rootsp)
+		dsll	k1, k1, 16
+		daddiu	k1, %hi(mips_kvm_rootsp)
+		dsll	k1, k1, 16
+#endif
+		LONG_L	k1, %lo(mips_kvm_rootsp)(k1)
+		.endm
+
+
 		.macro	set_saved_sp stackp temp temp2
 		LONG_S	\stackp, kernelsp
 		.endm
+
+		.macro	set_mips_kvm_rootsp stackp temp
+		LONG_S	\stackp, mips_kvm_rootsp
+		.endm
 #endif
 
 		.macro	SAVE_SOME
@@ -164,11 +219,21 @@
 		mfc0	k0, CP0_STATUS
 		sll	k0, 3		/* extract cu0 bit */
 		.set	noreorder
+#ifdef CONFIG_KVM_MIPSVZ
+		bgez	k0, 7f
+		 CPU_ID_MFC0	k0, CPU_ID_REG
+		bgez	k0, 8f
+		 move	k1, sp
+		get_mips_kvm_rootsp
+		b	8f
+		 nop
+#else
 		bltz	k0, 8f
 		 move	k1, sp
+#endif
 		.set	reorder
 		/* Called from user mode, new stack. */
-		get_saved_sp
+7:		get_saved_sp_for_save_some
 #ifndef CONFIG_CPU_DADDI_WORKAROUNDS
 8:		move	k0, sp
 		PTR_SUBU sp, k1, PT_SIZE
@@ -227,6 +292,35 @@
 		LONG_S	$31, PT_R31(sp)
 		ori	$28, sp, _THREAD_MASK
 		xori	$28, _THREAD_MASK
+#ifdef CONFIG_KVM_MIPSVZ
+		CPU_ID_MFC0	k0, CPU_ID_REG
+		.set	noreorder
+		bgez	k0, 8f
+		/* Must clear GuestCtl0[GM] */
+		 dins	k0, zero, 63, 1
+		.set	reorder
+		dmtc0	k0, CPU_ID_REG
+		mfc0	k0, CP0_GUESTCTL0
+		ins	k0, zero, MIPS_GUESTCTL0B_GM, 1
+		mtc0	k0, CP0_GUESTCTL0
+		LONG_L	v0, TI_TASK($28)
+		lw	v1, THREAD_MM_ASID(v0)
+		dmtc0	v1, CP0_ENTRYHI
+		LONG_L	v1, TASK_MM(v0)
+		.set	noreorder
+		jal	tlbmiss_handler_setup_pgd_array
+		 LONG_L	a0, MM_PGD(v1)
+		.set	reorder
+		/*
+		 * With KVM_MIPSVZ, we must not clobber k0/k1;
+		 * they were saved before they were used.
+		 */
+8:
+		MFC0	k0, CP0_KSCRATCH1
+		MFC0	v1, CP0_KSCRATCH2
+		LONG_S	k0, PT_R26(sp)
+		LONG_S	v1, PT_R27(sp)
+#endif
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
 		.set	mips64
 		pref	0, 0($28)	/* Prefetch the current pointer */
@@ -439,10 +533,45 @@
 		.set	mips0
 #endif /* CONFIG_MIPS_MT_SMTC */
 		LONG_L	v1, PT_EPC(sp)
+		LONG_L	$25, PT_R25(sp)
 		MTC0	v1, CP0_EPC
+#ifdef CONFIG_KVM_MIPSVZ
+	/*
+	 * Only if TIF_GUESTMODE && sp is the saved KVM sp, return to
+	 * guest mode.
+	 */
+		LONG_L	v0, TI_FLAGS($28)
+		li	k1, _TIF_GUESTMODE
+		and	v0, v0, k1
+		beqz	v0, 8f
+		CPU_ID_MFC0	k0, CPU_ID_REG
+		get_mips_kvm_rootsp
+		PTR_SUBU k1, k1, PT_SIZE
+		bne	k1, sp, 8f
+	/* Set the high order bit of CPU_ID_REG to indicate guest mode. */
+		dli	v0, 1
+		dmfc0	v1, CPU_ID_REG
+		dins	v1, v0, 63, 1
+		dmtc0	v1, CPU_ID_REG
+		/* Must set GuestCtl0[GM] */
+		mfc0	v1, CP0_GUESTCTL0
+		ins	v1, v0, MIPS_GUESTCTL0B_GM, 1
+		mtc0	v1, CP0_GUESTCTL0
+
+		LONG_L	v0, TI_TASK($28)
+		lw	v1, THREAD_GUEST_ASID(v0)
+		dmtc0	v1, CP0_ENTRYHI
+		LONG_L	v1, THREAD_VCPU(v0)
+		LONG_L	v1, KVM_VCPU_KVM(v1)
+		LONG_L	v1, KVM_ARCH_IMPL(v1)
+		.set	noreorder
+		jal	tlbmiss_handler_setup_pgd_array
+		 LONG_L	a0, KVM_MIPS_VZ_PGD(v1)
+		.set	reorder
+8:
+#endif
 		LONG_L	$31, PT_R31(sp)
 		LONG_L	$28, PT_R28(sp)
-		LONG_L	$25, PT_R25(sp)
 #ifdef CONFIG_64BIT
 		LONG_L	$8, PT_R8(sp)
 		LONG_L	$9, PT_R9(sp)
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (14 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-15 10:23   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers David Daney
                   ` (18 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kernel/genex.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index 163e299..ce0be96 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -486,6 +486,9 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	BUILD_HANDLER mcheck mcheck cli verbose		/* #24 */
 	BUILD_HANDLER mt mt sti silent			/* #25 */
 	BUILD_HANDLER dsp dsp sti silent		/* #26 */
+#ifdef CONFIG_KVM_MIPSVZ
+	BUILD_HANDLER hypervisor hypervisor cli silent	/* #27 */
+#endif
 	BUILD_HANDLER reserved reserved sti verbose	/* others */
 
 	.align	5
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (15 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-14 11:12   ` Ralf Baechle
  2013-06-15 17:13   ` Paolo Bonzini
  2013-06-07 23:03 ` [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP David Daney
                   ` (17 subsequent siblings)
  34 siblings, 2 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The kernel's struct pt_regs has many fields conditional on various
Kconfig variables; we cannot be exporting this garbage to user-space.

Move the kernel's definition to asm/ptrace.h, and put a uapi-only
version in uapi/asm/ptrace.h, gated by #ifndef __KERNEL__.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/ptrace.h      | 32 ++++++++++++++++++++++++++++++++
 arch/mips/include/uapi/asm/ptrace.h | 17 ++---------------
 2 files changed, 34 insertions(+), 15 deletions(-)

diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
index a3186f2..5e6cd09 100644
--- a/arch/mips/include/asm/ptrace.h
+++ b/arch/mips/include/asm/ptrace.h
@@ -16,6 +16,38 @@
 #include <asm/isadep.h>
 #include <uapi/asm/ptrace.h>
 
+/*
+ * This struct defines the way the registers are stored on the stack during a
+ * system call/exception. As usual the registers k0/k1 aren't being saved.
+ */
+struct pt_regs {
+#ifdef CONFIG_32BIT
+	/* Pad bytes for argument save space on the stack. */
+	unsigned long pad0[6];
+#endif
+
+	/* Saved main processor registers. */
+	unsigned long regs[32];
+
+	/* Saved special registers. */
+	unsigned long cp0_status;
+	unsigned long hi;
+	unsigned long lo;
+#ifdef CONFIG_CPU_HAS_SMARTMIPS
+	unsigned long acx;
+#endif
+	unsigned long cp0_badvaddr;
+	unsigned long cp0_cause;
+	unsigned long cp0_epc;
+#ifdef CONFIG_MIPS_MT_SMTC
+	unsigned long cp0_tcstatus;
+#endif /* CONFIG_MIPS_MT_SMTC */
+#ifdef CONFIG_CPU_CAVIUM_OCTEON
+	unsigned long long mpl[3];	  /* MTM{0,1,2} */
+	unsigned long long mtp[3];	  /* MTP{0,1,2} */
+#endif
+} __aligned(8);
+
 struct task_struct;
 
 extern int ptrace_getregs(struct task_struct *child, __s64 __user *data);
diff --git a/arch/mips/include/uapi/asm/ptrace.h b/arch/mips/include/uapi/asm/ptrace.h
index 4d58d84..b26f7e3 100644
--- a/arch/mips/include/uapi/asm/ptrace.h
+++ b/arch/mips/include/uapi/asm/ptrace.h
@@ -22,16 +22,12 @@
 #define DSP_CONTROL	77
 #define ACX		78
 
+#ifndef __KERNEL__
 /*
  * This struct defines the way the registers are stored on the stack during a
  * system call/exception. As usual the registers k0/k1 aren't being saved.
  */
 struct pt_regs {
-#ifdef CONFIG_32BIT
-	/* Pad bytes for argument save space on the stack. */
-	unsigned long pad0[6];
-#endif
-
 	/* Saved main processor registers. */
 	unsigned long regs[32];
 
@@ -39,20 +35,11 @@ struct pt_regs {
 	unsigned long cp0_status;
 	unsigned long hi;
 	unsigned long lo;
-#ifdef CONFIG_CPU_HAS_SMARTMIPS
-	unsigned long acx;
-#endif
 	unsigned long cp0_badvaddr;
 	unsigned long cp0_cause;
 	unsigned long cp0_epc;
-#ifdef CONFIG_MIPS_MT_SMTC
-	unsigned long cp0_tcstatus;
-#endif /* CONFIG_MIPS_MT_SMTC */
-#ifdef CONFIG_CPU_CAVIUM_OCTEON
-	unsigned long long mpl[3];	  /* MTM{0,1,2} */
-	unsigned long long mtp[3];	  /* MTP{0,1,2} */
-#endif
 } __attribute__ ((aligned (8)));
+#endif /* __KERNEL__ */
 
 /* Arbitrarily choose the same ptrace numbers as used by the Sparc code. */
 #define PTRACE_GETREGS		12
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (16 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-15 10:25   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host David Daney
                   ` (16 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

These save the instruction word to be used by MIPSVZ code for
instruction emulation.
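
For illustration, later MIPSVZ code (patch 29 of this series, which
introduces mipsvz_get_store_params()) consumes these slots roughly
like this when emulating a faulting store:

	u32 insn = regs->cp0_badinstr;	/* faulting instruction word */
	struct mipsvz_szreg r = mipsvz_get_store_params(insn);
	/* ... emulate a store of r.size bytes from regs->regs[r.reg] ... */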

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/ptrace.h | 4 ++++
 arch/mips/kernel/asm-offsets.c | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
index 5e6cd09..d080716 100644
--- a/arch/mips/include/asm/ptrace.h
+++ b/arch/mips/include/asm/ptrace.h
@@ -46,6 +46,10 @@ struct pt_regs {
 	unsigned long long mpl[3];	  /* MTM{0,1,2} */
 	unsigned long long mtp[3];	  /* MTP{0,1,2} */
 #endif
+#ifdef CONFIG_KVM_MIPSVZ
+	unsigned int cp0_badinstr;	/* Only populated on do_page_fault_{0,1} */
+	unsigned int cp0_badinstrp;	/* Only populated on do_page_fault_{0,1} */
+#endif
 } __aligned(8);
 
 struct task_struct;
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 03bf363..c5cc28f 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -71,6 +71,10 @@ void output_ptreg_defines(void)
 	OFFSET(PT_MPL, pt_regs, mpl);
 	OFFSET(PT_MTP, pt_regs, mtp);
 #endif /* CONFIG_CPU_CAVIUM_OCTEON */
+#ifdef CONFIG_KVM_MIPSVZ
+	OFFSET(PT_BADINSTR, pt_regs, cp0_badinstr);
+	OFFSET(PT_BADINSTRP, pt_regs, cp0_badinstrp);
+#endif
 	DEFINE(PT_SIZE, sizeof(struct pt_regs));
 	BLANK();
 }
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (17 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16  8:49   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 20/31] mips/kvm: Hook into TLB fault handlers David Daney
                   ` (15 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_mips_vz.h | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
 create mode 100644 arch/mips/include/asm/kvm_mips_vz.h

diff --git a/arch/mips/include/asm/kvm_mips_vz.h b/arch/mips/include/asm/kvm_mips_vz.h
new file mode 100644
index 0000000..dfc6951
--- /dev/null
+++ b/arch/mips/include/asm/kvm_mips_vz.h
@@ -0,0 +1,29 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2013 Cavium, Inc.
+ */
+#ifndef _ASM_KVM_MIPS_VZ_H
+#define _ASM_KVM_MIPS_VZ_H
+
+struct kvm;
+
+struct kvm_mips_vz {
+	struct mutex guest_mm_lock;
+	pgd_t *pgd;			/* Translations for this host. */
+	spinlock_t irq_chip_lock;
+	struct page *irq_chip;
+	unsigned int asid[NR_CPUS];	/* Per CPU ASIDs for pgd. */
+};
+
+bool mipsvz_page_fault(struct pt_regs *regs, unsigned long write,
+		       unsigned long address);
+
+bool mipsvz_cp_unusable(struct pt_regs *regs);
+int mipsvz_arch_init(void *opaque);
+int mipsvz_arch_hardware_enable(void *garbage);
+int mipsvz_init_vm(struct kvm *kvm, unsigned long type);
+
+#endif /* _ASM_KVM_MIPS_VZ_H */
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 20/31] mips/kvm: Hook into TLB fault handlers.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (18 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-07 23:34   ` Sergei Shtylyov
  2013-06-16  8:51   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code David Daney
                   ` (14 subsequent siblings)
  34 siblings, 2 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

If the CPU is operating in guest mode when a TLB-related exception
occurs, give KVM a chance to do emulation.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/mm/fault.c       | 8 ++++++++
 arch/mips/mm/tlbex-fault.S | 6 ++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 0fead53..9391da49 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -26,6 +26,7 @@
 #include <asm/ptrace.h>
 #include <asm/highmem.h>		/* For VMALLOC_END */
 #include <linux/kdebug.h>
+#include <asm/kvm_mips_vz.h>
 
 /*
  * This routine handles page faults.  It determines the address,
@@ -50,6 +51,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, unsigned long writ
 	       field, regs->cp0_epc);
 #endif
 
+#ifdef CONFIG_KVM_MIPSVZ
+	if (test_tsk_thread_flag(current, TIF_GUESTMODE)) {
+		if (mipsvz_page_fault(regs, write, address))
+			return;
+	}
+#endif
+
 #ifdef CONFIG_KPROBES
 	/*
 	 * This is to notify the fault handler of the kprobes.	The
diff --git a/arch/mips/mm/tlbex-fault.S b/arch/mips/mm/tlbex-fault.S
index 318855e..df0f70b 100644
--- a/arch/mips/mm/tlbex-fault.S
+++ b/arch/mips/mm/tlbex-fault.S
@@ -14,6 +14,12 @@
 	NESTED(tlb_do_page_fault_\write, PT_SIZE, sp)
 	SAVE_ALL
 	MFC0	a2, CP0_BADVADDR
+#ifdef CONFIG_KVM_MIPSVZ
+	mfc0	v0, CP0_BADINSTR
+	mfc0	v1, CP0_BADINSTRP
+	sw	v0, PT_BADINSTR(sp)
+	sw	v1, PT_BADINSTRP(sp)
+#endif
 	KMODE
 	move	a0, sp
 	REG_S	a2, PT_BVADDR(sp)
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (19 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 20/31] mips/kvm: Hook into TLB fault handlers David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:22   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts David Daney
                   ` (13 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

We need to move it out of __init so we don't have section mismatch problems.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/uasm.h | 2 +-
 arch/mips/kernel/traps.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h
index 370d967..90b4f5e 100644
--- a/arch/mips/include/asm/uasm.h
+++ b/arch/mips/include/asm/uasm.h
@@ -11,7 +11,7 @@
 
 #include <linux/types.h>
 
-#ifdef CONFIG_EXPORT_UASM
+#if defined(CONFIG_EXPORT_UASM) || IS_ENABLED(CONFIG_KVM_MIPSVZ)
 #include <linux/export.h>
 #define __uasminit
 #define __uasminitdata
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index f008795..fca0a2f 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1457,7 +1457,7 @@ unsigned long ebase;
 unsigned long exception_handlers[32];
 unsigned long vi_handlers[64];
 
-void __init *set_except_vector(int n, void *addr)
+void __uasminit *set_except_vector(int n, void *addr)
 {
 	unsigned long handler = (unsigned long) addr;
 	unsigned long old_handler;
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (20 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:26   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler David Daney
                   ` (12 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The new first part, get_new_asid(), can now be used from MIPSVZ code.
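
For example, MIPSVZ code can now allocate an ASID that is not attached
to any mm (a minimal sketch, not necessarily the exact final usage;
the caller remains responsible for updating asid_cache(cpu), as
get_new_mmu_context() does):

	/* Allocate a fresh ASID for guest-mode mappings. */
	unsigned long asid = get_new_asid(cpu);
	asid_cache(cpu) = asid;
	current->thread.guest_asid = asid & ASID_MASK;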

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/mmu_context.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 8201160..5609a32 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -108,8 +108,8 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 
 #ifndef CONFIG_MIPS_MT_SMTC
 /* Normal, classic MIPS get_new_mmu_context */
-static inline void
-get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
+static inline unsigned long
+get_new_asid(unsigned long cpu)
 {
 	extern void kvm_local_flush_tlb_all(void);
 	unsigned long asid = asid_cache(cpu);
@@ -125,7 +125,13 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 		if (!asid)		/* fix version if needed */
 			asid = ASID_FIRST_VERSION;
 	}
+	return asid;
+}
 
+static inline void
+get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
+{
+	unsigned long asid = get_new_asid(cpu);
 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
 }
 
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (21 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:28   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts David Daney
                   ` (11 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The MIPS VZ KVM code needs this to be able to manage the FPU.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kernel/traps.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index fca0a2f..2bdeb32 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -56,6 +56,7 @@
 #include <asm/types.h>
 #include <asm/stacktrace.h>
 #include <asm/uasm.h>
+#include <asm/kvm_mips_vz.h>
 
 extern void check_wait(void);
 extern asmlinkage void rollback_handle_int(void);
@@ -1045,6 +1046,13 @@ asmlinkage void do_cpu(struct pt_regs *regs)
 	int status;
 	unsigned long __maybe_unused flags;
 
+#ifdef CONFIG_KVM_MIPSVZ
+	if (test_tsk_thread_flag(current, TIF_GUESTMODE)) {
+		if (mipsvz_cp_unusable(regs))
+			return;
+	}
+#endif
+
 	die_if_kernel("do_cpu invoked from kernel context!", regs);
 
 	cpid = (regs->cp0_cause >> CAUSEB_CE) & 3;
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (22 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:29   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ David Daney
                   ` (10 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

... and their accessors in asm-offsets.c

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/processor.h | 6 ++++++
 arch/mips/kernel/asm-offsets.c    | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
index 1470b7b..e0aa198 100644
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -198,6 +198,7 @@ typedef struct {
 #define ARCH_MIN_TASKALIGN	8
 
 struct mips_abi;
+struct kvm_vcpu;
 
 /*
  * If you change thread_struct remember to change the #defines below too!
@@ -230,6 +231,11 @@ struct thread_struct {
 	unsigned long cp0_badvaddr;	/* Last user fault */
 	unsigned long cp0_baduaddr;	/* Last kernel fault accessing USEG */
 	unsigned long error_code;
+#ifdef CONFIG_KVM_MIPSVZ
+	struct kvm_vcpu *vcpu;
+	unsigned int mm_asid;
+	unsigned int guest_asid;
+#endif
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
     struct octeon_cop2_state cp2 __attribute__ ((__aligned__(128)));
     struct octeon_cvmseg_state cvmseg __attribute__ ((__aligned__(128)));
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index c5cc28f..37fd9e2 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -132,6 +132,11 @@ void output_thread_defines(void)
 	       thread.cp0_baduaddr);
 	OFFSET(THREAD_ECODE, task_struct, \
 	       thread.error_code);
+#ifdef CONFIG_KVM_MIPSVZ
+	OFFSET(THREAD_VCPU, task_struct, thread.vcpu);
+	OFFSET(THREAD_MM_ASID, task_struct, thread.mm_asid);
+	OFFSET(THREAD_GUEST_ASID, task_struct, thread.guest_asid);
+#endif
 	BLANK();
 }
 
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (23 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:31   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ David Daney
                   ` (9 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kernel/asm-offsets.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 37fd9e2..db09376 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -19,6 +19,7 @@
 
 #include <linux/kvm_host.h>
 #include <asm/kvm_mips_te.h>
+#include <asm/kvm_mips_vz.h>
 
 void output_ptreg_defines(void)
 {
@@ -345,6 +346,8 @@ void output_pbe_defines(void)
 void output_kvm_defines(void)
 {
 	COMMENT(" KVM/MIPS Specfic offsets. ");
+	OFFSET(KVM_ARCH_IMPL, kvm, arch.impl);
+	OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
 	DEFINE(VCPU_ARCH_SIZE, sizeof(struct kvm_vcpu_arch));
 	OFFSET(VCPU_RUN, kvm_vcpu, run);
 	OFFSET(VCPU_HOST_ARCH, kvm_vcpu, arch);
@@ -411,5 +414,9 @@ void output_kvm_defines(void)
 	OFFSET(COP0_TLB_HI, mips_coproc, reg[MIPS_CP0_TLB_HI][0]);
 	OFFSET(COP0_STATUS, mips_coproc, reg[MIPS_CP0_STATUS][0]);
 	BLANK();
+
+	COMMENT(" Linux struct kvm mipsvz offsets. ");
+	OFFSET(KVM_MIPS_VZ_PGD,	kvm_mips_vz, pgd);
+	BLANK();
 }
 #endif
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (24 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:33   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE David Daney
                   ` (8 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Create the symbol KVM_MIPSTE, and use it to select the parts specific
to trap-and-emulate.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/Kconfig  | 14 +++++++++-----
 arch/mips/kvm/Makefile | 14 ++++++++------
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig
index 2c15590..95c0d22 100644
--- a/arch/mips/kvm/Kconfig
+++ b/arch/mips/kvm/Kconfig
@@ -16,18 +16,22 @@ menuconfig VIRTUALIZATION
 if VIRTUALIZATION
 
 config KVM
-	tristate "Kernel-based Virtual Machine (KVM) support"
-	depends on HAVE_KVM
+	tristate
 	select PREEMPT_NOTIFIERS
+
+config KVM_MIPSTE
+	tristate "Kernel-based Virtual Machine (KVM) 32-bit trap-and-emulate"
+	depends on HAVE_KVM
+	select KVM
 	select ANON_INODES
 	select KVM_MMIO
 	---help---
-	  Support for hosting Guest kernels.
+	  Support for hosting Guest kernels with modified address space layout.
 	  Currently supported on MIPS32 processors.
 
 config KVM_MIPS_DYN_TRANS
 	bool "KVM/MIPS: Dynamic binary translation to reduce traps"
-	depends on KVM
+	depends on KVM_MIPSTE
 	---help---
 	  When running in Trap & Emulate mode patch privileged
 	  instructions to reduce the number of traps.
@@ -36,7 +40,7 @@ config KVM_MIPS_DYN_TRANS
 
 config KVM_MIPS_DEBUG_COP0_COUNTERS
 	bool "Maintain counters for COP0 accesses"
-	depends on KVM
+	depends on KVM_MIPSTE
 	---help---
 	  Maintain statistics for Guest COP0 accesses.
 	  A histogram of COP0 accesses is printed when the VM is
diff --git a/arch/mips/kvm/Makefile b/arch/mips/kvm/Makefile
index 78d87bb..3377197 100644
--- a/arch/mips/kvm/Makefile
+++ b/arch/mips/kvm/Makefile
@@ -1,13 +1,15 @@
 # Makefile for KVM support for MIPS
 #
 
-common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
+common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o)
 
 EXTRA_CFLAGS += -Ivirt/kvm -Iarch/mips/kvm
 
-kvm-objs := $(common-objs) kvm_mips.o kvm_mips_emul.o kvm_locore.o \
-	    kvm_mips_int.o kvm_mips_stats.o kvm_mips_commpage.o \
-	    kvm_mips_dyntrans.o kvm_trap_emul.o
+kvm_mipste-objs		:= kvm_mips_emul.o kvm_locore.o kvm_mips_int.o \
+			   kvm_mips_stats.o kvm_mips_commpage.o \
+			   kvm_mips_dyntrans.o kvm_trap_emul.o kvm_cb.o \
+			   kvm_tlb.o \
+			   $(addprefix ../../../virt/kvm/, coalesced_mmio.o)
 
-obj-$(CONFIG_KVM)	+= kvm.o
-obj-y			+= kvm_cb.o kvm_tlb.o
+obj-$(CONFIG_KVM)		+= $(common-objs) kvm_mips.o
+obj-$(CONFIG_KVM_MIPSTE)	+= kvm_mipste.o
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (25 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:42   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE David Daney
                   ` (7 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Only the trap-and-emulate KVM code needs a special TLB flush function.  All
other configurations should use the regular version.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/mmu_context.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 5609a32..04d0b74 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -117,7 +117,7 @@ get_new_asid(unsigned long cpu)
 	if (! ((asid += ASID_INC) & ASID_MASK) ) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
-#ifdef CONFIG_VIRTUALIZATION
+#if IS_ENABLED(CONFIG_KVM_MIPSTE)
 		kvm_local_flush_tlb_all();      /* start new asid cycle */
 #else
 		local_flush_tlb_all();	/* start new asid cycle */
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (26 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 12:03   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 29/31] mips/kvm: Add MIPSVZ support David Daney
                   ` (6 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The forthcoming MIPSVZ code doesn't currently use this, so it must
only be enabled for KVM_MIPSTE.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_host.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 505b804..9f209e1 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -25,7 +25,9 @@
 /* memory slots that does not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS 	0
 
+#ifdef CONFIG_KVM_MIPSTE
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+#endif
 
 /* Don't support huge pages */
 #define KVM_HPAGE_GFN_SHIFT(x)	0
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 29/31] mips/kvm: Add MIPSVZ support.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (27 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:47   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile David Daney
                   ` (5 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

This doesn't share much with the trap-and-emulate implementation.

Included is a virtual interrupt controller and I/O emulation.
Currently these have hard coded addresses at:

  0x1e000000 - 64K of I/O space.
  0x1e010000 - virtual interrupt controller.

Future work will allow these windows to be positioned via ioctl.

The MIPSVZ specification has some optional and implementation-defined
parts.  This implementation uses the OCTEON III behavior.  For use
with other implementations, we would have to split out the
implementation-specific parts.
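
A sketch of the resulting guest physical layout (the addresses are the
hard coded ones above; the macro names are illustrative only):

	#define MIPSVZ_IO_BASE		0x1e000000	/* 64K of I/O space */
	#define MIPSVZ_IO_SIZE		0x10000
	#define MIPSVZ_IRQCHIP_BASE	0x1e010000	/* virtual interrupt controller */

Guest accesses to these windows are not backed by RAM; they fault into
the host, which emulates the access.  As the kvm_arch_init_vm() change
below shows, userspace selects this implementation by passing type 1
to KVM_CREATE_VM (sketch, error handling omitted):

	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 1);	/* 1 = MIPSVZ, 0 = trap-and-emulate */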

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/kvm/kvm_mips.c         |   31 +-
 arch/mips/kvm/kvm_mipsvz.c       | 1894 ++++++++++++++++++++++++++++++++++++++
 arch/mips/kvm/kvm_mipsvz_guest.S |  234 +++++
 3 files changed, 2156 insertions(+), 3 deletions(-)
 create mode 100644 arch/mips/kvm/kvm_mipsvz.c
 create mode 100644 arch/mips/kvm/kvm_mipsvz_guest.S

diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index 18c8dc8..e908b3b 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -21,6 +21,7 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/kvm_mips_te.h>
+#include <asm/kvm_mips_vz.h>
 
 #define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
 struct kvm_stats_debugfs_item debugfs_entries[] = {
@@ -62,7 +63,11 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 
 int kvm_arch_hardware_enable(void *garbage)
 {
+#if IS_ENABLED(CONFIG_KVM_MIPSVZ)
+	return mipsvz_arch_hardware_enable(garbage);
+#else
 	return 0;
+#endif
 }
 
 void kvm_arch_hardware_disable(void *garbage)
@@ -89,8 +94,15 @@ int kvm_mips_te_init_vm(struct kvm *kvm, unsigned long type);
 
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
+#if IS_ENABLED(CONFIG_KVM_MIPSTE)
 	if (type == 0)
 		return kvm_mips_te_init_vm(kvm, type);
+#endif
+#if IS_ENABLED(CONFIG_KVM_MIPSVZ)
+	if (type == 1)
+		return mipsvz_init_vm(kvm, type);
+#endif
+
 	return -EINVAL;
 }
 
@@ -411,27 +423,38 @@ out:
 
 long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
+	struct kvm *kvm = filp->private_data;
 	long r;
 
 	switch (ioctl) {
 	default:
-		r = -ENOIOCTLCMD;
+		r = kvm->arch.ops->vm_ioctl(kvm, ioctl, arg);
 	}
 
 	return r;
 }
 
-int kvm_mips_te_arch_init(void *opaque);
 void kvm_mips_te_arch_exit(void);
 
 int kvm_arch_init(void *opaque)
 {
-	return kvm_mips_te_arch_init(opaque);
+	int r = 0;
+#if IS_ENABLED(CONFIG_KVM_MIPSTE)
+	r = kvm_mips_te_arch_init(opaque);
+	if (r)
+		return r;
+#endif
+#if IS_ENABLED(CONFIG_KVM_MIPSVZ)
+	r = mipsvz_arch_init(opaque);
+#endif
+	return r;
 }
 
 void kvm_arch_exit(void)
 {
+#if IS_ENABLED(CONFIG_KVM_MIPSTE)
 	kvm_mips_te_arch_exit();
+#endif
 }
 
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -508,9 +531,11 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_ONE_REG:
 		r = 1;
 		break;
+#ifndef CONFIG_KVM_MIPSVZ
 	case KVM_CAP_COALESCED_MMIO:
 		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
 		break;
+#endif
 	default:
 		r = 0;
 		break;
diff --git a/arch/mips/kvm/kvm_mipsvz.c b/arch/mips/kvm/kvm_mipsvz.c
new file mode 100644
index 0000000..2f408e0
--- /dev/null
+++ b/arch/mips/kvm/kvm_mipsvz.c
@@ -0,0 +1,1894 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012-2013 Cavium, Inc.
+ */
+/* #define DEBUG 1 */
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <linux/kvm.h>
+#include <linux/perf_event.h>
+
+#include <asm/mipsregs.h>
+#include <asm/setup.h>
+#include <asm/mmu_context.h>
+#include <asm/kvm_mips_vz.h>
+#include <asm/pgalloc.h>
+#include <asm/branch.h>
+#include <asm/inst.h>
+#include <asm/time.h>
+#include <asm/fpu.h>
+
+asmlinkage void handle_hypervisor(void);
+void mipsvz_start_guest(struct kvm_vcpu *vcpu);
+void mipsvz_exit_guest(void) __noreturn;
+
+void mipsvz_install_fpu(struct kvm_vcpu *vcpu);
+void mipsvz_readout_fpu(struct kvm_vcpu *vcpu);
+
+unsigned long mips_kvm_rootsp[NR_CPUS];
+static u32 mipsvz_cp0_count_offset[NR_CPUS];
+
+static unsigned long mipsvz_entryhi_mask;
+
+struct vcpu_mips {
+	void *foo;
+};
+
+struct mipsvz_kvm_tlb_entry {
+	u64 entryhi;
+	u64 entrylo0;
+	u64 entrylo1;
+	u32 pagemask;
+};
+
+struct kvm_mips_vcpu_vz {
+	struct kvm_vcpu *vcpu;
+	u64 c0_entrylo0;
+	u64 c0_entrylo1;
+	u64 c0_context;
+	u64 c0_userlocal;
+	u64 c0_badvaddr;
+	u64 c0_entryhi;
+	u64 c0_ebase;
+	u64 c0_xcontext;
+	u64 c0_kscratch[6];
+	u32 c0_pagemask;
+	u32 c0_pagegrain;
+	u32 c0_wired;
+	u32 c0_hwrena;
+	u32 c0_compare;
+	u32 c0_status;
+	u32 c0_cause;
+	u32 c0_index;
+
+	u32 c0_count; /* Not settable, value at last exit. */
+	u32 c0_count_offset;
+
+	int tlb_size;
+	struct mipsvz_kvm_tlb_entry *tlb_state;
+
+	u32 last_exit_insn;
+	/* Saved mips_kvm_rootsp[] value when we are off the CPU. */
+	unsigned long rootsp;
+
+	/* Protected by kvm_arch.irq_chip_lock, the value of Guestctl2[VIP] */
+	u8 injected_ipx;
+
+	struct hrtimer compare_timer;
+	ktime_t compare_timer_read;
+
+	bool have_counter_state;
+};
+
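+/*
+ * Count wraps modulo 2^32: "expired" means Compare fell within the
+ * interval [old_count, new_count], taking the wrap into account.
+ */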
+static bool mipsvz_count_expired(u32 old_count, u32 new_count, u32 compare)
+{
+	if (new_count > old_count)
+		return compare >= old_count && compare <= new_count;
+	else
+		return compare >= old_count || compare <= new_count;
+}
+
+static void mipsvz_install_guest_cp0(struct kvm_mips_vcpu_vz *vcpu_vz)
+{
+	u32 gconfig4 = read_gc0_config4();
+	u32 count;
+
+	write_gc0_index(vcpu_vz->c0_index);
+	write_gc0_entrylo0(vcpu_vz->c0_entrylo0);
+	write_gc0_entrylo1(vcpu_vz->c0_entrylo1);
+	write_gc0_context(vcpu_vz->c0_context);
+	write_gc0_userlocal(vcpu_vz->c0_userlocal);
+	write_gc0_pagemask(vcpu_vz->c0_pagemask);
+	write_gc0_pagegrain(vcpu_vz->c0_pagegrain);
+	write_gc0_wired(vcpu_vz->c0_wired);
+	write_gc0_hwrena(vcpu_vz->c0_hwrena);
+	write_gc0_badvaddr(vcpu_vz->c0_badvaddr);
+	write_gc0_entryhi(vcpu_vz->c0_entryhi);
+	write_gc0_compare(vcpu_vz->c0_compare);
+	write_gc0_cause(vcpu_vz->c0_cause);
+	write_gc0_status(vcpu_vz->c0_status);
+	write_gc0_ebase(vcpu_vz->c0_ebase);
+	write_gc0_xcontext(vcpu_vz->c0_xcontext);
+
+	count = read_gc0_count();
+
+	if (mipsvz_count_expired(vcpu_vz->c0_count, count, vcpu_vz->c0_compare) &&
+	    (vcpu_vz->c0_cause & CAUSEF_TI) == 0) {
+		vcpu_vz->c0_cause |= CAUSEF_TI;
+		write_gc0_cause(vcpu_vz->c0_cause);
+	}
+	vcpu_vz->have_counter_state = false;
+
+#define MIPSVZ_GUEST_INSTALL_SCRATCH(_i)				\
+	if (gconfig4 & (1 << (18 + (_i))))				\
+		write_gc0_kscratch(2 + (_i), vcpu_vz->c0_kscratch[_i])
+
+	MIPSVZ_GUEST_INSTALL_SCRATCH(0);
+	MIPSVZ_GUEST_INSTALL_SCRATCH(1);
+	MIPSVZ_GUEST_INSTALL_SCRATCH(2);
+	MIPSVZ_GUEST_INSTALL_SCRATCH(3);
+	MIPSVZ_GUEST_INSTALL_SCRATCH(4);
+	MIPSVZ_GUEST_INSTALL_SCRATCH(5);
+}
+
+static void mipsvz_readout_cp0_counter_state(struct kvm_mips_vcpu_vz *vcpu_vz)
+{
+	/* Must read count before cause so we can emulate TI getting set. */
+	vcpu_vz->compare_timer_read = ktime_get();
+	vcpu_vz->c0_count = read_gc0_count();
+	vcpu_vz->c0_cause = read_gc0_cause();
+	vcpu_vz->c0_compare = read_gc0_compare();
+	vcpu_vz->have_counter_state = true;
+}
+
+static void mipsvz_readout_guest_cp0(struct kvm_mips_vcpu_vz *vcpu_vz)
+{
+	u32 gconfig4 = read_gc0_config4();
+
+	vcpu_vz->c0_index = read_gc0_index();
+	vcpu_vz->c0_entrylo0 = read_gc0_entrylo0();
+	vcpu_vz->c0_entrylo1 = read_gc0_entrylo1();
+	vcpu_vz->c0_context = read_gc0_context();
+	vcpu_vz->c0_userlocal = read_gc0_userlocal();
+	vcpu_vz->c0_pagemask = read_gc0_pagemask();
+	vcpu_vz->c0_pagegrain = read_gc0_pagegrain();
+	vcpu_vz->c0_wired = read_gc0_wired();
+	vcpu_vz->c0_hwrena = read_gc0_hwrena();
+	vcpu_vz->c0_badvaddr = read_gc0_badvaddr();
+	vcpu_vz->c0_entryhi = read_gc0_entryhi();
+	vcpu_vz->c0_compare = read_gc0_compare();
+	vcpu_vz->c0_status = read_gc0_status();
+
+	/* Must read count before cause so we can emulate TI getting set. */
+	vcpu_vz->c0_count = read_gc0_count();
+
+	vcpu_vz->c0_cause = read_gc0_cause();
+	vcpu_vz->c0_ebase = read_gc0_ebase();
+	vcpu_vz->c0_xcontext = read_gc0_xcontext();
+	if (!vcpu_vz->have_counter_state)
+		mipsvz_readout_cp0_counter_state(vcpu_vz);
+
+
+#define MIPSVZ_GUEST_READOUT_SCRATCH(_i)				\
+	if (gconfig4 & (1 << (18 + (_i))))				\
+		vcpu_vz->c0_kscratch[_i] = read_gc0_kscratch(2 + (_i))
+
+	MIPSVZ_GUEST_READOUT_SCRATCH(0);
+	MIPSVZ_GUEST_READOUT_SCRATCH(1);
+	MIPSVZ_GUEST_READOUT_SCRATCH(2);
+	MIPSVZ_GUEST_READOUT_SCRATCH(3);
+	MIPSVZ_GUEST_READOUT_SCRATCH(4);
+	MIPSVZ_GUEST_READOUT_SCRATCH(5);
+}
+
+static void mipsvz_exit_vm(struct pt_regs *regs, u32 exit_reason)
+{
+	int i;
+	struct kvm_vcpu *vcpu = current->thread.vcpu;
+	struct kvm_run *kvm_run = vcpu->run;
+
+	for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		vcpu->arch.gprs[i] = regs->regs[i];
+	vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
+	vcpu->arch.hi = regs->hi;
+	vcpu->arch.lo = regs->lo;
+	vcpu->arch.epc = regs->cp0_epc;
+
+	kvm_run->exit_reason = exit_reason;
+
+	local_irq_disable();
+
+	clear_tsk_thread_flag(current, TIF_GUESTMODE);
+	mipsvz_exit_guest();
+	/* Never returns here */
+}
+
+static unsigned int mipsvz_get_fcr31(void)
+{
+	kvm_err("Help!  missing mipsvz_get_fcr31\n");
+	return 0;
+}
+
+static unsigned long mipsvz_compute_return_epc(struct pt_regs *regs)
+{
+	if (delay_slot(regs)) {
+		union mips_instruction insn;
+		insn.word = regs->cp0_badinstrp;
+		return __compute_return_epc_for_insn0(regs, insn, mipsvz_get_fcr31);
+	} else {
+		regs->cp0_epc += 4;
+		return 0;
+	}
+}
+
+struct mipsvz_szreg {
+	u8 size;
+	s8 reg; /* negative value indicates error */
+	bool sign_extend;
+};
+
+static struct mipsvz_szreg mipsvz_get_load_params(u32 insn)
+{
+	struct mipsvz_szreg r;
+	r.size = 0;
+	r.reg = -1;
+	r.sign_extend = false;
+
+	if ((insn & 0x80000000) == 0)
+		goto out;
+
+	switch ((insn >> 26) & 0x1f) {
+	case 0x00: /* LB */
+		r.size = 1;
+		r.sign_extend = true;
+		break;
+	case 0x04: /* LBU */
+		r.size = 1;
+		break;
+	case 0x01: /* LH */
+		r.size = 2;
+		r.sign_extend = true;
+		break;
+	case 0x05: /* LHU */
+		r.size = 2;
+		break;
+	case 0x03: /* LW */
+		r.size = 4;
+		r.sign_extend = true;
+		break;
+	case 0x07: /* LWU */
+		r.size = 4;
+		break;
+	case 0x17: /* LD */
+		r.size = 8;
+		break;
+	default:
+		goto out;
+	}
+	r.reg = (insn >> 16) & 0x1f;
+
+out:
+	return r;
+}
+
+static struct mipsvz_szreg mipsvz_get_store_params(u32 insn)
+{
+	struct mipsvz_szreg r;
+	r.size = 0;
+	r.reg = -1;
+	r.sign_extend = false;
+
+	if ((insn & 0x80000000) == 0)
+		goto out;
+
+	switch ((insn >> 26) & 0x1f) {
+	case 0x08: /* SB */
+		r.size = 1;
+		break;
+	case 0x09: /* SH */
+		r.size = 2;
+		break;
+	case 0x0b: /* SW */
+		r.size = 4;
+		break;
+	case 0x1f: /* SD */
+		r.size = 8;
+		break;
+	default:
+		goto out;
+	}
+	r.reg = (insn >> 16) & 0x1f;
+
+out:
+	return r;
+}
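+
+/*
+ * Worked example: for "sw $5,8($4)" (0xac850008), bit 31 is set and
+ * the low opcode bits are 0x0b, so mipsvz_get_store_params() returns
+ * size = 4 and reg = 5 (the rt field, bits 20..16).
+ */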
+
+static int mipsvz_handle_io_in(struct kvm_vcpu *vcpu)
+{
+	unsigned long val = 0;
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	void *dest = (char *)vcpu->run + sizeof(struct kvm_run);
+	struct mipsvz_szreg r = mipsvz_get_load_params(vcpu_vz->last_exit_insn);
+
+	if (r.reg < 0)
+		return -EINVAL;
+	if (r.sign_extend)
+		switch (r.size) {
+		case 1:
+			val = *(s8 *)dest;
+			break;
+		case 2:
+			val = *(s16 *)dest;
+			break;
+		case 4:
+			val = *(s32 *)dest;
+			break;
+		case 8:
+			val = *(u64 *)dest;
+			break;
+		}
+	else
+		switch (r.size) {
+		case 1:
+			val = *(u8 *)dest;
+			break;
+		case 2:
+			val = *(u16 *)dest;
+			break;
+		case 4:
+			val = *(u32 *)dest;
+			break;
+		case 8:
+			val = *(u64 *)dest;
+			break;
+		}
+
+	vcpu->arch.gprs[r.reg] = val;
+	kvm_debug("   ... %016lx  size %d\n", val, r.size);
+	return 0;
+}
+
+int mipsvz_arch_init(void *opaque)
+{
+	unsigned long saved_entryhi;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	saved_entryhi = read_c0_entryhi();
+
+	write_c0_entryhi(~0x1ffful);
+	mipsvz_entryhi_mask = read_c0_entryhi();
+
+	write_c0_entryhi(saved_entryhi);
+	local_irq_restore(flags);
+
+	return 0;
+}
+
+int mipsvz_arch_hardware_enable(void *garbage)
+{
+	unsigned long flags;
+	int cpu = raw_smp_processor_id();
+	u32 count;
+	u64 cvm_count;
+
+	local_irq_save(flags);
+	count = read_c0_count();
+	cvm_count = read_c0_cvmcount();
+	local_irq_restore(flags);
+
+	mipsvz_cp0_count_offset[cpu] = ((u32)cvm_count) - count;
+
+	return 0;
+}
+
+#ifndef __PAGETABLE_PMD_FOLDED
+static void mipsvz_release_pud(pud_t pud)
+{
+	pmd_t *pmd = (pmd_t *)pud_val(pud);
+	int i;
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (pmd_present(pmd[i]))
+			pte_free_kernel(NULL, (pte_t *)pmd_val(pmd[i]));
+	}
+	pmd_free(NULL, pmd);
+}
+#endif
+
+static void mipsvz_destroy_vm(struct kvm *kvm)
+{
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	pgd_t *pgd;
+	pud_t *pud;
+	int i;
+
+	pgd = kvm_mips_vz->pgd;
+	pud = pud_offset(pgd, 0);
+#ifndef __PAGETABLE_PMD_FOLDED
+	for (i = 0; i < PTRS_PER_PGD; i++) {
+		if (pud_present(pud[i]))
+			mipsvz_release_pud((pud[i]));
+	}
+#else
+	{
+		pmd_t *pmd = pmd_offset(pud, 0);
+		for (i = 0; i < PTRS_PER_PGD; i++) {
+			if (pmd_present(pmd[i]))
+				pte_free_kernel(NULL, (pte_t *)pmd_val(pmd[i]));
+		}
+	}
+#endif
+
+	free_pages((unsigned long)kvm_mips_vz->pgd, PGD_ORDER);
+	if (kvm_mips_vz->irq_chip)
+		__free_page(kvm_mips_vz->irq_chip);
+}
+
+/* Must be called with guest_mm_lock held. */
+static pte_t *mipsvz_pte_for_gpa(struct kvm *kvm, unsigned long addr)
+{
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = kvm_mips_vz->pgd + pgd_index(addr);
+	if (pgd_none(*pgd)) {
+		set_pgd(pgd, __pgd(0));
+		BUG();  /* Not used on MIPS. */
+	}
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud)) {
+		pmd_t *new_pmd = pmd_alloc_one(NULL, addr);
+		WARN(!new_pmd, "We're hosed, no memory");
+		pud_populate(NULL, pud, new_pmd);
+	}
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd)) {
+		pte_t *new_pte = pte_alloc_one_kernel(NULL, addr);
+		WARN(!new_pte, "We're hosed, no memory");
+		pmd_populate_kernel(NULL, pmd, new_pte);
+	}
+	return pte_offset(pmd, addr);
+}
+
+struct mipsvz_irqchip {
+	u32 num_irqs;
+	u32 num_cpus;
+};
+
+static int mipsvz_create_irqchip(struct kvm *kvm)
+{
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	int ret = 0;
+	pfn_t pfn;
+	pte_t *ptep, entry;
+	struct page *irq_chip;
+	struct mipsvz_irqchip *chip;
+
+	mutex_lock(&kvm->lock);
+
+	if (kvm_mips_vz->irq_chip) {
+		ret = -EEXIST;
+		goto out;
+	}
+
+	irq_chip = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!irq_chip) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	chip = page_address(irq_chip);
+	chip->num_irqs = 64;
+	chip->num_cpus = max(8, KVM_MAX_VCPUS);
+
+	ptep = mipsvz_pte_for_gpa(kvm, 0x1e010000);
+
+	pfn = page_to_pfn(irq_chip);
+	entry = pfn_pte(pfn, __pgprot(_PAGE_VALID));
+	set_pte(ptep, entry);
+
+	kvm_mips_vz->irq_chip = irq_chip;
+out:
+	mutex_unlock(&kvm->lock);
+	return ret;
+}
+
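+/*
+ * Editorial summary of the irqchip register layout implied by the
+ * code below (32-bit words; words_per_reg = num_irqs / 32):
+ *
+ *   word 0:        num_irqs
+ *   word 1:        num_cpus
+ *   from word 2:   RAW register, followed by its W1S and W1C views
+ *   then per CPU:  SRC (computed as EN & RAW), EN, EN_W1S, EN_W1C
+ */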
+static void mipsvz_write_irqchip_w1x(u32 *irqchip_regs, int words_per_reg,
+				     unsigned int base, unsigned int offset,
+				     u32 val, u32 mask)
+{
+	int type = (offset - base) / words_per_reg;
+	int idx = (offset - base) % words_per_reg;
+
+	if (type == 0)  /* Set */
+		irqchip_regs[base + idx] = (irqchip_regs[base + idx] & ~mask) | (val & mask);
+	else if (type == 1) /* W1S */
+		irqchip_regs[base + idx] |= (val & mask);
+	else if (type == 2) /* W1C */
+		irqchip_regs[base + idx] &= ~(val & mask);
+
+	/* Make the W1S and W1C regs have the same value as the base reg. */
+	irqchip_regs[base + idx + 1 * words_per_reg] = irqchip_regs[base + idx];
+	irqchip_regs[base + idx + 2 * words_per_reg] = irqchip_regs[base + idx];
+}
+
+static void mipsvz_write_irqchip_reg(u32 *irqchip_regs, unsigned int offset, u32 val, u32 mask)
+{
+	int numbits = irqchip_regs[0];
+	int words_per_reg = numbits / 32;
+	int reg, reg_offset;
+	int rw_reg_base = 2;
+
+	if (offset <= 1 || offset >= (irqchip_regs[1] * (words_per_reg + 1) * 4))
+		return; /* ignore the write */
+
+	reg = (offset - rw_reg_base) / words_per_reg;
+	reg_offset = (offset - rw_reg_base) % words_per_reg;
+
+	if (reg_offset == 0)
+		mask &= ~0x1ffu; /* bits 8..0 are ignored */
+
+	if (reg <= 2) { /* Raw */
+		mipsvz_write_irqchip_w1x(irqchip_regs, words_per_reg, rw_reg_base, offset, val, mask);
+	} else {
+		/* Per CPU enables */
+		int cpu_first_reg = rw_reg_base + 3 * words_per_reg;
+		int cpu = (reg - 3) / 4;
+		int cpu_reg = (reg - 3) % 4;
+
+		if (cpu_reg != 0)
+			mipsvz_write_irqchip_w1x(irqchip_regs, words_per_reg,
+						 cpu_first_reg + words_per_reg + cpu * 4 * words_per_reg,
+						 offset, val, mask);
+	}
+}
+
+/* Returns a bit mask of the vcpus whose injected IP bits changed. */
+static u32 mipsvz_write_irqchip_new_irqs(struct kvm *kvm, u32 *irqchip_regs)
+{
+	u32 r = 0;
+	int rw_reg_base = 2;
+	int numbits = irqchip_regs[0];
+	int numcpus = irqchip_regs[1];
+	int words_per_reg = numbits / 32;
+	int cpu;
+
+	for (cpu = 0; cpu < numcpus; cpu++) {
+		int cpu_base = rw_reg_base + (3 * words_per_reg) + (cpu * 4 * words_per_reg);
+		int word;
+		u32 combined = 0;
+		for (word = 0; word < words_per_reg; word++) {
+			/* SRC = EN & RAW */
+			irqchip_regs[cpu_base + word] = irqchip_regs[cpu_base + words_per_reg + word] & irqchip_regs[rw_reg_base + word];
+			combined |= irqchip_regs[cpu_base + word];
+		}
+
+		if (kvm->vcpus[cpu]) {
+			u8 injected_ipx;
+			struct kvm_mips_vcpu_vz *vcpu_vz = kvm->vcpus[cpu]->arch.impl;
+			u8 old_injected_ipx = vcpu_vz->injected_ipx;
+
+			if (combined)
+				injected_ipx = 4;
+			else
+				injected_ipx = 0;
+
+			if (injected_ipx != old_injected_ipx) {
+				r |= 1 << cpu;
+				vcpu_vz->injected_ipx = injected_ipx;
+			}
+		}
+	}
+	return r;
+}
+
+static void mipsvz_assert_irqs(struct kvm *kvm, u32 effected_cpus)
+{
+	int i, me;
+	struct kvm_vcpu *vcpu;
+
+	if (!effected_cpus)
+		return;
+
+	me = get_cpu();
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!((1 << vcpu->vcpu_id) & effected_cpus))
+			continue;
+
+		if (me == vcpu->cpu) {
+			u32 gc2 = read_c0_guestctl2();
+			struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+			gc2 = (gc2 & ~0xff00) | (((u32)vcpu_vz->injected_ipx) << 8);
+			write_c0_guestctl2(gc2);
+		} else {
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
+	}
+
+	put_cpu();
+}
+
+static bool mipsvz_write_irqchip(struct pt_regs *regs,
+				 unsigned long write,
+				 unsigned long address,
+				 struct kvm *kvm,
+				 struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	unsigned long flags;
+	struct mipsvz_szreg szreg;
+	u64 val;
+	u64 mask;
+	u32 *irqchip_regs;
+	u32 insn = regs->cp0_badinstr;
+	int offset = address - 0x1e010000;
+	u32 effected_cpus;
+
+	if (!write || !kvm_mips_vz->irq_chip) {
+		kvm_err("Error: Read fault in irqchip\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on store emulation: %08x\n", insn);
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+	szreg = mipsvz_get_store_params(insn);
+	val = regs->regs[szreg.reg];
+	mask = ~0ul >> (64 - (szreg.size * 8));
+	val &= mask;
+	val <<= 8 * (offset & 7);
+	mask <<= 8 * (offset & 7);
+
+	irqchip_regs = page_address(kvm_mips_vz->irq_chip);
+
+	mutex_lock(&kvm->lock);
+
+	spin_lock_irqsave(&kvm_mips_vz->irq_chip_lock, flags);
+
+	if (szreg.size == 8) {
+		offset &= ~7;
+		mipsvz_write_irqchip_reg(irqchip_regs, offset / 4 + 1,
+					 (u32)(val >> 32), (u32)(mask >> 32));
+		mipsvz_write_irqchip_reg(irqchip_regs, offset / 4,
+					 (u32)val, (u32)mask);
+	} else {
+		if (offset & 4) {
+			val >>= 32;
+			mask >>= 32;
+		}
+		offset &= ~3;
+		mipsvz_write_irqchip_reg(irqchip_regs, offset / 4,
+					 (u32)val, (u32)mask);
+	}
+
+	effected_cpus = mipsvz_write_irqchip_new_irqs(kvm, irqchip_regs);
+
+	spin_unlock_irqrestore(&kvm_mips_vz->irq_chip_lock, flags);
+
+	mipsvz_assert_irqs(kvm, effected_cpus);
+
+	mutex_unlock(&kvm->lock);
+
+	return true;
+}
+
+static int mipsvz_irq_line(struct kvm *kvm, unsigned long arg)
+{
+	void __user *argp = (void __user *)arg;
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	unsigned long flags;
+	struct kvm_irq_level irq_level;
+	u32 *irqchip_regs;
+	u32 mask, val;
+	int numbits;
+	int i;
+	u32 affected_cpus;
+	int ret = 0;
+
+	if (!kvm_mips_vz->irq_chip)
+		return -ENODEV;
+
+	if (copy_from_user(&irq_level, argp, sizeof(irq_level))) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	irqchip_regs = page_address(kvm_mips_vz->irq_chip);
+	numbits = irqchip_regs[0];
+
+	if (irq_level.irq < 9)
+		goto out; /* Ignore */
+	if (irq_level.irq >= numbits) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	mutex_lock(&kvm->lock);
+
+	mask = 1u << (irq_level.irq % 32);
+	i = irq_level.irq / 32;
+	if (irq_level.level)
+		val = mask;
+	else
+		val = 0;
+
+	spin_lock_irqsave(&kvm_mips_vz->irq_chip_lock, flags);
+
+	mipsvz_write_irqchip_reg(irqchip_regs, 2 + i, val, mask);
+	affected_cpus = mipsvz_write_irqchip_new_irqs(kvm, irqchip_regs);
+
+	spin_unlock_irqrestore(&kvm_mips_vz->irq_chip_lock, flags);
+
+	mipsvz_assert_irqs(kvm, affected_cpus);
+
+	mutex_unlock(&kvm->lock);
+
+out:
+	return ret;
+}
+
+static enum hrtimer_restart mipsvz_compare_timer_expire(struct hrtimer *t)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz;
+	vcpu_vz = container_of(t, struct kvm_mips_vcpu_vz, compare_timer);
+	kvm_vcpu_kick(vcpu_vz->vcpu);
+
+	return HRTIMER_NORESTART;
+}
+
+static long mipsvz_vm_ioctl(struct kvm *kvm, unsigned int ioctl,
+			    unsigned long arg)
+{
+	int r = -ENOIOCTLCMD;
+
+	switch (ioctl) {
+	case KVM_CREATE_IRQCHIP:
+		r = mipsvz_create_irqchip(kvm);
+		break;
+	case KVM_IRQ_LINE:
+		r = mipsvz_irq_line(kvm, arg);
+		break;
+	default:
+		break;
+	}
+	return r;
+}
+
+static struct kvm_vcpu *mipsvz_vcpu_create(struct kvm *kvm,
+					   unsigned int id)
+{
+	int r;
+	struct kvm_vcpu *vcpu = NULL;
+	struct kvm_mips_vcpu_vz *vcpu_vz = NULL;
+	struct mipsvz_kvm_tlb_entry *tlb_state = NULL;
+
+	/* MIPS CPU numbers have a maximum of 10 significant bits. */
+	if (id >= (1u << 10) || id >= KVM_MAX_VCPUS)
+		return ERR_PTR(-EINVAL);
+
+	vcpu_vz = kzalloc(sizeof(struct kvm_mips_vcpu_vz), GFP_KERNEL);
+	if (!vcpu_vz) {
+		r = -ENOMEM;
+		goto err;
+	}
+
+	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
+	if (!vcpu) {
+		r = -ENOMEM;
+		goto err;
+	}
+	vcpu->arch.impl = vcpu_vz;
+	vcpu_vz->vcpu = vcpu;
+
+	vcpu_vz->tlb_size = 128;
+	tlb_state = kzalloc(sizeof(struct mipsvz_kvm_tlb_entry) * vcpu_vz->tlb_size,
+			    GFP_KERNEL);
+	if (!tlb_state) {
+		r = -ENOMEM;
+		goto err;
+	}
+
+	vcpu_vz->tlb_state = tlb_state;
+
+	hrtimer_init(&vcpu_vz->compare_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	vcpu_vz->compare_timer.function = mipsvz_compare_timer_expire;
+
+	r = kvm_vcpu_init(vcpu, kvm, id);
+	if (r)
+		goto err;
+
+	return vcpu;
+err:
+	kfree(vcpu);
+	kfree(tlb_state);
+	return ERR_PTR(r);
+}
+
+static int mipsvz_vcpu_setup(struct kvm_vcpu *vcpu)
+{
+	int i;
+	unsigned long entryhi_base;
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+
+	entryhi_base = 0xffffffff90000000ul & mipsvz_entryhi_mask;
+
+	vcpu_vz->c0_ebase = 0xffffffff80000000ull | vcpu->vcpu_id;
+	vcpu_vz->c0_status = ST0_BEV | ST0_ERL;
+
+	for (i = 0; i < vcpu_vz->tlb_size; i++) {
+		vcpu_vz->tlb_state[i].entryhi = entryhi_base + 8192 * i;
+		vcpu_vz->tlb_state[i].entrylo0 = 0;
+		vcpu_vz->tlb_state[i].entrylo1 = 0;
+		vcpu_vz->tlb_state[i].pagemask = 0;
+	}
+	return 0;
+}
+
+static void mipsvz_vcpu_free(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	hrtimer_cancel(&vcpu_vz->compare_timer);
+	kfree(vcpu_vz->tlb_state);
+	kfree(vcpu_vz);
+	/* ?? kfree(vcpu); */
+}
+
+static void mipsvz_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	unsigned long flags;
+	int i;
+	u64 memctl2, vmconfig;
+	unsigned long old_ctx;
+	int mmu_sizem1;
+
+	mipsvz_readout_guest_cp0(vcpu_vz);
+
+	local_irq_save(flags);
+
+	for (i = 0; i < vcpu_vz->tlb_size; i++) {
+		write_gc0_index(i);
+		guest_tlb_read();
+		vcpu_vz->tlb_state[i].entryhi = read_gc0_entryhi();
+		vcpu_vz->tlb_state[i].entrylo0 = read_gc0_entrylo0();
+		vcpu_vz->tlb_state[i].entrylo1 = read_gc0_entrylo1();
+		vcpu_vz->tlb_state[i].pagemask = read_gc0_pagemask();
+	}
+
+	memctl2 = __read_64bit_c0_register($16, 6); /* 16,6: CvmMemCtl2 */
+	memctl2 |= (1ull << 17); /* INHIBITTS */
+	__write_64bit_c0_register($16, 6, memctl2);
+
+	vmconfig = __read_64bit_c0_register($16, 7); /* 16,7: CvmVMConfig */
+	vmconfig &= ~0xffull;
+
+	mmu_sizem1 = (vmconfig >> 12) & 0xff;
+	vmconfig |= mmu_sizem1;		/* Root size TLBM1 */
+	__write_64bit_c0_register($16, 7, vmconfig);
+
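+	/*
+	 * While the guest ran, the top tlb_size entries of the physical
+	 * TLB were carved out of the root's view for the guest TLB.  Now
+	 * that the full root TLB size is restored, scrub those entries
+	 * with unique, invalid EntryHi values.
+	 */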
+/* Copied from tlb-r4k.c */
+#define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
+	old_ctx = read_c0_entryhi();
+	write_c0_entrylo0(0);
+	write_c0_entrylo1(0);
+	for (i = 0; i < vcpu_vz->tlb_size; i++) {
+		write_c0_index(mmu_sizem1 - i);
+		write_c0_entryhi(UNIQUE_ENTRYHI(mmu_sizem1 - i));
+		mtc0_tlbw_hazard();
+		tlb_write_indexed();
+	}
+	tlbw_use_hazard();
+	write_c0_entryhi(old_ctx);
+
+	memctl2 &= ~(1ull << 17); /* INHIBITTS */
+	__write_64bit_c0_register($16, 6, memctl2);
+
+	local_irq_restore(flags);
+
+	vcpu_vz->rootsp = mips_kvm_rootsp[vcpu->cpu];
+	mips_kvm_rootsp[vcpu->cpu] = 0;
+	vcpu->cpu = -1;
+}
+
+static void mipsvz_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	unsigned long flags;
+	int i;
+	u32 t32;
+	u64 cp_val, t64;
+	int mmu_size;
+
+	vcpu->cpu = cpu;
+	if (test_tsk_thread_flag(current, TIF_GUESTMODE))
+		mips_kvm_rootsp[vcpu->cpu] = vcpu_vz->rootsp;
+
+	write_c0_gtoffset(mipsvz_cp0_count_offset[cpu] + vcpu_vz->c0_count_offset);
+
+	local_irq_save(flags);
+
+	t32 = read_c0_guestctl0();
+	/* GM = RI = MC = SFC2 = PIP = 0; CP0 = GT = CG = CF = SFC1 = 1 */
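+	/* The OR sets every bit named above; the XOR then clears the
+	 * ones that must end up 0 (the clear-mask is a subset of the
+	 * set-mask). */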
+	t32 |= 0xf380fc03;
+	t32 ^= 0xe000fc02;
+
+	write_c0_guestctl0(t32);
+
+	t32 = read_gc0_config1();
+	t32 &= ~(1u << 3); /* Guest.Config1[WR] = 0 */
+	write_gc0_config1(t32);
+
+	t64 = __read_64bit_gc0_register($9, 7); /* 9, 7: Guest.CvmCtl */
+	t64 &= ~(7ull << 4); /* IPTI */
+	t64 |= (7ull << 4);
+	t64 &= ~(7ull << 7); /* IPPCI */
+	t64 |= (6ull << 7);
+	__write_64bit_gc0_register($9, 7, t64);
+
+	cp_val = __read_64bit_c0_register($16, 7); /* 16, 7: CvmVMConfig */
+	cp_val |= (1ull << 60); /* No I/O hole translation. */
+	cp_val &= ~0xffull;
+
+	mmu_size = ((cp_val >> 12) & 0xff) + 1;
+	cp_val |= mmu_size - vcpu_vz->tlb_size - 1;	/* Root size TLBM1 */
+	__write_64bit_c0_register($16, 7, cp_val);
+
+	cp_val = __read_64bit_c0_register($16, 6); /* 16, 6: CvmMemCtl2 */
+	cp_val |= (1ull << 17); /* INHIBITTS */
+	__write_64bit_c0_register($16, 6, cp_val);
+
+	for (i = 0; i < vcpu_vz->tlb_size; i++) {
+		write_gc0_index(i);
+		write_gc0_entryhi(vcpu_vz->tlb_state[i].entryhi);
+		write_gc0_entrylo0(vcpu_vz->tlb_state[i].entrylo0);
+		write_gc0_entrylo1(vcpu_vz->tlb_state[i].entrylo1);
+		write_gc0_pagemask(vcpu_vz->tlb_state[i].pagemask);
+		guest_tlb_write_indexed();
+	}
+
+	cp_val &= ~(1ull << 17); /* INHIBITTS */
+	__write_64bit_c0_register($16, 6, cp_val);
+
+	spin_lock(&kvm_mips_vz->irq_chip_lock);
+	if (kvm_mips_vz->irq_chip) {
+		u32 gc2 = read_c0_guestctl2();
+		gc2 = (gc2 & ~0xff00) | (((u32)vcpu_vz->injected_ipx) << 8);
+		write_c0_guestctl2(gc2);
+	}
+	spin_unlock(&kvm_mips_vz->irq_chip_lock);
+
+	local_irq_restore(flags);
+
+	mipsvz_install_guest_cp0(vcpu_vz);
+	vcpu_vz->have_counter_state = false;
+	/* OCTEON needs a local ICache flush when switching guests. */
+	local_flush_icache_range(0, 0);
+}
+
+static bool mipsvz_emulate_io(struct pt_regs *regs,
+			      unsigned long write,
+			      unsigned long address,
+			      struct kvm *kvm,
+			      struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	u32 insn = regs->cp0_badinstr;
+
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on store emulation: %08x\n", insn);
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+	vcpu->run->io.port = address - 0x1e000000;
+	vcpu->run->io.count = 1;
+	/* Store the data after the end of the kvm_run */
+	vcpu->run->io.data_offset = sizeof(struct kvm_run);
+	if (write) {
+		u64 val;
+		void *dest = (char *)vcpu->run + sizeof(struct kvm_run);
+		struct mipsvz_szreg r = mipsvz_get_store_params(insn);
+		if (r.reg < 0) {
+			kvm_err("Error: Bad insn on store emulation: %08x\n", insn);
+			mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+		}
+		vcpu->run->io.size = r.size;
+		vcpu->run->io.direction = KVM_EXIT_IO_OUT;
+		val = regs->regs[r.reg];
+		switch (r.size) {
+		case 1:
+			*(u8 *)dest = (u8)val;
+			kvm_debug("I/O out %02x -> %04x\n", (unsigned)(u8)val,
+				  vcpu->run->io.port);
+			break;
+		case 2:
+			*(u16 *)dest = (u16)val;
+			kvm_debug("I/O out %04x -> %04x\n", (unsigned)(u16)val,
+				  vcpu->run->io.port);
+			break;
+		case 4:
+			*(u32 *)dest = (u32)val;
+			kvm_debug("I/O out %08x -> %04x\n", (unsigned)(u32)val,
+				  vcpu->run->io.port);
+			break;
+		default:
+			*(u64 *)dest = val;
+			kvm_debug("I/O out %016lx -> %04x\n", (unsigned long)val,
+				  vcpu->run->io.port);
+			break;
+		}
+	} else {
+		struct mipsvz_szreg r = mipsvz_get_load_params(insn);
+		if (r.reg < 0) {
+			kvm_err("Error: Bad insn on load emulation: %08x\n", insn);
+			mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+		}
+		vcpu_vz->last_exit_insn = insn;
+		vcpu->run->io.size = r.size;
+		vcpu->run->io.direction = KVM_EXIT_IO_IN;
+		kvm_debug("I/O in %04x ...\n", vcpu->run->io.port);
+	}
+	mipsvz_exit_vm(regs, KVM_EXIT_IO);
+	/* Never Gets Here. */
+	return true;
+}
+
+/* Return true if it's a mipsvz guest fault. */
+bool mipsvz_page_fault(struct pt_regs *regs, unsigned long write,
+		       unsigned long address)
+{
+	unsigned long flags;
+	pte_t *ptep, entry;
+	u64 saved_entryhi;
+	pfn_t pfn;
+	s32 idx;
+	int srcu_idx;
+	unsigned long prot_bits;
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm;
+	struct kvm_mips_vz *kvm_mips_vz;
+	bool writable;
+
+	/*
+	 * Guest Physical Addresses can only be in the XKUSEG range
+	 * (which ends at XKSSEG).  Other addresses belong to the kernel.
+	 */
+	if (address >= XKSSEG)
+		return false;
+
+	vcpu = current->thread.vcpu;
+	kvm = vcpu->kvm;
+	kvm_mips_vz = kvm->arch.impl;
+
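+	/*
+	 * Guest physical layout handled here:
+	 *   [0x10000000, 0x1e000000): mmio, currently exits the VM
+	 *   [0x1e000000, 0x1e010000): emulated I/O ports
+	 *   [0x1e010000, 0x1e020000): emulated irqchip
+	 * Anything else at or above 0x10000000 exits the VM; lower
+	 * addresses are RAM and are mapped on demand below.
+	 */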
+	if (address >= 0x10000000) {
+		if (address < 0x1e000000) {
+			/* mmio */
+			mipsvz_exit_vm(regs, KVM_EXIT_EXCEPTION);
+			/* Never Gets Here. */
+		} else if (address < 0x1e010000) {
+			return mipsvz_emulate_io(regs, write, address,
+						 kvm, vcpu);
+		} else if (address < 0x1e020000) {
+			return mipsvz_write_irqchip(regs, write, address,
+						    kvm, vcpu);
+		} else {
+			mipsvz_exit_vm(regs, KVM_EXIT_EXCEPTION);
+			/* Never Gets Here. */
+		}
+	}
+
+	writable = false;
+
+	mutex_lock(&kvm_mips_vz->guest_mm_lock);
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+
+	pfn = gfn_to_pfn_prot(kvm, address >> PAGE_SHIFT, write, &writable);
+
+#if 0
+	kvm_err("mipsvz_page_fault[%d] for %s: %lx -> page %x %s\n",
+		vcpu->vcpu_id, write ? "write" : "read",
+		address, (unsigned)pfn, writable ? "writable" : "read-only");
+#endif
+
+	if (!pfn) {
+		kvm_err("mipsvz_page_fault -- no mapping, must exit\n");
+		goto bad;
+	}
+
+	ptep = mipsvz_pte_for_gpa(kvm, address);
+
+	prot_bits = __READABLE | _PAGE_PRESENT;
+
+	/* If it is the same page, don't downgrade _PAGE_DIRTY */
+	if (pte_pfn(*ptep) == pfn && (pte_val(*ptep) & _PAGE_DIRTY))
+		prot_bits |= __WRITEABLE;
+	if (write) {
+		if (!writable) {
+			kvm_err("mipsvz_page_fault writing to RO memory.\n");
+			goto bad;
+		} else {
+			prot_bits |= __WRITEABLE;
+			kvm_set_pfn_dirty(pfn);
+		}
+
+	} else {
+		kvm_set_pfn_accessed(pfn);
+	}
+	entry = pfn_pte(pfn, __pgprot(prot_bits));
+
+	set_pte(ptep, entry);
+
+	/* Directly set a valid TLB entry.  No more faults. */
+
+	local_irq_save(flags);
+	saved_entryhi = read_c0_entryhi();
+	address &= (PAGE_MASK << 1);
+	write_c0_entryhi(address | current->thread.guest_asid);
+	mtc0_tlbw_hazard();
+	tlb_probe();
+	tlb_probe_hazard();
+	idx = read_c0_index();
+
+	/* Go to a PTE pair boundary. */
+	ptep = (pte_t *)(((unsigned long)ptep) & ~(2 * sizeof(pte_t) - 1));
+	write_c0_entrylo0(pte_to_entrylo(pte_val(*ptep++)));
+	write_c0_entrylo1(pte_to_entrylo(pte_val(*ptep)));
+	mtc0_tlbw_hazard();
+	if (idx < 0)
+		tlb_write_random();
+	else
+		tlb_write_indexed();
+	tlbw_use_hazard();
+	write_c0_entryhi(saved_entryhi);
+	local_irq_restore(flags);
+
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	mutex_unlock(&kvm_mips_vz->guest_mm_lock);
+	return true;
+
+bad:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	mutex_unlock(&kvm_mips_vz->guest_mm_lock);
+	mipsvz_exit_vm(regs, KVM_EXIT_EXCEPTION);
+	/* Never Gets Here. */
+	return true;
+}
+
+int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
+{
+	kvm_debug("kvm_unmap_hva for %lx\n", hva);
+	return 1;
+}
+
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+{
+	kvm_err("kvm_set_spte_hva %lx\n", hva);
+}
+
+int kvm_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	kvm_err("kvm_age_hva %lx\n", hva);
+	return 0;
+}
+
+int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	kvm_err("kvm_test_age_hva %lx\n", hva);
+	return 0;
+}
+
+bool mipsvz_cp_unusable(struct pt_regs *regs)
+{
+	bool r = false;
+	unsigned int cpid = (regs->cp0_cause >> CAUSEB_CE) & 3;
+
+	preempt_disable();
+
+	if (cpid != 1 || !cpu_has_fpu)
+		goto out;
+
+	regs->cp0_status |= (ST0_CU1 | ST0_FR); /* Enable the FPU in the guest... */
+	set_c0_status(ST0_CU1 | ST0_FR); /* ... and in root mode, so we can load its contents. */
+	enable_fpu_hazard();
+	mipsvz_install_fpu(current->thread.vcpu);
+
+	r = true;
+out:
+	preempt_enable();
+	return r;
+}
+
+static void mipsvz_commit_memory_region(struct kvm *kvm,
+					struct kvm_userspace_memory_region *mem,
+					const struct kvm_memory_slot *old,
+					enum kvm_mr_change change)
+{
+}
+
+static int mipsvz_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+#define KVM_REG_MIPS_CP0_INDEX (0x10000 + 8 * 0 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYLO0 (0x10000 + 8 * 2 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYLO1 (0x10000 + 8 * 3 + 0)
+#define KVM_REG_MIPS_CP0_CONTEXT (0x10000 + 8 * 4 + 0)
+#define KVM_REG_MIPS_CP0_USERLOCAL (0x10000 + 8 * 4 + 2)
+#define KVM_REG_MIPS_CP0_PAGEMASK (0x10000 + 8 * 5 + 0)
+#define KVM_REG_MIPS_CP0_PAGEGRAIN (0x10000 + 8 * 5 + 1)
+#define KVM_REG_MIPS_CP0_WIRED (0x10000 + 8 * 6 + 0)
+#define KVM_REG_MIPS_CP0_HWRENA (0x10000 + 8 * 7 + 0)
+#define KVM_REG_MIPS_CP0_BADVADDR (0x10000 + 8 * 8 + 0)
+#define KVM_REG_MIPS_CP0_COUNT (0x10000 + 8 * 9 + 0)
+#define KVM_REG_MIPS_CP0_ENTRYHI (0x10000 + 8 * 10 + 0)
+#define KVM_REG_MIPS_CP0_COMPARE (0x10000 + 8 * 11 + 0)
+#define KVM_REG_MIPS_CP0_STATUS (0x10000 + 8 * 12 + 0)
+#define KVM_REG_MIPS_CP0_CAUSE (0x10000 + 8 * 13 + 0)
+#define KVM_REG_MIPS_CP0_EBASE (0x10000 + 8 * 15 + 1)
+#define KVM_REG_MIPS_CP0_CONFIG (0x10000 + 8 * 16 + 0)
+#define KVM_REG_MIPS_CP0_CONFIG1 (0x10000 + 8 * 16 + 1)
+#define KVM_REG_MIPS_CP0_CONFIG2 (0x10000 + 8 * 16 + 2)
+#define KVM_REG_MIPS_CP0_CONFIG3 (0x10000 + 8 * 16 + 3)
+#define KVM_REG_MIPS_CP0_CONFIG7 (0x10000 + 8 * 16 + 7)
+#define KVM_REG_MIPS_CP0_XCONTEXT (0x10000 + 8 * 20 + 0)
+#define KVM_REG_MIPS_CP0_ERROREPC (0x10000 + 8 * 30 + 0)
+
+static int mipsvz_get_reg(struct kvm_vcpu *vcpu,
+			  const struct kvm_one_reg *reg)
+{
+	u64 __user *uaddr = (u64 __user *)(long)reg->addr;
+	s64 v;
+
+	switch (reg->id) {
+	case KVM_REG_MIPS_CP0_INDEX:
+		v = read_gc0_index();
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYLO0:
+		v = read_gc0_entrylo0();
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYLO1:
+		v = read_gc0_entrylo1();
+		break;
+	case KVM_REG_MIPS_CP0_CONTEXT:
+		v = read_gc0_context();
+		break;
+	case KVM_REG_MIPS_CP0_USERLOCAL:
+		v = read_gc0_userlocal();
+		break;
+	case KVM_REG_MIPS_CP0_PAGEMASK:
+		v = read_gc0_pagemask();
+		break;
+	case KVM_REG_MIPS_CP0_PAGEGRAIN:
+		v = read_gc0_pagegrain();
+		break;
+	case KVM_REG_MIPS_CP0_WIRED:
+		v = read_gc0_wired();
+		break;
+	case KVM_REG_MIPS_CP0_HWRENA:
+		v = read_gc0_hwrena();
+		break;
+	case KVM_REG_MIPS_CP0_BADVADDR:
+		v = read_gc0_badvaddr();
+		break;
+	case KVM_REG_MIPS_CP0_COUNT:
+		v = read_gc0_count();
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYHI:
+		v = read_gc0_entryhi();
+		break;
+	case KVM_REG_MIPS_CP0_COMPARE:
+		v = read_gc0_compare();
+		break;
+	case KVM_REG_MIPS_CP0_STATUS:
+		v = read_gc0_status();
+		break;
+	case KVM_REG_MIPS_CP0_CAUSE:
+		v = read_gc0_cause();
+		break;
+	case KVM_REG_MIPS_CP0_EBASE:
+		v = read_gc0_ebase();
+		break;
+	case KVM_REG_MIPS_CP0_XCONTEXT:
+		v = read_gc0_xcontext();
+		break;
+	default:
+		return -EINVAL;
+	}
+	return put_user(v, uaddr);
+}
+
+static int mipsvz_set_reg(struct kvm_vcpu *vcpu,
+			  const struct kvm_one_reg *reg,
+			  u64 v)
+{
+	switch (reg->id) {
+	case KVM_REG_MIPS_CP0_INDEX:
+		write_gc0_index(v);
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYLO0:
+		write_gc0_entrylo0(v);
+		break;
+	case KVM_REG_MIPS_CP0_ENTRYLO1:
+		write_gc0_entrylo1(v);
+		break;
+	case KVM_REG_MIPS_CP0_CONTEXT:
+		write_gc0_context(v);
+		break;
+	case KVM_REG_MIPS_CP0_USERLOCAL:
+		write_gc0_userlocal(v);
+		break;
+	case KVM_REG_MIPS_CP0_PAGEMASK:
+		write_gc0_pagemask(v);
+		break;
+	case KVM_REG_MIPS_CP0_PAGEGRAIN:
+		write_gc0_pagegrain(v);
+		break;
+	case KVM_REG_MIPS_CP0_WIRED:
+		write_gc0_wired(v);
+		break;
+	case KVM_REG_MIPS_CP0_HWRENA:
+		write_gc0_hwrena(v);
+		break;
+	case KVM_REG_MIPS_CP0_BADVADDR:
+		write_gc0_badvaddr(v);
+		break;
+/*
+	case MSR_MIPS_CP0_COUNT:
+		????;
+		break;
+*/
+	case KVM_REG_MIPS_CP0_ENTRYHI:
+		write_gc0_entryhi(v);
+		break;
+	case KVM_REG_MIPS_CP0_COMPARE:
+		write_gc0_compare(v);
+		break;
+	case KVM_REG_MIPS_CP0_STATUS:
+		write_gc0_status(v);
+		break;
+	case KVM_REG_MIPS_CP0_CAUSE:
+		write_gc0_cause(v);
+		break;
+	case KVM_REG_MIPS_CP0_EBASE:
+		write_gc0_ebase(v);
+		break;
+	case KVM_REG_MIPS_CP0_XCONTEXT:
+		write_gc0_xcontext(v);
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int mipsvz_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+{
+	int ret = 0;
+	int cpu;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+
+	if (kvm_run->exit_reason == KVM_EXIT_IO &&
+	    kvm_run->io.direction == KVM_EXIT_IO_IN) {
+		ret = mipsvz_handle_io_in(vcpu);
+		if (unlikely(ret)) {
+			pr_warn("Error: Return from KVM_EXIT_IO with bad exit_insn state.\n");
+			return ret;
+		}
+	}
+
+	lose_fpu(1);
+
+	WARN(irqs_disabled(), "IRQs should be on here.");
+	local_irq_disable();
+	kvm_run->exit_reason = KVM_EXIT_UNKNOWN;
+	cpu = raw_smp_processor_id();
+
+	/*
+	 * Make sure the Root guest and host contexts are in the same
+	 * ASID generation
+	 */
+	if ((kvm_mips_vz->asid[cpu] ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+		kvm_mips_vz->asid[cpu] = get_new_asid(cpu);
+	if ((cpu_context(cpu, current->mm) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+		drop_mmu_context(current->mm, cpu);
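+	/* drop_mmu_context() may have started a new ASID generation, so re-check. */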
+	if ((kvm_mips_vz->asid[cpu] ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+		kvm_mips_vz->asid[cpu] = get_new_asid(cpu);
+
+	current->thread.mm_asid = read_c0_entryhi() & ASID_MASK;
+	current->thread.guest_asid = kvm_mips_vz->asid[cpu] & ASID_MASK;
+	current->thread.vcpu = vcpu;
+
+	write_c0_entryhi(current->thread.guest_asid);
+	TLBMISS_HANDLER_SETUP_PGD(kvm_mips_vz->pgd);
+
+	set_tsk_thread_flag(current, TIF_GUESTMODE);
+
+	mipsvz_start_guest(vcpu);
+
+	/* Save FPU if needed. */
+	if (read_c0_status() & ST0_CU1) {
+		set_c0_status(ST0_FR);
+		mipsvz_readout_fpu(vcpu);
+		disable_fpu();
+	}
+
+	local_irq_enable();
+
+	if (signal_pending(current) && kvm_run->exit_reason == KVM_EXIT_INTR)
+		ret = -EINTR;
+
+	return ret;
+}
+#if 0
+int kvm_dev_ioctl_check_extension(long ext)
+{
+	int r = 0;
+
+	switch (ext) {
+	case KVM_CAP_IRQCHIP:
+		r = 1;
+		break;
+	}
+
+	return r;
+}
+#endif
+static int mipsvz_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
+{
+	int me;
+	int ret = 0;
+
+	me = get_cpu();
+
+	if (vcpu->cpu == me) {
+		u32 cause = read_gc0_cause();
+		ret = (cause & CAUSEF_TI) != 0;
+	} else {
+		kvm_err("kvm_cpu_has_pending_timer:  Argh!!\n");
+	}
+
+	put_cpu();
+
+	kvm_debug("kvm_cpu_has_pending_timer: %d\n", ret);
+	return ret;
+}
+
+static int mipsvz_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz = vcpu->arch.impl;
+	unsigned long flags;
+	u64 *irqchip_regs;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mips_vz *kvm_mips_vz = kvm->arch.impl;
+	u8 injected_ipx;
+
+	kvm_debug("kvm_arch_vcpu_runnable\n");
+
+	if (!kvm_mips_vz->irq_chip)
+		return 0;
+
+	irqchip_regs = page_address(kvm_mips_vz->irq_chip);
+
+	spin_lock_irqsave(&kvm_mips_vz->irq_chip_lock, flags);
+	injected_ipx = vcpu_vz->injected_ipx;
+	spin_unlock_irqrestore(&kvm_mips_vz->irq_chip_lock, flags);
+
+	return injected_ipx != 0;
+}
+
+static void mipsvz_hypercall_exit_vm(struct pt_regs *regs)
+{
+	mipsvz_exit_vm(regs, KVM_EXIT_SHUTDOWN);
+}
+
+static void mipsvz_hypercall_get_hpt_frequency(struct pt_regs *regs)
+{
+	regs->regs[2] = mips_hpt_frequency;
+}
+
+typedef void (*mipsvz_hypercall_handler_t)(struct pt_regs *);
+typedef void (*mipsvz_hypervisor_handler_t)(struct pt_regs *);
+
+static const mipsvz_hypercall_handler_t mipsvz_hypercall_handlers[] = {
+	NULL,				/* Write to console. */
+	mipsvz_hypercall_exit_vm,	/* Exit VM */
+	mipsvz_hypercall_get_hpt_frequency, /* get the mips_hpt_frequency */
+};
+
+static void mipsvz_hypercall(struct pt_regs *regs)
+{
+	unsigned long call_number = regs->regs[2];
+
+	kvm_debug("kvm_mipsvz_hypercall: %lx\n", call_number);
+
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on hypercall\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	if (call_number >= ARRAY_SIZE(mipsvz_hypercall_handlers) ||
+	    !mipsvz_hypercall_handlers[call_number]) {
+		struct kvm_vcpu *vcpu = current->thread.vcpu;
+		struct kvm_run *kvm_run = vcpu->run;
+		int i;
+
+		kvm_run->hypercall.nr = call_number;
+		for (i = 0; i < ARRAY_SIZE(kvm_run->hypercall.args); i++)
+			kvm_run->hypercall.args[i] = regs->regs[4 + i];
+		mipsvz_exit_vm(regs, KVM_EXIT_HYPERCALL);
+	} else {
+		mipsvz_hypercall_handlers[call_number](regs);
+	}
+}
+
+static void mipsvz_sfce(struct pt_regs *regs)
+{
+	bool is_64bit;
+	int rt, rd, sel;
+	u64 rt_val;
+	u32 t, m;
+	u32 insn = regs->cp0_badinstr;
+
+	if ((insn & 0xffc007f8) != 0x40800000) {
+		kvm_err("Error: SFCE not on DMTC0/MTC0.\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+	/* Move past the DMTC0/MTC0 insn */
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on SFCE\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	is_64bit = insn & (1 << 21);
+	rt = (insn >> 16) & 0x1f;
+	rd = (insn >> 11) & 0x1f;
+	sel = insn & 7;
+
+	rt_val = rt ? regs->regs[rt] : 0;
+
+	switch ((rd << 3) | sel) {
+	case 0x60: /* Status */
+		write_gc0_status((u32)rt_val);
+		break;
+	case 0x61: /* IntCtl */
+		/* Ignore */
+		break;
+	case 0x68: /* Cause */
+		m = (1 << 27) | (1 << 23); /* DC and IV bits only */
+		t = read_gc0_cause();
+		t &= ~m;
+		t |= (m & (u32)rt_val);
+		write_gc0_cause(t);
+		break;
+	default:
+		kvm_err("Error: SFCE unknown target reg.\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+		break;
+	}
+}
+
+static void mipsvz_handle_cache(struct pt_regs *regs,
+				union mips_instruction insn)
+{
+	s64 ea;
+	s16 offset;
+
+	/* Move past the CACHE insn */
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on CACHE GPSI\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	offset = insn.c_format.simmediate;
+
+	switch (insn.c_format.cache) {
+	case 0: /* Primary Instruction */
+		switch (insn.c_format.c_op) {
+		case 0: /* Index Invalidate */
+			ea = regs->regs[insn.c_format.rs] + offset;
+			asm volatile("cache	0x00,0(%0)" : : "d" (ea));
+			break;
+		case 4: /* ICache invalidate EA */
+			ea = regs->regs[insn.c_format.rs] + offset;
+			asm volatile("synci	0($0)");
+			break;
+		default:
+			goto cannot_handle;
+		}
+		break;
+	case 1: /* Primary Data */
+		switch (insn.c_format.c_op) {
+		case 0: /* writeback/invalidate tag */
+#if 0
+			ea = regs->regs[insn.c_format.rs] + offset;
+			asm volatile("cache	0x01,0(%0)" : : "d" (ea));
+			break;
+#endif
+		case 5: /* writeback/invalidate EA */
+			/* OCTEON has coherent caches, but clear the write buffers. */
+			asm volatile("sync");
+			break;
+		default:
+			goto cannot_handle;
+		}
+		break;
+	case 2: /* Tertiary */
+	case 3: /* Secondary */
+	default:
+		goto cannot_handle;
+	}
+
+	return;
+cannot_handle:
+	kvm_err("Error: GPSI Illegal cache op %08x\n", insn.word);
+	mipsvz_exit_vm(regs, KVM_EXIT_EXCEPTION);
+}
+
+static void mipsvz_handle_wait(struct pt_regs *regs)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz;
+	struct kvm_vcpu *vcpu;
+
+	/* Move past the WAIT insn */
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on WAIT GPSI\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	preempt_disable();
+	vcpu = current->thread.vcpu;
+	vcpu_vz = vcpu->arch.impl;
+	mipsvz_readout_cp0_counter_state(vcpu_vz);
+	if ((vcpu_vz->c0_cause & CAUSEF_TI) == 0) {
+		ktime_t exp;
+		u32 clk_to_exp = vcpu_vz->c0_compare - vcpu_vz->c0_count;
+		u64 ns_to_exp = (clk_to_exp * 1000000000ull) / mips_hpt_frequency;
+		/* Arm the timer */
+		exp = ktime_add_ns(vcpu_vz->compare_timer_read, ns_to_exp);
+		hrtimer_start(&vcpu_vz->compare_timer, exp, HRTIMER_MODE_ABS);
+	}
+	preempt_enable();
+
+	kvm_vcpu_block(vcpu);
+
+	if (signal_pending(current))
+		mipsvz_exit_vm(regs, KVM_EXIT_INTR);
+}
+
+static void mipsvz_handle_gpsi_mtc0(struct pt_regs *regs,
+				    union mips_instruction insn)
+{
+	struct kvm_mips_vcpu_vz *vcpu_vz;
+	struct kvm_vcpu *vcpu;
+	u32 val;
+	u32 offset;
+
+	/* Move past the MTC0 insn */
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on MTC0 GPSI\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	preempt_disable();
+	vcpu = current->thread.vcpu;
+	vcpu_vz = vcpu->arch.impl;
+	switch (insn.c0m_format.rd) {
+	case 9:
+		if (insn.c0m_format.sel != 0)
+			goto bad_reg;
+		/* Count */
+		val = regs->regs[insn.c0m_format.rt];
+		offset = val - read_gc0_count();
+		vcpu_vz->c0_count_offset += offset;
+		write_c0_gtoffset(mipsvz_cp0_count_offset[vcpu->cpu] + vcpu_vz->c0_count_offset);
+		break;
+	default:
+		goto bad_reg;
+	}
+
+	preempt_enable();
+	return;
+
+bad_reg:
+	kvm_err("Error: Bad Reg($%d,%d) on MTC0 GPSI\n",
+		insn.c0m_format.rd, insn.c0m_format.sel);
+	mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+}
+
+static void mipsvz_handle_gpsi_mfc0(struct pt_regs *regs,
+				    union mips_instruction insn)
+{
+	/* Move past the MFC0 insn */
+	if (mipsvz_compute_return_epc(regs)) {
+		kvm_err("Error: Bad EPC on MFC0 GPSI\n");
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+
+	switch (insn.c0m_format.rd) {
+	case 12:
+		if (insn.c0m_format.sel != 2)
+			goto bad_reg;
+		/* SRSCtl */
+		regs->regs[insn.c0m_format.rt] = 0;
+		break;
+	case 15:
+		if (insn.c0m_format.sel != 0)
+			goto bad_reg;
+		/* PRId */
+		regs->regs[insn.c0m_format.rt] = (s64)read_c0_prid();
+		break;
+	default:
+		goto bad_reg;
+	}
+	return;
+
+bad_reg:
+	kvm_err("Error: Bad Reg($%d,%d) on MTC0 GPSI\n",
+		insn.c0m_format.rd, insn.c0m_format.sel);
+	mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+
+}
+
+static void mipsvz_gpsi(struct pt_regs *regs)
+{
+	union mips_instruction insn;
+
+	insn.word = regs->cp0_badinstr;
+
+	if (insn.c_format.opcode == cache_op)
+		mipsvz_handle_cache(regs, insn);
+	else if (insn.c0_format.opcode == cop0_op &&
+		 insn.c0_format.co == 1 &&
+		 insn.c0_format.func == wait_op)
+		mipsvz_handle_wait(regs);
+	else if (insn.c0m_format.opcode == cop0_op &&
+		 insn.c0m_format.func == mtc_op &&
+		 insn.c0m_format.code == 0)
+		mipsvz_handle_gpsi_mtc0(regs, insn);
+	else if (insn.c0m_format.opcode == cop0_op &&
+		 insn.c0m_format.func == mfc_op &&
+		 insn.c0m_format.code == 0)
+		mipsvz_handle_gpsi_mfc0(regs, insn);
+	else {
+		kvm_err("Error: GPSI not on CACHE, WAIT, MFC0 or MTC0: %08x\n",
+			insn.word);
+		mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+	}
+}
+
+static void mipsvz_default_ex(struct pt_regs *regs)
+{
+	u32 guestctl0 = read_c0_guestctl0();
+	int gexc_code = (guestctl0 >> 2) & 0x1f;
+
+	kvm_err("Hypervisor Exception (%d): Not handled yet\n", gexc_code);
+	mipsvz_exit_vm(regs, KVM_EXIT_INTERNAL_ERROR);
+}
+
+#define mipsvz_gva mipsvz_default_ex
+#define mipsvz_gpa mipsvz_default_ex
+#define mipsvz_ghfc mipsvz_default_ex
+
+static const mipsvz_hypervisor_handler_t mipsvz_hypervisor_handlers[] = {
+	mipsvz_gpsi,		/* 0  - Guest Privileged Sensitive Instruction */
+	mipsvz_sfce,		/* 1  - Guest Software Field Change */
+	mipsvz_hypercall,	/* 2  - Hypercall */
+	mipsvz_default_ex,	/* 3  - Guest Reserved Instruction Redirect. */
+	mipsvz_default_ex,	/* 4  - Implementation defined */
+	mipsvz_default_ex,	/* 5  - Implementation defined */
+	mipsvz_default_ex,	/* 6  - Implementation defined */
+	mipsvz_default_ex,	/* 7  - Implementation defined */
+	mipsvz_gva,		/* 8  - Guest Mode Initiated Root TLB Exception: GVA */
+	mipsvz_ghfc,		/* 9  - Guest Hardware Field Change */
+	mipsvz_gpa,		/* 10 - Guest Mode Initiated Root TLB Exception: GPA */
+	mipsvz_default_ex,	/* 11 - Reserved */
+	mipsvz_default_ex,	/* 12 - Reserved */
+	mipsvz_default_ex,	/* 13 - Reserved */
+	mipsvz_default_ex,	/* 14 - Reserved */
+	mipsvz_default_ex,	/* 15 - Reserved */
+	mipsvz_default_ex,	/* 16 - Reserved */
+	mipsvz_default_ex,	/* 17 - Reserved */
+	mipsvz_default_ex,	/* 18 - Reserved */
+	mipsvz_default_ex,	/* 19 - Reserved */
+	mipsvz_default_ex,	/* 20 - Reserved */
+	mipsvz_default_ex,	/* 21 - Reserved */
+	mipsvz_default_ex,	/* 22 - Reserved */
+	mipsvz_default_ex,	/* 23 - Reserved */
+	mipsvz_default_ex,	/* 24 - Reserved */
+	mipsvz_default_ex,	/* 25 - Reserved */
+	mipsvz_default_ex,	/* 26 - Reserved */
+	mipsvz_default_ex,	/* 27 - Reserved */
+	mipsvz_default_ex,	/* 28 - Reserved */
+	mipsvz_default_ex,	/* 29 - Reserved */
+	mipsvz_default_ex,	/* 30 - Reserved */
+	mipsvz_default_ex,	/* 31 - Reserved */
+};
+/*
+ * Hypervisor Exception handler, called with interrupts disabled.
+ */
+asmlinkage void do_hypervisor(struct pt_regs *regs)
+{
+	int gexc_code;
+	u32 guestctl0 = read_c0_guestctl0();
+
+	/* Must read before any exceptions can happen. */
+	regs->cp0_badinstr = read_c0_badinstr();
+	regs->cp0_badinstrp = read_c0_badinstrp();
+
+	/* This could take a while, turn interrupts back on. */
+	local_irq_enable();
+
+	gexc_code = (guestctl0 >> 2) & 0x1f;
+
+	mipsvz_hypervisor_handlers[gexc_code](regs);
+}
+
+static long mipsvz_vcpu_ioctl(struct kvm_vcpu *vcpu, unsigned int ioctl,
+			      unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+static const struct kvm_mips_ops kvm_mips_vz_ops = {
+	.vcpu_runnable = mipsvz_vcpu_runnable,
+	.destroy_vm = mipsvz_destroy_vm,
+	.commit_memory_region = mipsvz_commit_memory_region,
+	.vcpu_create = mipsvz_vcpu_create,
+	.vcpu_free = mipsvz_vcpu_free,
+	.vcpu_run = mipsvz_vcpu_run,
+	.vm_ioctl = mipsvz_vm_ioctl,
+	.vcpu_ioctl = mipsvz_vcpu_ioctl,
+	.get_reg = mipsvz_get_reg,
+	.set_reg = mipsvz_set_reg,
+	.cpu_has_pending_timer = mipsvz_cpu_has_pending_timer,
+	.vcpu_init = mipsvz_vcpu_init,
+	.vcpu_setup = mipsvz_vcpu_setup,
+	.vcpu_load = mipsvz_vcpu_load,
+	.vcpu_put = mipsvz_vcpu_put,
+};
+
+int mipsvz_init_vm(struct kvm *kvm, unsigned long type)
+{
+	struct kvm_mips_vz *kvm_mips_vz;
+
+	if (!cpu_has_vz)
+		return -ENODEV;
+	if (type != 1)
+		return -EINVAL;
+
+	kvm->arch.ops = &kvm_mips_vz_ops;
+
+	kvm_mips_vz = kzalloc(sizeof(struct kvm_mips_vz), GFP_KERNEL);
+	if (!kvm_mips_vz)
+		goto err;
+
+	kvm->arch.impl = kvm_mips_vz;
+
+	mutex_init(&kvm_mips_vz->guest_mm_lock);
+
+	kvm_mips_vz->pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD_ORDER);
+	if (!kvm_mips_vz->pgd)
+		goto err;
+
+	pgd_init((unsigned long)kvm_mips_vz->pgd);
+
+	spin_lock_init(&kvm_mips_vz->irq_chip_lock);
+
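+	/* Exception code 27 is the VZ Guest Exit (hypervisor) exception. */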
+	set_except_vector(27, handle_hypervisor);
+
+	return 0;
+err:
+	kfree(kvm_mips_vz);
+	return -ENOMEM;
+}
diff --git a/arch/mips/kvm/kvm_mipsvz_guest.S b/arch/mips/kvm/kvm_mipsvz_guest.S
new file mode 100644
index 0000000..8e20dbc
--- /dev/null
+++ b/arch/mips/kvm/kvm_mipsvz_guest.S
@@ -0,0 +1,234 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2013 Cavium, Inc.
+ */
+
+#include <asm/stackframe.h>
+
+#define START_GUEST_STACK_ADJUST (8 * 11)
+	.set noreorder
+	.p2align 5
+LEAF(mipsvz_start_guest)
+	daddiu	sp, sp, -START_GUEST_STACK_ADJUST
+	sd	$16, (0 * 8)(sp)
+	sd	$17, (1 * 8)(sp)
+	sd	$18, (2 * 8)(sp)
+	sd	$19, (3 * 8)(sp)
+	sd	$20, (4 * 8)(sp)
+	sd	$21, (5 * 8)(sp)
+	sd	$22, (6 * 8)(sp)
+	sd	$23, (7 * 8)(sp)
+	/*	$24, t8 */
+	/*	$25, t9 */
+	/*	$26, K0 */
+	/*	$27, K1 */
+	sd	$28, (8 * 8)(sp) /* gp/current */
+	/*	$29, sp */
+	sd	$30, (9 * 8)(sp)
+	sd	$31, (10 * 8)(sp)
+
+	/* Save sp in the CPU specific slot */
+	set_mips_kvm_rootsp sp, v0
+
+	/*
+	 * Move to EXL with interrupts enabled.  When we ERET to Guest
+	 * mode, we can again process interrupts.
+	 */
+	mfc0	v0, CP0_STATUS
+	ori	v0, ST0_EXL | ST0_IE
+	mtc0	v0, CP0_STATUS
+
+	dli	v0,1
+	/* Set GuestMode (GM) bit */
+	mfc0	v1, CP0_GUESTCTL0
+	ins	v1, v0, MIPS_GUESTCTL0B_GM, 1
+	mtc0	v1, CP0_GUESTCTL0
+
+	/* Set the high order bit of CPU_ID_REG to indicate guest mode. */
+	dmfc0	v1, CPU_ID_REG
+	dins	v1, v0, 63, 1
+	dmtc0	v1, CPU_ID_REG
+
+	/* Load Guest register state */
+	ld	v0, KVM_VCPU_ARCH_EPC(a0)
+	ld	v1, KVM_VCPU_ARCH_HI(a0)
+	ld	ta0, KVM_VCPU_ARCH_LO(a0)
+	dmtc0	v0, CP0_EPC
+	mthi	v1
+	mtlo	ta0
+
+	.set	push
+	.set	noat
+	ld	$1, KVM_VCPU_ARCH_R1(a0)
+	ld	$2, KVM_VCPU_ARCH_R2(a0)
+	ld	$3, KVM_VCPU_ARCH_R3(a0)
+	ld	$5, KVM_VCPU_ARCH_R5(a0)
+	ld	$6, KVM_VCPU_ARCH_R6(a0)
+	ld	$7, KVM_VCPU_ARCH_R7(a0)
+	ld	$8, KVM_VCPU_ARCH_R8(a0)
+	ld	$9, KVM_VCPU_ARCH_R9(a0)
+	ld	$10, KVM_VCPU_ARCH_R10(a0)
+	ld	$11, KVM_VCPU_ARCH_R11(a0)
+	ld	$12, KVM_VCPU_ARCH_R12(a0)
+	ld	$13, KVM_VCPU_ARCH_R13(a0)
+	ld	$14, KVM_VCPU_ARCH_R14(a0)
+	ld	$15, KVM_VCPU_ARCH_R15(a0)
+	ld	$16, KVM_VCPU_ARCH_R16(a0)
+	ld	$17, KVM_VCPU_ARCH_R17(a0)
+	ld	$18, KVM_VCPU_ARCH_R18(a0)
+	ld	$19, KVM_VCPU_ARCH_R19(a0)
+	ld	$20, KVM_VCPU_ARCH_R20(a0)
+	ld	$21, KVM_VCPU_ARCH_R21(a0)
+	ld	$22, KVM_VCPU_ARCH_R22(a0)
+	ld	$23, KVM_VCPU_ARCH_R23(a0)
+	ld	$24, KVM_VCPU_ARCH_R24(a0)
+	ld	$25, KVM_VCPU_ARCH_R25(a0)
+	ld	$26, KVM_VCPU_ARCH_R26(a0)
+	ld	$27, KVM_VCPU_ARCH_R27(a0)
+	ld	$28, KVM_VCPU_ARCH_R28(a0)
+	ld	$29, KVM_VCPU_ARCH_R29(a0)
+	ld	$30, KVM_VCPU_ARCH_R30(a0)
+	ld	$31, KVM_VCPU_ARCH_R31(a0)
+	ld	$4, KVM_VCPU_ARCH_R4(a0) /* $4 == a0, do it last. */
+	eret
+	.set	pop
+
+	.p2align 7
+.Lmipsvz_exit_guest:
+FEXPORT(mipsvz_exit_guest)
+
+	/* Clear sp in the CPU specific slot */
+	CPU_ID_MFC0	k0, CPU_ID_REG
+	get_mips_kvm_rootsp
+	move	sp, k1
+	set_mips_kvm_rootsp zero, v0
+
+	ld	$16, (0 * 8)(sp)
+	ld	$17, (1 * 8)(sp)
+	ld	$18, (2 * 8)(sp)
+	ld	$19, (3 * 8)(sp)
+	ld	$20, (4 * 8)(sp)
+	ld	$21, (5 * 8)(sp)
+	ld	$22, (6 * 8)(sp)
+	ld	$23, (7 * 8)(sp)
+	/*	$24, t8 */
+	/*	$25, t9 */
+	/*	$26, K0 */
+	/*	$27, K1 */
+	ld	$28, (8 * 8)(sp) /* gp/current */
+	/*	$29, sp */
+	ld	$30, (9 * 8)(sp)
+	ld	$31, (10 * 8)(sp)
+
+	jr	ra
+	 daddiu	sp, sp, START_GUEST_STACK_ADJUST
+	END(mipsvz_start_guest)
+
+	.p2align 5
+	.set mips64r2
+
+LEAF(mipsvz_install_fpu)
+	ldc1	$f0, (KVM_VCPU_ARCH_FPR + (0 * 8))(a0)
+	ldc1	$f1, (KVM_VCPU_ARCH_FPR + (1 * 8))(a0)
+	ldc1	$f2, (KVM_VCPU_ARCH_FPR + (2 * 8))(a0)
+	ldc1	$f3, (KVM_VCPU_ARCH_FPR + (3 * 8))(a0)
+	ldc1	$f4, (KVM_VCPU_ARCH_FPR + (4 * 8))(a0)
+	ldc1	$f5, (KVM_VCPU_ARCH_FPR + (5 * 8))(a0)
+	ldc1	$f6, (KVM_VCPU_ARCH_FPR + (6 * 8))(a0)
+	ldc1	$f7, (KVM_VCPU_ARCH_FPR + (7 * 8))(a0)
+	ldc1	$f8, (KVM_VCPU_ARCH_FPR + (8 * 8))(a0)
+	ldc1	$f9, (KVM_VCPU_ARCH_FPR + (9 * 8))(a0)
+	ldc1	$f10, (KVM_VCPU_ARCH_FPR + (10 * 8))(a0)
+	ldc1	$f11, (KVM_VCPU_ARCH_FPR + (11 * 8))(a0)
+	ldc1	$f12, (KVM_VCPU_ARCH_FPR + (12 * 8))(a0)
+	ldc1	$f13, (KVM_VCPU_ARCH_FPR + (13 * 8))(a0)
+	ldc1	$f14, (KVM_VCPU_ARCH_FPR + (14 * 8))(a0)
+	ldc1	$f15, (KVM_VCPU_ARCH_FPR + (15 * 8))(a0)
+	ldc1	$f16, (KVM_VCPU_ARCH_FPR + (16 * 8))(a0)
+	ldc1	$f17, (KVM_VCPU_ARCH_FPR + (17 * 8))(a0)
+	ldc1	$f18, (KVM_VCPU_ARCH_FPR + (18 * 8))(a0)
+	ldc1	$f19, (KVM_VCPU_ARCH_FPR + (19 * 8))(a0)
+	ldc1	$f20, (KVM_VCPU_ARCH_FPR + (20 * 8))(a0)
+	ldc1	$f21, (KVM_VCPU_ARCH_FPR + (21 * 8))(a0)
+	ldc1	$f22, (KVM_VCPU_ARCH_FPR + (22 * 8))(a0)
+	ldc1	$f23, (KVM_VCPU_ARCH_FPR + (23 * 8))(a0)
+	ldc1	$f24, (KVM_VCPU_ARCH_FPR + (24 * 8))(a0)
+	ldc1	$f25, (KVM_VCPU_ARCH_FPR + (25 * 8))(a0)
+	ldc1	$f26, (KVM_VCPU_ARCH_FPR + (26 * 8))(a0)
+	ldc1	$f27, (KVM_VCPU_ARCH_FPR + (27 * 8))(a0)
+	ldc1	$f28, (KVM_VCPU_ARCH_FPR + (28 * 8))(a0)
+	ldc1	$f29, (KVM_VCPU_ARCH_FPR + (29 * 8))(a0)
+	ldc1	$f30, (KVM_VCPU_ARCH_FPR + (30 * 8))(a0)
+	ldc1	$f31, (KVM_VCPU_ARCH_FPR + (31 * 8))(a0)
+
+	lw	t0, KVM_VCPU_ARCH_FCSR(a0)
+	ctc1	t0, $31
+
+	lw	t0, KVM_VCPU_ARCH_FENR(a0)
+	ctc1	t0, $28
+
+	lw	t0, KVM_VCPU_ARCH_FEXR(a0)
+	ctc1	t0, $26
+
+	lw	t0, KVM_VCPU_ARCH_FCCR(a0)
+
+	jr	ra
+	 ctc1	t0, $25
+
+	END(mipsvz_install_fpu)
+
+LEAF(mipsvz_readout_fpu)
+	sdc1	$f0, (KVM_VCPU_ARCH_FPR + (0 * 8))(a0)
+	sdc1	$f1, (KVM_VCPU_ARCH_FPR + (1 * 8))(a0)
+	sdc1	$f2, (KVM_VCPU_ARCH_FPR + (2 * 8))(a0)
+	sdc1	$f3, (KVM_VCPU_ARCH_FPR + (3 * 8))(a0)
+	sdc1	$f4, (KVM_VCPU_ARCH_FPR + (4 * 8))(a0)
+	sdc1	$f5, (KVM_VCPU_ARCH_FPR + (5 * 8))(a0)
+	sdc1	$f6, (KVM_VCPU_ARCH_FPR + (6 * 8))(a0)
+	sdc1	$f7, (KVM_VCPU_ARCH_FPR + (7 * 8))(a0)
+	sdc1	$f8, (KVM_VCPU_ARCH_FPR + (8 * 8))(a0)
+	sdc1	$f9, (KVM_VCPU_ARCH_FPR + (9 * 8))(a0)
+	sdc1	$f10, (KVM_VCPU_ARCH_FPR + (10 * 8))(a0)
+	sdc1	$f11, (KVM_VCPU_ARCH_FPR + (11 * 8))(a0)
+	sdc1	$f12, (KVM_VCPU_ARCH_FPR + (12 * 8))(a0)
+	sdc1	$f13, (KVM_VCPU_ARCH_FPR + (13 * 8))(a0)
+	sdc1	$f14, (KVM_VCPU_ARCH_FPR + (14 * 8))(a0)
+	sdc1	$f15, (KVM_VCPU_ARCH_FPR + (15 * 8))(a0)
+	sdc1	$f16, (KVM_VCPU_ARCH_FPR + (16 * 8))(a0)
+	sdc1	$f17, (KVM_VCPU_ARCH_FPR + (17 * 8))(a0)
+	sdc1	$f18, (KVM_VCPU_ARCH_FPR + (18 * 8))(a0)
+	sdc1	$f19, (KVM_VCPU_ARCH_FPR + (19 * 8))(a0)
+	sdc1	$f20, (KVM_VCPU_ARCH_FPR + (20 * 8))(a0)
+	sdc1	$f21, (KVM_VCPU_ARCH_FPR + (21 * 8))(a0)
+	sdc1	$f22, (KVM_VCPU_ARCH_FPR + (22 * 8))(a0)
+	sdc1	$f23, (KVM_VCPU_ARCH_FPR + (23 * 8))(a0)
+	sdc1	$f24, (KVM_VCPU_ARCH_FPR + (24 * 8))(a0)
+	sdc1	$f25, (KVM_VCPU_ARCH_FPR + (25 * 8))(a0)
+	sdc1	$f26, (KVM_VCPU_ARCH_FPR + (26 * 8))(a0)
+	sdc1	$f27, (KVM_VCPU_ARCH_FPR + (27 * 8))(a0)
+	sdc1	$f28, (KVM_VCPU_ARCH_FPR + (28 * 8))(a0)
+	sdc1	$f29, (KVM_VCPU_ARCH_FPR + (29 * 8))(a0)
+	sdc1	$f30, (KVM_VCPU_ARCH_FPR + (30 * 8))(a0)
+	sdc1	$f31, (KVM_VCPU_ARCH_FPR + (31 * 8))(a0)
+
+	cfc1	t0, $31
+	sw	t0, KVM_VCPU_ARCH_FCSR(a0)
+
+	cfc1	t0, $28
+	sw	t0, KVM_VCPU_ARCH_FENR(a0)
+
+	cfc1	t0, $26
+	sw	t0, KVM_VCPU_ARCH_FEXR(a0)
+
+	cfc1	t0, $25
+	sw	t0, KVM_VCPU_ARCH_FCCR(a0)
+
+	cfc1	t0, $0
+
+	jr	ra
+	 sw	t0, KVM_VCPU_ARCH_FIR(a0)
+
+	END(mipsvz_readout_fpu)
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (28 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 29/31] mips/kvm: Add MIPSVZ support David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:44   ` Ralf Baechle
  2013-06-07 23:03 ` [PATCH 31/31] mips/kvm: Allow for upto 8 KVM vcpus per vm David Daney
                   ` (4 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

Also let CPU_CAVIUM_OCTEON select KVM.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/Kconfig      | 1 +
 arch/mips/kvm/Kconfig  | 9 +++++++++
 arch/mips/kvm/Makefile | 1 +
 3 files changed, 11 insertions(+)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 7a58ab9..16e3d22 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1426,6 +1426,7 @@ config CPU_CAVIUM_OCTEON
 	select LIBFDT
 	select USE_OF
 	select USB_EHCI_BIG_ENDIAN_MMIO
+	select HAVE_KVM
 	help
 	  The Cavium Octeon processor is a highly integrated chip containing
 	  many ethernet hardware widgets for networking tasks. The processor
diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig
index 95c0d22..32a5016 100644
--- a/arch/mips/kvm/Kconfig
+++ b/arch/mips/kvm/Kconfig
@@ -48,6 +48,15 @@ config KVM_MIPS_DEBUG_COP0_COUNTERS
 
 	  If unsure, say N.
 
+config KVM_MIPSVZ
+	bool "Kernel-based Virtual Machine (KVM) using hardware MIPS-VZ support"
+	depends on HAVE_KVM
+	select KVM
+	---help---
+	  Support for hosting Guest kernels on hardware with the
+	  MIPS-VZ hardware module.
+
+
 source drivers/vhost/Kconfig
 
 endif # VIRTUALIZATION
diff --git a/arch/mips/kvm/Makefile b/arch/mips/kvm/Makefile
index 3377197..595358f 100644
--- a/arch/mips/kvm/Makefile
+++ b/arch/mips/kvm/Makefile
@@ -13,3 +13,4 @@ kvm_mipste-objs		:= kvm_mips_emul.o kvm_locore.o kvm_mips_int.o \
 
 obj-$(CONFIG_KVM)		+= $(common-objs) kvm_mips.o
 obj-$(CONFIG_KVM_MIPSTE)	+= kvm_mipste.o
+obj-$(CONFIG_KVM_MIPSVZ)	+= kvm_mipsvz.o kvm_mipsvz_guest.o
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* [PATCH 31/31] mips/kvm: Allow for upto 8 KVM vcpus per vm.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (29 preceding siblings ...)
  2013-06-07 23:03 ` [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile David Daney
@ 2013-06-07 23:03 ` David Daney
  2013-06-16 11:37   ` Ralf Baechle
  2013-06-07 23:15   ` David Daney
                   ` (3 subsequent siblings)
  34 siblings, 1 reply; 84+ messages in thread
From: David Daney @ 2013-06-07 23:03 UTC (permalink / raw)
  To: linux-mips, ralf, kvm, Sanjay Lal; +Cc: linux-kernel, David Daney

From: David Daney <david.daney@cavium.com>

The mipsvz implementation allows for SMP, so let's be able to create
all those vcpus.

Signed-off-by: David Daney <david.daney@cavium.com>
---
 arch/mips/include/asm/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 9f209e1..0a5e218 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -20,7 +20,7 @@
 #include <linux/spinlock.h>
 
 
-#define KVM_MAX_VCPUS		1
+#define KVM_MAX_VCPUS		8
 #define KVM_USER_MEM_SLOTS	8
 /* memory slots that does not exposed to userspace */
 #define KVM_PRIVATE_MEM_SLOTS 	0
-- 
1.7.11.7


^ permalink raw reply related	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
@ 2013-06-07 23:15   ` David Daney
  0 siblings, 0 replies; 84+ messages in thread
From: David Daney @ 2013-06-07 23:15 UTC (permalink / raw)
  To: David Daney, kvm; +Cc: linux-mips, ralf, Sanjay Lal, linux-kernel, David Daney

I should also add that I will shortly send patches for the kvm tool 
required to drive this VM as well as a small set of patches that create 
a para-virtualized MIPS/Linux guest kernel.

The idea is that because there is no standard SMP linux system, we 
create a standard para-virtualized system that uses a handful of 
hypercalls, but mostly just uses virtio devices.  It has no emulated 
real hardware (no 8250 UART, no emulated legacy anything...)

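As a sketch of what the guest side could look like (the names here are 
illustrative; the register usage follows the hypercall handling in this 
series: call number in $2/v0, arguments in $4 and up, result returned 
in $2):

	/*
	 * Hedged sketch of a guest-side hypercall stub.  The .word is
	 * assumed to encode the MIPS-VZ HYPCALL instruction; call
	 * number 2 matches the get-hpt-frequency slot in the
	 * mipsvz_hypercall_handlers table.
	 */
	static inline unsigned long mipsvz_hypcall0(unsigned long nr)
	{
		register unsigned long v0 asm("$2") = nr;

		asm volatile(".word 0x42000028"	/* hypcall */
			     : "+r" (v0) : : "memory");
		return v0;
	}

	/* e.g.: unsigned int hpt_freq = mipsvz_hypcall0(2); */
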
David Daney


On 06/07/2013 04:03 PM, David Daney wrote:
> From: David Daney <david.daney@cavium.com>
>
> These patches take a somewhat different approach to MIPS
> virtualization via the MIPS-VZ extensions than the patches previously
> sent by Sanjay Lal.
>
> Several facts about the code:
>
> o Existing exception handlers are modified to hook in to KVM instead
>    of intercepting all exceptions via the EBase register, and then
>    chaining to real exception handlers.
>
> o Able to boot 64-bit SMP guests that use the FPU (I have booted 4-way
>    SMP 64-bit MIPS/Linux).
>
> o Additional overhead on every exception even when *no* vCPU is running.
>
> o Lower interrupt overhead, than the EBase interception method, when
>    vCPU *is* running.
>
> o This code is somewhat smaller than the existing trap/emulate
>    implementation (about 2100 lines vs. about 5300 lines)
>
> o Currently probably only usable on the OCTEON III CPU model, as some
>    MIPS-VZ implementation-defined behaviors were assumed to have the
>    OCTEON III behavior.
>
> Note: I think Ralf already has the 17/31 (MIPS: Quit exposing Kconfig
> symbols in uapi headers.) queued, but I also include it here.
>
> David Daney (31):
>    MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
>    MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ
>    mips/kvm: Fix 32-bitisms in kvm_locore.S
>    mips/kvm: Add casts to avoid pointer width mismatch build failures.
>    mips/kvm: Use generic cache flushing functions.
>    mips/kvm: Rename kvm_vcpu_arch.pc to  kvm_vcpu_arch.epc
>    mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
>    mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
>    mips/kvm: Factor trap-and-emulate support into a pluggable
>      implementation.
>    mips/kvm: Implement ioctls to get and set FPU registers.
>    MIPS: Rearrange branch.c so it can be used by kvm code.
>    MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
>    mips/kvm: Add accessors for MIPS VZ registers.
>    mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest
>      Mode.
>    mips/kvm: Exception handling to leave and reenter guest mode.
>    mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
>    MIPS: Quit exposing Kconfig symbols in uapi headers.
>    mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
>    mips/kvm: Add host definitions for MIPS VZ based host.
>    mips/kvm: Hook into TLB fault handlers.
>    mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
>    mips/kvm: Split get_new_mmu_context into two parts.
>    mips/kvm: Hook into CP unusable exception handler.
>    mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
>    mips/kvm: Add some asm-offsets constants used by MIPSVZ.
>    mips/kvm: Split up Kconfig and Makefile definitions in preperation
>      for MIPSVZ.
>    mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
>    mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
>    mips/kvm: Add MIPSVZ support.
>    mips/kvm: Enable MIPSVZ in Kconfig/Makefile
>    mips/kvm: Allow for upto 8 KVM vcpus per vm.
>
>   arch/mips/Kconfig                   |    1 +
>   arch/mips/include/asm/branch.h      |    7 +
>   arch/mips/include/asm/kvm_host.h    |  622 +-----------
>   arch/mips/include/asm/kvm_mips_te.h |  589 +++++++++++
>   arch/mips/include/asm/kvm_mips_vz.h |   29 +
>   arch/mips/include/asm/mipsregs.h    |  264 +++++
>   arch/mips/include/asm/mmu_context.h |   12 +-
>   arch/mips/include/asm/processor.h   |    6 +
>   arch/mips/include/asm/ptrace.h      |   36 +
>   arch/mips/include/asm/stackframe.h  |  150 ++-
>   arch/mips/include/asm/thread_info.h |    2 +
>   arch/mips/include/asm/uasm.h        |    2 +-
>   arch/mips/include/uapi/asm/inst.h   |   23 +-
>   arch/mips/include/uapi/asm/ptrace.h |   17 +-
>   arch/mips/kernel/asm-offsets.c      |  124 ++-
>   arch/mips/kernel/branch.c           |   63 +-
>   arch/mips/kernel/cpu-probe.c        |   34 +
>   arch/mips/kernel/genex.S            |    8 +
>   arch/mips/kernel/scall64-64.S       |   12 +
>   arch/mips/kernel/scall64-n32.S      |   12 +
>   arch/mips/kernel/traps.c            |   15 +-
>   arch/mips/kvm/Kconfig               |   23 +-
>   arch/mips/kvm/Makefile              |   15 +-
>   arch/mips/kvm/kvm_locore.S          |  980 +++++++++---------
>   arch/mips/kvm/kvm_mips.c            |  768 ++------------
>   arch/mips/kvm/kvm_mips_comm.h       |    1 +
>   arch/mips/kvm/kvm_mips_commpage.c   |    9 +-
>   arch/mips/kvm/kvm_mips_dyntrans.c   |    4 +-
>   arch/mips/kvm/kvm_mips_emul.c       |  312 +++---
>   arch/mips/kvm/kvm_mips_int.c        |   53 +-
>   arch/mips/kvm/kvm_mips_int.h        |    2 -
>   arch/mips/kvm/kvm_mips_stats.c      |    6 +-
>   arch/mips/kvm/kvm_mipsvz.c          | 1894 +++++++++++++++++++++++++++++++++++
>   arch/mips/kvm/kvm_mipsvz_guest.S    |  234 +++++
>   arch/mips/kvm/kvm_tlb.c             |  140 +--
>   arch/mips/kvm/kvm_trap_emul.c       |  932 +++++++++++++++--
>   arch/mips/mm/fault.c                |    8 +
>   arch/mips/mm/tlbex-fault.S          |    6 +
>   arch/mips/mm/tlbex.c                |   45 +-
>   39 files changed, 5299 insertions(+), 2161 deletions(-)
>   create mode 100644 arch/mips/include/asm/kvm_mips_te.h
>   create mode 100644 arch/mips/include/asm/kvm_mips_vz.h
>   create mode 100644 arch/mips/kvm/kvm_mipsvz.c
>   create mode 100644 arch/mips/kvm/kvm_mipsvz_guest.S
>



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 20/31] mips/kvm: Hook into TLB fault handlers.
  2013-06-07 23:03 ` [PATCH 20/31] mips/kvm: Hook into TLB fault handlers David Daney
@ 2013-06-07 23:34   ` Sergei Shtylyov
  2013-06-08  0:15       ` David Daney
  2013-06-16  8:51   ` Ralf Baechle
  1 sibling, 1 reply; 84+ messages in thread
From: Sergei Shtylyov @ 2013-06-07 23:34 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, ralf, kvm, Sanjay Lal, linux-kernel, David Daney

Hello.

On 06/08/2013 03:03 AM, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
>
> If the CPU is operating in guest mode when a TLB related excpetion
> occurs, give KVM a chance to do emulation.
>
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>   arch/mips/mm/fault.c       | 8 ++++++++
>   arch/mips/mm/tlbex-fault.S | 6 ++++++
>   2 files changed, 14 insertions(+)
>
> diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
> index 0fead53..9391da49 100644
> --- a/arch/mips/mm/fault.c
> +++ b/arch/mips/mm/fault.c
[...]
> @@ -50,6 +51,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, unsigned long writ
>   	       field, regs->cp0_epc);
>   #endif
>   
> +#ifdef CONFIG_KVM_MIPSVZ
> +	if (test_tsk_thread_flag(current, TIF_GUESTMODE)) {
> +		if (mipsvz_page_fault(regs, write, address))

    Any reason not to collapse these into single *if*?

> +			return;
> +	}
> +#endif
> +
>


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 20/31] mips/kvm: Hook into TLB fault handlers.
@ 2013-06-08  0:15       ` David Daney
  0 siblings, 0 replies; 84+ messages in thread
From: David Daney @ 2013-06-08  0:15 UTC (permalink / raw)
  To: Sergei Shtylyov
  Cc: David Daney, linux-mips, ralf, kvm, Sanjay Lal, linux-kernel,
	David Daney

On 06/07/2013 04:34 PM, Sergei Shtylyov wrote:
> Hello.
>
> On 06/08/2013 03:03 AM, David Daney wrote:
>
>> From: David Daney <david.daney@cavium.com>
>>
>> If the CPU is operating in guest mode when a TLB-related exception
>> occurs, give KVM a chance to do emulation.
>>
>> Signed-off-by: David Daney <david.daney@cavium.com>
>> ---
>>   arch/mips/mm/fault.c       | 8 ++++++++
>>   arch/mips/mm/tlbex-fault.S | 6 ++++++
>>   2 files changed, 14 insertions(+)
>>
>> diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
>> index 0fead53..9391da49 100644
>> --- a/arch/mips/mm/fault.c
>> +++ b/arch/mips/mm/fault.c
> [...]
>> @@ -50,6 +51,13 @@ asmlinkage void __kprobes do_page_fault(struct
>> pt_regs *regs, unsigned long writ
>>              field, regs->cp0_epc);
>>   #endif
>> +#ifdef CONFIG_KVM_MIPSVZ
>> +    if (test_tsk_thread_flag(current, TIF_GUESTMODE)) {
>> +        if (mipsvz_page_fault(regs, write, address))
>
>     Any reason not to collapse these into a single *if*?
>

Collapsing them would make the conditional call to mipsvz_page_fault() less obvious.

Certainly the same semantics can be achieved several different ways.
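
For illustration, the collapsed form would be something like this (same
semantics, sketched purely from the hunk quoted above):

#ifdef CONFIG_KVM_MIPSVZ
	if (test_tsk_thread_flag(current, TIF_GUESTMODE) &&
	    mipsvz_page_fault(regs, write, address))
		return;
#endif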

David Daney


>> +            return;
>> +    }
>> +#endif
>> +
>>
>
>



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-07 23:15   ` David Daney
  (?)
@ 2013-06-09  7:31   ` Gleb Natapov
  2013-06-09 23:23     ` David Daney
  -1 siblings, 1 reply; 84+ messages in thread
From: Gleb Natapov @ 2013-06-09  7:31 UTC (permalink / raw)
  To: David Daney
  Cc: David Daney, kvm, linux-mips, ralf, Sanjay Lal, linux-kernel,
	David Daney

On Fri, Jun 07, 2013 at 04:15:00PM -0700, David Daney wrote:
> I should also add that I will shortly send patches for the kvm tool
> required to drive this VM as well as a small set of patches that
> create a para-virtualized MIPS/Linux guest kernel.
> 
> The idea is that because there is no standard SMP linux system, we
> create a standard para-virtualized system that uses a handful of
> hypercalls, but mostly just uses virtio devices.  It has no emulated
> real hardware (no 8250 UART, no emulated legacy anything...)
> 
Virtualization is useful for running legacy code. Why dismiss support
for non-PV guests so easily? How different are MIPS SMP systems? What
about running non-PV UP systems?

> David Daney
> 
> 
> On 06/07/2013 04:03 PM, David Daney wrote:
> >From: David Daney <david.daney@cavium.com>
> >
> >These patches take a somewhat different approach to MIPS
> >virtualization via the MIPS-VZ extensions than the patches previously
> >sent by Sanjay Lal.
> >
> >Several facts about the code:
> >
> >o Existing exception handlers are modified to hook in to KVM instead
> >   of intercepting all exceptions via the EBase register, and then
> >   chaining to real exception handlers.
> >
> >o Able to boot 64-bit SMP guests that use the FPU (I have booted 4-way
> >   SMP 64-bit MIPS/Linux).
> >
> >o Additional overhead on every exception even when *no* vCPU is running.
> >
> >o Lower interrupt overhead, than the EBase interception method, when
> >   vCPU *is* running.
> >
> >o This code is somewhat smaller than the existing trap/emulate
> >   implementation (about 2100 lines vs. about 5300 lines)
> >
> >o Currently probably only usable on the OCTEON III CPU model, as some
> >   MIPS-VZ implementation-defined behaviors were assumed to have the
> >   OCTEON III behavior.
> >
> >Note: I think Ralf already has the 17/31 (MIPS: Quit exposing Kconfig
> >symbols in uapi headers.) queued, but I also include it here.
> >
> >David Daney (31):
> >   MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
> >   MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ
> >   mips/kvm: Fix 32-bitisms in kvm_locore.S
> >   mips/kvm: Add casts to avoid pointer width mismatch build failures.
> >   mips/kvm: Use generic cache flushing functions.
> >   mips/kvm: Rename kvm_vcpu_arch.pc to  kvm_vcpu_arch.epc
> >   mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
> >   mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
> >   mips/kvm: Factor trap-and-emulate support into a pluggable
> >     implementation.
> >   mips/kvm: Implement ioctls to get and set FPU registers.
> >   MIPS: Rearrange branch.c so it can be used by kvm code.
> >   MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
> >   mips/kvm: Add accessors for MIPS VZ registers.
> >   mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest
> >     Mode.
> >   mips/kvm: Exception handling to leave and reenter guest mode.
> >   mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
> >   MIPS: Quit exposing Kconfig symbols in uapi headers.
> >   mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
> >   mips/kvm: Add host definitions for MIPS VZ based host.
> >   mips/kvm: Hook into TLB fault handlers.
> >   mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
> >   mips/kvm: Split get_new_mmu_context into two parts.
> >   mips/kvm: Hook into CP unusable exception handler.
> >   mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
> >   mips/kvm: Add some asm-offsets constants used by MIPSVZ.
> >   mips/kvm: Split up Kconfig and Makefile definitions in preperation
> >     for MIPSVZ.
> >   mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
> >   mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
> >   mips/kvm: Add MIPSVZ support.
> >   mips/kvm: Enable MIPSVZ in Kconfig/Makefile
> >   mips/kvm: Allow for upto 8 KVM vcpus per vm.
> >
> >  arch/mips/Kconfig                   |    1 +
> >  arch/mips/include/asm/branch.h      |    7 +
> >  arch/mips/include/asm/kvm_host.h    |  622 +-----------
> >  arch/mips/include/asm/kvm_mips_te.h |  589 +++++++++++
> >  arch/mips/include/asm/kvm_mips_vz.h |   29 +
> >  arch/mips/include/asm/mipsregs.h    |  264 +++++
> >  arch/mips/include/asm/mmu_context.h |   12 +-
> >  arch/mips/include/asm/processor.h   |    6 +
> >  arch/mips/include/asm/ptrace.h      |   36 +
> >  arch/mips/include/asm/stackframe.h  |  150 ++-
> >  arch/mips/include/asm/thread_info.h |    2 +
> >  arch/mips/include/asm/uasm.h        |    2 +-
> >  arch/mips/include/uapi/asm/inst.h   |   23 +-
> >  arch/mips/include/uapi/asm/ptrace.h |   17 +-
> >  arch/mips/kernel/asm-offsets.c      |  124 ++-
> >  arch/mips/kernel/branch.c           |   63 +-
> >  arch/mips/kernel/cpu-probe.c        |   34 +
> >  arch/mips/kernel/genex.S            |    8 +
> >  arch/mips/kernel/scall64-64.S       |   12 +
> >  arch/mips/kernel/scall64-n32.S      |   12 +
> >  arch/mips/kernel/traps.c            |   15 +-
> >  arch/mips/kvm/Kconfig               |   23 +-
> >  arch/mips/kvm/Makefile              |   15 +-
> >  arch/mips/kvm/kvm_locore.S          |  980 +++++++++---------
> >  arch/mips/kvm/kvm_mips.c            |  768 ++------------
> >  arch/mips/kvm/kvm_mips_comm.h       |    1 +
> >  arch/mips/kvm/kvm_mips_commpage.c   |    9 +-
> >  arch/mips/kvm/kvm_mips_dyntrans.c   |    4 +-
> >  arch/mips/kvm/kvm_mips_emul.c       |  312 +++---
> >  arch/mips/kvm/kvm_mips_int.c        |   53 +-
> >  arch/mips/kvm/kvm_mips_int.h        |    2 -
> >  arch/mips/kvm/kvm_mips_stats.c      |    6 +-
> >  arch/mips/kvm/kvm_mipsvz.c          | 1894 +++++++++++++++++++++++++++++++++++
> >  arch/mips/kvm/kvm_mipsvz_guest.S    |  234 +++++
> >  arch/mips/kvm/kvm_tlb.c             |  140 +--
> >  arch/mips/kvm/kvm_trap_emul.c       |  932 +++++++++++++++--
> >  arch/mips/mm/fault.c                |    8 +
> >  arch/mips/mm/tlbex-fault.S          |    6 +
> >  arch/mips/mm/tlbex.c                |   45 +-
> >  39 files changed, 5299 insertions(+), 2161 deletions(-)
> >  create mode 100644 arch/mips/include/asm/kvm_mips_te.h
> >  create mode 100644 arch/mips/include/asm/kvm_mips_vz.h
> >  create mode 100644 arch/mips/kvm/kvm_mipsvz.c
> >  create mode 100644 arch/mips/kvm/kvm_mipsvz_guest.S
> >
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

--
			Gleb.

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09  7:31   ` Gleb Natapov
@ 2013-06-09 23:23     ` David Daney
  2013-06-09 23:40       ` Maciej W. Rozycki
                         ` (3 more replies)
  0 siblings, 4 replies; 84+ messages in thread
From: David Daney @ 2013-06-09 23:23 UTC (permalink / raw)
  To: Gleb Natapov
  Cc: David Daney, David Daney, kvm, linux-mips, ralf, Sanjay Lal,
	linux-kernel, David Daney

On 06/09/2013 12:31 AM, Gleb Natapov wrote:
> On Fri, Jun 07, 2013 at 04:15:00PM -0700, David Daney wrote:
>> I should also add that I will shortly send patches for the kvm tool
>> required to drive this VM as well as a small set of patches that
>> create a para-virtualized MIPS/Linux guest kernel.
>>
>> The idea is that because there is no standard SMP linux system, we
>> create a standard para-virtualized system that uses a handful of
>> hypercalls, but mostly just uses virtio devices.  It has no emulated
>> real hardware (no 8250 UART, no emulated legacy anything...)
>>
> Virtualization is useful for running legacy code. Why dismiss support
> for non-PV guests so easily?

Just because we create standard PV system devices doesn't preclude
emulating real hardware.  In fact Sanjay Lal's work includes QEMU
support for doing just this for a MIPS Malta board.  I just wanted a
very simple system I could implement with the kvm tool in a couple of
days, so that is what I initially did.

The problem is that almost nobody has real Malta boards; they are really
only of interest because QEMU implements a virtual Malta board.

Personally, I see the most interesting use cases of MIPS KVM being a
deployment platform for new services, so legacy support is not so
important to me.  That doesn't mean that other people wouldn't want some
sort of legacy support.  The problem with 'legacy' on MIPS is that there
are hundreds of legacies to choose from (old SGI and DEC hardware,
various network hardware from many different vendors, etc.).  Which
would you choose?

>   How different are MIPS SMP systems?

o Old SGI heavy metal (several different system architectures).

o Cavium OCTEON SMP SoCs.

o Broadcom (several flavors) SoCs

o Loongson


Come to think of it, emulating SGI hardware might be an interesting
case.  There may be old IRIX systems and applications that are running
out of working real hardware.  Some of those systems take up a whole
room and draw a lot of power.  They might run faster and at much lower
power consumption on a modern 48-way SMP SoC-based system.

>   What
> > about running non-PV UP systems?

See above.  I think this is what Sanjay Lal is doing.

>
>> David Daney
>>
>>
>> On 06/07/2013 04:03 PM, David Daney wrote:
>>> From: David Daney <david.daney@cavium.com>
>>>
>>> These patches take a somewhat different approach to MIPS
>>> virtualization via the MIPS-VZ extensions than the patches previously
>>> sent by Sanjay Lal.
>>>
>>> Several facts about the code:
>>>
>>> o Existing exception handlers are modified to hook in to KVM instead
>>>    of intercepting all exceptions via the EBase register, and then
>>>    chaining to real exception handlers.
>>>
>>> o Able to boot 64-bit SMP guests that use the FPU (I have booted 4-way
>>>    SMP 64-bit MIPS/Linux).
>>>
>>> o Additional overhead on every exception even when *no* vCPU is running.
>>>
>>> o Lower interrupt overhead, than the EBase interception method, when
>>>    vCPU *is* running.
>>>
>>> o This code is somewhat smaller than the existing trap/emulate
>>>    implementation (about 2100 lines vs. about 5300 lines)
>>>
>>> o Currently probably only usable on the OCTEON III CPU model, as some
>>>    MIPS-VZ implementation-defined behaviors were assumed to have the
>>>    OCTEON III behavior.
>>>
>>> Note: I think Ralf already has the 17/31 (MIPS: Quit exposing Kconfig
>>> symbols in uapi headers.) queued, but I also include it here.
>>>
>>> David Daney (31):
>>>    MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
>>>    MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ
>>>    mips/kvm: Fix 32-bitisms in kvm_locore.S
>>>    mips/kvm: Add casts to avoid pointer width mismatch build failures.
>>>    mips/kvm: Use generic cache flushing functions.
>>>    mips/kvm: Rename kvm_vcpu_arch.pc to  kvm_vcpu_arch.epc
>>>    mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
>>>    mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
>>>    mips/kvm: Factor trap-and-emulate support into a pluggable
>>>      implementation.
>>>    mips/kvm: Implement ioctls to get and set FPU registers.
>>>    MIPS: Rearrange branch.c so it can be used by kvm code.
>>>    MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
>>>    mips/kvm: Add accessors for MIPS VZ registers.
>>>    mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest
>>>      Mode.
>>>    mips/kvm: Exception handling to leave and reenter guest mode.
>>>    mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
>>>    MIPS: Quit exposing Kconfig symbols in uapi headers.
>>>    mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
>>>    mips/kvm: Add host definitions for MIPS VZ based host.
>>>    mips/kvm: Hook into TLB fault handlers.
>>>    mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
>>>    mips/kvm: Split get_new_mmu_context into two parts.
>>>    mips/kvm: Hook into CP unusable exception handler.
>>>    mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
>>>    mips/kvm: Add some asm-offsets constants used by MIPSVZ.
>>>    mips/kvm: Split up Kconfig and Makefile definitions in preperation
>>>      for MIPSVZ.
>>>    mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
>>>    mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
>>>    mips/kvm: Add MIPSVZ support.
>>>    mips/kvm: Enable MIPSVZ in Kconfig/Makefile
>>>    mips/kvm: Allow for upto 8 KVM vcpus per vm.
>>>
>>>   arch/mips/Kconfig                   |    1 +
>>>   arch/mips/include/asm/branch.h      |    7 +
>>>   arch/mips/include/asm/kvm_host.h    |  622 +-----------
>>>   arch/mips/include/asm/kvm_mips_te.h |  589 +++++++++++
>>>   arch/mips/include/asm/kvm_mips_vz.h |   29 +
>>>   arch/mips/include/asm/mipsregs.h    |  264 +++++
>>>   arch/mips/include/asm/mmu_context.h |   12 +-
>>>   arch/mips/include/asm/processor.h   |    6 +
>>>   arch/mips/include/asm/ptrace.h      |   36 +
>>>   arch/mips/include/asm/stackframe.h  |  150 ++-
>>>   arch/mips/include/asm/thread_info.h |    2 +
>>>   arch/mips/include/asm/uasm.h        |    2 +-
>>>   arch/mips/include/uapi/asm/inst.h   |   23 +-
>>>   arch/mips/include/uapi/asm/ptrace.h |   17 +-
>>>   arch/mips/kernel/asm-offsets.c      |  124 ++-
>>>   arch/mips/kernel/branch.c           |   63 +-
>>>   arch/mips/kernel/cpu-probe.c        |   34 +
>>>   arch/mips/kernel/genex.S            |    8 +
>>>   arch/mips/kernel/scall64-64.S       |   12 +
>>>   arch/mips/kernel/scall64-n32.S      |   12 +
>>>   arch/mips/kernel/traps.c            |   15 +-
>>>   arch/mips/kvm/Kconfig               |   23 +-
>>>   arch/mips/kvm/Makefile              |   15 +-
>>>   arch/mips/kvm/kvm_locore.S          |  980 +++++++++---------
>>>   arch/mips/kvm/kvm_mips.c            |  768 ++------------
>>>   arch/mips/kvm/kvm_mips_comm.h       |    1 +
>>>   arch/mips/kvm/kvm_mips_commpage.c   |    9 +-
>>>   arch/mips/kvm/kvm_mips_dyntrans.c   |    4 +-
>>>   arch/mips/kvm/kvm_mips_emul.c       |  312 +++---
>>>   arch/mips/kvm/kvm_mips_int.c        |   53 +-
>>>   arch/mips/kvm/kvm_mips_int.h        |    2 -
>>>   arch/mips/kvm/kvm_mips_stats.c      |    6 +-
>>>   arch/mips/kvm/kvm_mipsvz.c          | 1894 +++++++++++++++++++++++++++++++++++
>>>   arch/mips/kvm/kvm_mipsvz_guest.S    |  234 +++++
>>>   arch/mips/kvm/kvm_tlb.c             |  140 +--
>>>   arch/mips/kvm/kvm_trap_emul.c       |  932 +++++++++++++++--
>>>   arch/mips/mm/fault.c                |    8 +
>>>   arch/mips/mm/tlbex-fault.S          |    6 +
>>>   arch/mips/mm/tlbex.c                |   45 +-
>>>   39 files changed, 5299 insertions(+), 2161 deletions(-)
>>>   create mode 100644 arch/mips/include/asm/kvm_mips_te.h
>>>   create mode 100644 arch/mips/include/asm/kvm_mips_vz.h
>>>   create mode 100644 arch/mips/kvm/kvm_mipsvz.c
>>>   create mode 100644 arch/mips/kvm/kvm_mipsvz_guest.S
>>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/
> --
> 			Gleb.
>


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09 23:23     ` David Daney
@ 2013-06-09 23:40       ` Maciej W. Rozycki
  2013-06-10 11:16         ` Ralf Baechle
  2013-06-10  6:18       ` Gleb Natapov
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 84+ messages in thread
From: Maciej W. Rozycki @ 2013-06-09 23:40 UTC (permalink / raw)
  To: David Daney
  Cc: Gleb Natapov, David Daney, David Daney, kvm, linux-mips,
	Ralf Baechle, Sanjay Lal, linux-kernel, David Daney

On Sun, 9 Jun 2013, David Daney wrote:

> > >   How different are MIPS SMP systems?
> 
> o Old SGI heavy metal (several different system architectures).
> 
> o Cavium OCTEON SMP SoCs.
> 
> o Broadcom (several flavors) SoCs
> 
> o Loongson

o Old DEC hardware (DECsystem 58x0, R3000-based).

o Malta-based MIPS Technologies CMP solutions (1004K, 1074K, interAptiv).

  Maciej

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09 23:23     ` David Daney
  2013-06-09 23:40       ` Maciej W. Rozycki
@ 2013-06-10  6:18       ` Gleb Natapov
  2013-06-10 16:37       ` Sanjay Lal
  2013-06-16 11:59       ` Ralf Baechle
  3 siblings, 0 replies; 84+ messages in thread
From: Gleb Natapov @ 2013-06-10  6:18 UTC (permalink / raw)
  To: David Daney
  Cc: David Daney, David Daney, kvm, linux-mips, ralf, Sanjay Lal,
	linux-kernel, David Daney

On Sun, Jun 09, 2013 at 04:23:51PM -0700, David Daney wrote:
> On 06/09/2013 12:31 AM, Gleb Natapov wrote:
> >On Fri, Jun 07, 2013 at 04:15:00PM -0700, David Daney wrote:
> >>I should also add that I will shortly send patches for the kvm tool
> >>required to drive this VM as well as a small set of patches that
> >>create a para-virtualized MIPS/Linux guest kernel.
> >>
> >>The idea is that because there is no standard SMP linux system, we
> >>create a standard para-virtualized system that uses a handful of
> >>hypercalls, but mostly just uses virtio devices.  It has no emulated
> >>real hardware (no 8250 UART, no emulated legacy anything...)
> >>
> >Virtualization is useful for running legacy code. Why dismiss support
> >for non-PV guests so easily?
> 
> Just because we create standard PV system devices doesn't preclude
> emulating real hardware.  In fact Sanjay Lal's work includes QEMU
> support for doing just this for a MIPS Malta board.  I just wanted a
> very simple system I could implement with the kvm tool in a couple
> of days, so that is what I initially did.
> 
That makes sense. From your wording I mistakenly understood that something
in the proposed patches requires PV to run a guest.

--
			Gleb.

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09 23:40       ` Maciej W. Rozycki
@ 2013-06-10 11:16         ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-10 11:16 UTC (permalink / raw)
  To: Maciej W. Rozycki
  Cc: David Daney, Gleb Natapov, David Daney, David Daney, kvm,
	linux-mips, Sanjay Lal, linux-kernel, David Daney

On Mon, Jun 10, 2013 at 12:40:42AM +0100, Maciej W. Rozycki wrote:

> > >   How different are MIPS SMP systems?
> > 
> > o Old SGI heavy metal (several different system architectures).
> > 
> > o Cavium OCTEON SMP SoCs.
> > 
> > o Broadcom (several flavors) SoCs
> > 
> > o Loongson
> 
> o Old DEC hardware (DECsystem 58x0, R3000-based).
> 
> o Malta-based MIPS Technologies CMP solutions (1004K, 1074K, interAptiv).

And more.  It's fairly accurate that MIPS SMP systems tend to have little
of their system architecture in common beyond the underlying processor
architecture, and everything else should be treated as a lucky coincidence.

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09 23:23     ` David Daney
  2013-06-09 23:40       ` Maciej W. Rozycki
  2013-06-10  6:18       ` Gleb Natapov
@ 2013-06-10 16:37       ` Sanjay Lal
  2013-06-16 11:59       ` Ralf Baechle
  3 siblings, 0 replies; 84+ messages in thread
From: Sanjay Lal @ 2013-06-10 16:37 UTC (permalink / raw)
  To: David Daney
  Cc: Gleb Natapov, David Daney, David Daney, kvm, linux-mips, ralf,
	linux-kernel, David Daney


On Jun 9, 2013, at 4:23 PM, David Daney wrote:

> On 06/09/2013 12:31 AM, Gleb Natapov wrote:
>> On Fri, Jun 07, 2013 at 04:15:00PM -0700, David Daney wrote:
>>> I should also add that I will shortly send patches for the kvm tool
>>> required to drive this VM as well as a small set of patches that
>>> create a para-virtualized MIPS/Linux guest kernel.
>>> 
>>> The idea is that because there is no standard SMP linux system, we
>>> create a standard para-virtualized system that uses a handful of
>>> hypercalls, but mostly just uses virtio devices.  It has no emulated
>>> real hardware (no 8250 UART, no emulated legacy anything...)
>>> 
>> Virtualization is useful for running legacy code. Why dismiss support
>> for non-PV guests so easily?
> 
> Just because we create standard PV system devices doesn't preclude emulating real hardware.  In fact Sanjay Lal's work includes QEMU support for doing just this for a MIPS Malta board.  I just wanted a very simple system I could implement with the kvm tool in a couple of days, so that is what I initially did.
> 
> The problem is that almost nobody has real Malta boards; they are really only of interest because QEMU implements a virtual Malta board.
> 
> Personally, I see the most interesting use cases of MIPS KVM being a deployment platform for new services, so legacy support is not so important to me.  That doesn't mean that other people wouldn't want some sort of legacy support.  The problem with 'legacy' on MIPS is that there are hundreds of legacies to choose from (old SGI and DEC hardware, various network hardware from many different vendors, etc.).  Which would you choose?
> 
>>  How different are MIPS SMP systems?
> 
> o Old SGI heavy metal (several different system architectures).
> 
> o Cavium OCTEON SMP SoCs.
> 
> o Broadcom (several flavors) SoCs
> 
> o Loongson
> 
> 
> Come to think of it, emulating SGI hardware might be an interesting case.  There may be old IRIX systems and applications that are running out of working real hardware.  Some of those systems take up a whole room and draw a lot of power.  They might run faster and at much lower power consumption on a modern 48-way SMP SoC-based system.
> 
>>  What
>> about running non-PV UP systems?
> 
> See above.  I think this is what Sanjay Lal is doing.


The KVM implementation from MIPS (currently in mainline) supports UP systems in trap-and-emulate mode.  The patch set I posted earlier adding VZ support also supports SMP.  We leverage the Malta board emulation in QEMU to offer full non-PV virtualization:

UP system: Malta board with a MIPS 24K processor
SMP system: Malta board with a 1074K CMP processor cluster with a GIC.

When it comes to PV/non-PV support, I see the two implementations as complementary.  If people want full legacy system emulation without any kernel modifications, then they can run the full QEMU/KVM stack, while people interested in pure PV solutions can run the lkvm version.

Regards
Sanjay







^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (31 preceding siblings ...)
  2013-06-07 23:15   ` David Daney
@ 2013-06-10 16:43 ` Sanjay Lal
  2013-06-10 17:14   ` David Daney
  2013-06-14 11:12 ` Ralf Baechle
  2013-06-19  9:06 ` Ralf Baechle
  34 siblings, 1 reply; 84+ messages in thread
From: Sanjay Lal @ 2013-06-10 16:43 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, ralf, kvm, linux-kernel, David Daney


On Jun 7, 2013, at 4:03 PM, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> These patches take a somewhat different approach to MIPS
> virtualization via the MIPS-VZ extensions than the patches previously
> sent by Sanjay Lal.
> 
> Several facts about the code:
> 
> 
> o Currently probably only usable on the OCTEON III CPU model, as some
>  MIPS-VZ implementation-defined behaviors were assumed to have the
>  OCTEON III behavior.
> 


I've only briefly gone over the patches, but I was wondering if the Cavium implementation has support for GuestIDs, which are optional in the VZ-ASE?

Regards
Sanjay



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-10 16:43 ` Sanjay Lal
@ 2013-06-10 17:14   ` David Daney
  0 siblings, 0 replies; 84+ messages in thread
From: David Daney @ 2013-06-10 17:14 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: linux-mips, ralf, kvm, linux-kernel, David Daney

On 06/10/2013 09:43 AM, Sanjay Lal wrote:
>
> On Jun 7, 2013, at 4:03 PM, David Daney wrote:
>
>> From: David Daney <david.daney@cavium.com>
>>
>> These patches take a somewhat different approach to MIPS
>> virtualization via the MIPS-VZ extensions than the patches previously
>> sent by Sanjay Lal.
>>
>> Several facts about the code:
>>
>>
>> o Currently probably only usable on the OCTEON III CPU model, as some
>>   MIPS-VZ implementation-defined behaviors were assumed to have the
>>   OCTEON III behavior.
>>
>
>
> I've only briefly gone over the patches, but I was wondering if the Cavium implementation has support for GuestIDs, which are optional in the VZ-ASE?
>

No, OCTEON III does not support this optional behavior.  For the most
part this only impacts TLB management.  I think in the context of KVM,
you cannot leave a foreign guest's TLB entries present in the Guest TLB
anyhow, so the feature is of little use.
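
For illustration, the consequence for the host looks roughly like this
(a sketch, not code from the patches; the owner tracking and the flush
helper are hypothetical names, the types come from <linux/kvm_host.h>):

	/* Without GuestIDs the Guest TLB cannot be tagged per-VM, so
	 * switching to a vCPU of a different VM implies wiping it. */
	static struct kvm *guest_tlb_owner;	/* per-CPU in practice */

	static void vz_activate_guest_context(struct kvm_vcpu *vcpu)
	{
		if (guest_tlb_owner != vcpu->kvm) {
			mipsvz_flush_guest_tlb();	/* hypothetical */
			guest_tlb_owner = vcpu->kvm;
		}
	}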

Since MIPS TLBs are managed by software, it is valid for a guest to
populate the TLB in any way it desires.  To have the hypervisor (KVM)
come and randomly invalidate the TLB via the GuestID mechanism would
both be detectable by the guest and potentially make the guest operate
incorrectly.

David Daney

> Regards
> Sanjay
>
>
>
>


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (32 preceding siblings ...)
  2013-06-10 16:43 ` Sanjay Lal
@ 2013-06-14 11:12 ` Ralf Baechle
  2013-06-19  9:06 ` Ralf Baechle
  34 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 11:12 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:04PM -0700, David Daney wrote:

> Subject: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the
>  MIPS-VZ extensions.
> 
> From: David Daney <david.daney@cavium.com>
> 
> These patches take a somewhat different approach to MIPS
> virtualization via the MIPS-VZ extensions than the patches previously
> sent by Sanjay Lal.
> 
> Several facts about the code:
> 
> o Existing exception handlers are modified to hook in to KVM instead
>   of intercepting all exceptions via the EBase register, and then
>   chaining to real exception handlers.
> 
> o Able to boot 64-bit SMP guests that use the FPU (I have booted 4-way
>   SMP 64-bit MIPS/Linux).
> 
> o Additional overhead on every exception even when *no* vCPU is running.
> 
> o Lower interrupt overhead, than the EBase interception method, when
>   vCPU *is* running.
> 
> o This code is somewhat smaller than the existing trap/emulate
>   implementation (about 2100 lines vs. about 5300 lines)
> 
> o Currently probably only usable on the OCTEON III CPU model, as some
>   MIPS-VZ implementation-defined behaviors were assumed to have the
>   OCTEON III behavior.
> 
> Note: I think Ralf already has the 17/31 (MIPS: Quit exposing Kconfig
> symbols in uapi headers.) queued, but I also include it here.

Yes; as the references to CONFIG_* symbols in UAPI were a bug, I've
already merged this patch for 3.10 as 8f657933a3c2086d4731350c98f91a990783c0d3
[MIPS: Quit exposing Kconfig symbols in uapi headers.]

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers.
  2013-06-07 23:03 ` [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers David Daney
@ 2013-06-14 11:12   ` Ralf Baechle
  2013-06-15 17:13   ` Paolo Bonzini
  1 sibling, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 11:12 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

So this one can be dropped.

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
  2013-06-07 23:03 ` [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public David Daney
@ 2013-06-14 11:41   ` Ralf Baechle
  2013-06-14 13:11     ` Ralf Baechle
  0 siblings, 1 reply; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 11:41 UTC (permalink / raw)
  To: James Hogan; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:05PM -0700, David Daney wrote:
> Date:   Fri,  7 Jun 2013 16:03:05 -0700
> From: David Daney <ddaney.cavm@gmail.com>
> To: linux-mips@linux-mips.org, ralf@linux-mips.org, kvm@vger.kernel.org,
>  Sanjay Lal <sanjayl@kymasys.com>
> Cc: linux-kernel@vger.kernel.org, David Daney <ddaney@caviumnetworks.com>
> Subject: [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make
>  it public.
> 
> From: David Daney <ddaney@caviumnetworks.com>

I'd just like to add a note about compatibility.  Code optimized for
LL/SC-less CPUs has made use of the fact that exception handlers will
clobber k0/k1 to a non-zero value.  On a MIPS II or better CPU a
branch-likely instruction could be used to atomically test k0/k1 and,
depending on the test, execute a store instruction like:

	.set	noreorder
	beqzl	$k0, ok			# k0 still zero: no exception handler
					# ran since we cleared it
	sw	$reg, offset($reg)	# executes only in the taken
					# branch-likely delay slot
	/* if we get here, our SC emulation has failed  */
ok:	...

In particular Sony had elected to do this for the R5900 (after I
explained the concept to somebody and told them it'd be a _bad_ idea
for compatibility reasons).  Bad ideas are infectious, so I'm sure
others have used it, too.

I don't think this should stop your patch, nor should we add any
kludges to support such cowboy-style hacks unless this turns out to be
an actual problem.  But I wanted to mention and document the issue;
maybe this should be mentioned in the log message of the next version
of this patch.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S
  2013-06-07 23:03 ` [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S David Daney
@ 2013-06-14 13:09   ` Ralf Baechle
  2013-06-14 13:21     ` Sergei Shtylyov
  0 siblings, 1 reply; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:09 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:07PM -0700, David Daney wrote:

> diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
> index dca2aa6..e86fa2a 100644
> --- a/arch/mips/kvm/kvm_locore.S
> +++ b/arch/mips/kvm/kvm_locore.S
> @@ -310,7 +310,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
>      LONG_S  t0, VCPU_R26(k1)
>  
>      /* Get GUEST k1 and save it in VCPU */
> -    la      t1, ~0x2ff
> +	PTR_LI	t1, ~0x2ff
>      mfc0    t0, CP0_EBASE
>      and     t0, t0, t1
>      LONG_L  t0, 0x3000(t0)
> @@ -384,14 +384,14 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
>      mtc0        k0, CP0_DDATA_LO
>  
>      /* Restore RDHWR access */
> -    la      k0, 0x2000000F
> +	PTR_LI	k0, 0x2000000F
>      mtc0    k0,  CP0_HWRENA
>  
>      /* Jump to handler */
>  FEXPORT(__kvm_mips_jump_to_handler)
>      /* XXXKYMA: not sure if this is safe, how large is the stack?? */
>      /* Now jump to the kvm_mips_handle_exit() to see if we can deal with this in the kernel */
> -    la          t9,kvm_mips_handle_exit
> +	PTR_LA	t9, kvm_mips_handle_exit
>      jalr.hb     t9
>      addiu       sp,sp, -CALLFRAME_SIZ           /* BD Slot */
>  
> @@ -566,7 +566,7 @@ __kvm_mips_return_to_host:
>      mtlo    k0
>  
>      /* Restore RDHWR access */
> -    la      k0, 0x2000000F
> +	PTR_LI	k0, 0x2000000F
>      mtc0    k0,  CP0_HWRENA

Technically ok, there's only a formatting issue because you indent the
changed lines with tabs while the existing file uses tab characters.
I suggest you insert an extra cleanup patch into the series before this
one to properly re-indent the entire file?

So with that sorted:

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public.
  2013-06-14 11:41   ` Ralf Baechle
@ 2013-06-14 13:11     ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:11 UTC (permalink / raw)
  To: James Hogan; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 14, 2013 at 01:41:18PM +0200, Ralf Baechle wrote:
> Date:   Fri, 14 Jun 2013 13:41:18 +0200
> From: Ralf Baechle <ralf@linux-mips.org>
> To: James Hogan <james.hogan@imgtec.com>
> Cc: linux-mips@linux-mips.org, kvm@vger.kernel.org, Sanjay Lal
>  <sanjayl@kymasys.com>, linux-kernel@vger.kernel.org, David Daney
>  <ddaney@caviumnetworks.com>
> Subject: Re: [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and
>  make it public.
> Content-Type: text/plain; charset=us-ascii
> 
> On Fri, Jun 07, 2013 at 04:03:05PM -0700, David Daney wrote:
> > Date:   Fri,  7 Jun 2013 16:03:05 -0700
> > From: David Daney <ddaney.cavm@gmail.com>
> > To: linux-mips@linux-mips.org, ralf@linux-mips.org, kvm@vger.kernel.org,
> >  Sanjay Lal <sanjayl@kymasys.com>
> > Cc: linux-kernel@vger.kernel.org, David Daney <ddaney@caviumnetworks.com>
> > Subject: [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make
> >  it public.
> > 
> > From: David Daney <ddaney@caviumnetworks.com>
> 
> I'd just like to add a note about compatibility.  Code optimized for
> LL/SC-less CPUs has made use of the fact that exception handlers will
> clobber k0/k1 to a non-zero value.  On a MIPS II or better CPU a
> branch-likely instruction could be used to atomically test k0/k1 and,
> depending on the test, execute a store instruction like:
> 
> 	.set	noreorder
> 	beqzl	$k0, ok			# k0 still zero: no exception handler
> 					# ran since we cleared it
> 	sw	$reg, offset($reg)	# executes only in the taken
> 					# branch-likely delay slot
> 	/* if we get here, our SC emulation has failed  */
> ok:	...
> 
> In particular Sony had elected to do this for the R5900 (after I
> explained the concept to somebody and told them it'd be a _bad_ idea
> for compatibility reasons).  Bad ideas are infectious, so I'm sure
> others have used it, too.
> 
> I don't think this should stop your patch, nor should we add any
> kludges to support such cowboy-style hacks unless this turns out to be
> an actual problem.  But I wanted to mention and document the issue;
> maybe this should be mentioned in the log message of the next version
> of this patch.
> 
> Acked-by: Ralf Baechle <ralf@linux-mips.org>

Bleh.  I fatfingered mutt.  This of course was the reply intended for
"[PATCH 02/31] MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ".

As for 1/31:

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures.
  2013-06-07 23:03 ` [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures David Daney
@ 2013-06-14 13:14   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:14 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Casts are always a bit ugly, in particular the double casts - but
a necessary evil here.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 05/31] mips/kvm: Use generic cache flushing functions.
  2013-06-07 23:03 ` [PATCH 05/31] mips/kvm: Use generic cache flushing functions David Daney
@ 2013-06-14 13:17   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:17 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:09PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> We don't know if we have the r4k specific functions available, so use
> universally available __flush_cache_all() instead.  This takes longer
> as it flushes both i-cache and d-cache, but is available for all CPUs.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/kvm/kvm_mips_emul.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
> index af9a661..a2c6687 100644
> --- a/arch/mips/kvm/kvm_mips_emul.c
> +++ b/arch/mips/kvm/kvm_mips_emul.c
> @@ -916,8 +916,6 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
>  		       struct kvm_run *run, struct kvm_vcpu *vcpu)
>  {
>  	struct mips_coproc *cop0 = vcpu->arch.cop0;
> -	extern void (*r4k_blast_dcache) (void);
> -	extern void (*r4k_blast_icache) (void);
>  	enum emulation_result er = EMULATE_DONE;
>  	int32_t offset, cache, op_inst, op, base;
>  	struct kvm_vcpu_arch *arch = &vcpu->arch;
> @@ -954,9 +952,9 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
>  		     arch->gprs[base], offset);
>  
>  		if (cache == MIPS_CACHE_DCACHE)
> -			r4k_blast_dcache();

Only nukes the D-cache.

> +			__flush_cache_all();

This is also going to blow away the I-cache, so will be slower.

>  		else if (cache == MIPS_CACHE_ICACHE)
> -			r4k_blast_icache();

Only nukes the I-cache.

> +			__flush_cache_all();

This is also going to blow away the D-cache, so will be slower.
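
For the I-cache case a narrower primitive would avoid the D-cache
penalty.  A sketch only, not what the patch does (flush_icache_range()
is the generic kernel interface; the single 32-byte line used for the
range below is a hypothetical simplification):

	else if (cache == MIPS_CACHE_ICACHE)
		/* flush just the targeted line, assuming 32-byte lines */
		flush_icache_range(arch->gprs[base] + offset,
				   arch->gprs[base] + offset + 32);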

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc
  2013-06-07 23:03 ` [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc David Daney
@ 2013-06-14 13:18   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:18 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername
  2013-06-07 23:03 ` [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername David Daney
@ 2013-06-14 13:18   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:18 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S
  2013-06-07 23:03 ` [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S David Daney
@ 2013-06-14 13:21   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:21 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Ah, here you're taking care of my earlier complaint about the formatting
of kvm_locore.S.  I'd have done things in a different order to avoid the
inconsistent formatting - even if that was only a temporary state.  But
anyway,

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S
  2013-06-14 13:09   ` Ralf Baechle
@ 2013-06-14 13:21     ` Sergei Shtylyov
  0 siblings, 0 replies; 84+ messages in thread
From: Sergei Shtylyov @ 2013-06-14 13:21 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: David Daney, linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Hello.

On 14-06-2013 17:09, Ralf Baechle wrote:

>> diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
>> index dca2aa6..e86fa2a 100644
>> --- a/arch/mips/kvm/kvm_locore.S
>> +++ b/arch/mips/kvm/kvm_locore.S
>> @@ -310,7 +310,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
>>       LONG_S  t0, VCPU_R26(k1)
>>
>>       /* Get GUEST k1 and save it in VCPU */
>> -    la      t1, ~0x2ff
>> +	PTR_LI	t1, ~0x2ff
>>       mfc0    t0, CP0_EBASE
>>       and     t0, t0, t1
>>       LONG_L  t0, 0x3000(t0)
>> @@ -384,14 +384,14 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
>>       mtc0        k0, CP0_DDATA_LO
>>
>>       /* Restore RDHWR access */
>> -    la      k0, 0x2000000F
>> +	PTR_LI	k0, 0x2000000F
>>       mtc0    k0,  CP0_HWRENA
>>
>>       /* Jump to handler */
>>   FEXPORT(__kvm_mips_jump_to_handler)
>>       /* XXXKYMA: not sure if this is safe, how large is the stack?? */
>>       /* Now jump to the kvm_mips_handle_exit() to see if we can deal with this in the kernel */
>> -    la          t9,kvm_mips_handle_exit
>> +	PTR_LA	t9, kvm_mips_handle_exit
>>       jalr.hb     t9
>>       addiu       sp,sp, -CALLFRAME_SIZ           /* BD Slot */
>>
>> @@ -566,7 +566,7 @@ __kvm_mips_return_to_host:
>>       mtlo    k0
>>
>>       /* Restore RDHWR access */
>> -    la      k0, 0x2000000F
>> +	PTR_LI	k0, 0x2000000F
>>       mtc0    k0,  CP0_HWRENA

> Technically ok, there's only a formatting issue because you indent the
> changed lines with tabs while the existing file uses tab characters.

    I hope you meant the space characters? :-)

> I suggest you insert an extra cleanup patch into the series before
> this one to properly re-indent the entire file?

> So with that sorted:

> Acked-by: Ralf Baechle <ralf@linux-mips.org>

>    Ralf

WBR, Sergei



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation.
  2013-06-07 23:03 ` [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation David Daney
@ 2013-06-14 13:22   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 13:22 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers.
  2013-06-07 23:03 ` [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers David Daney
@ 2013-06-14 16:11   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 16:11 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:14PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> The current implementation does nothing with them, but future MIPSVZ
> work needs them.  Also add the asm-offsets accessors for the fields.

Just as a note, older MIPS FPUs only have fcr0 and fcr31.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code.
  2013-06-07 23:03 ` [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code David Daney
@ 2013-06-14 22:03   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 22:03 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al.
  2013-06-07 23:03 ` [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al David Daney
@ 2013-06-14 22:07   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 22:07 UTC (permalink / raw)
  To: David Daney
  Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney, Steven J. Hill

On Fri, Jun 07, 2013 at 04:03:16PM -0700, David Daney wrote:

> To: linux-mips@linux-mips.org, ralf@linux-mips.org, kvm@vger.kernel.org,
>  Sanjay Lal <sanjayl@kymasys.com>
> Cc: linux-kernel@vger.kernel.org, David Daney <david.daney@cavium.com>
> Subject: [PATCH 12/31] MIPS: Add instruction format information for WAIT,
>  MTC0, MFC0, et al.

Looks good.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

I wonder if somebody could throw in a microMIPS equivalent to this patch?

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers.
  2013-06-07 23:03 ` [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers David Daney
@ 2013-06-14 22:10   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 22:10 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode.
  2013-06-07 23:03 ` [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode David Daney
@ 2013-06-14 22:10   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-14 22:10 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode.
  2013-06-07 23:03 ` [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode David Daney
@ 2013-06-15 10:00   ` Ralf Baechle
  2013-10-11 12:51     ` James Hogan
  1 sibling, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-15 10:00 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:19PM -0700, David Daney wrote:

> Currently this is a little complex, here are the facts about how it works:

I'm not so much worried about the intrinsic complexity of the job your
code is trying to do as about stackframe.h getting ever more complex.
We're reaching the point where reimplementing it using uasm.c is looking
like a good thing.  But certainly not now.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions.
  2013-06-07 23:03 ` [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions David Daney
@ 2013-06-15 10:23   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-15 10:23 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:20PM -0700, David Daney wrote:

Acked-by: Ralf Baechle <ralf@linux-mips.org>

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP
  2013-06-07 23:03 ` [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP David Daney
@ 2013-06-15 10:25   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-15 10:25 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers.
  2013-06-07 23:03 ` [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers David Daney
  2013-06-14 11:12   ` Ralf Baechle
@ 2013-06-15 17:13   ` Paolo Bonzini
  1 sibling, 0 replies; 84+ messages in thread
From: Paolo Bonzini @ 2013-06-15 17:13 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, ralf, kvm, Sanjay Lal, linux-kernel, David Daney

Il 07/06/2013 19:03, David Daney ha scritto:
> From: David Daney <david.daney@cavium.com>
> 
> The kernel's struct pt_regs has many fields conditional on various
> Kconfig variables, we cannot be exporting this garbage to user-space.
> 
> Move the kernel's definition to asm/ptrace.h, and put a uapi only
> version in uapi/asm/ptrace.h gated by #ifndef __KERNEL__
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/include/asm/ptrace.h      | 32 ++++++++++++++++++++++++++++++++
>  arch/mips/include/uapi/asm/ptrace.h | 17 ++---------------
>  2 files changed, 34 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
> index a3186f2..5e6cd09 100644
> --- a/arch/mips/include/asm/ptrace.h
> +++ b/arch/mips/include/asm/ptrace.h
> @@ -16,6 +16,38 @@
>  #include <asm/isadep.h>
>  #include <uapi/asm/ptrace.h>
>  
> +/*
> + * This struct defines the way the registers are stored on the stack during a
> + * system call/exception. As usual the registers k0/k1 aren't being saved.
> + */
> +struct pt_regs {
> +#ifdef CONFIG_32BIT
> +	/* Pad bytes for argument save space on the stack. */
> +	unsigned long pad0[6];
> +#endif
> +
> +	/* Saved main processor registers. */
> +	unsigned long regs[32];
> +
> +	/* Saved special registers. */
> +	unsigned long cp0_status;
> +	unsigned long hi;
> +	unsigned long lo;
> +#ifdef CONFIG_CPU_HAS_SMARTMIPS
> +	unsigned long acx;
> +#endif
> +	unsigned long cp0_badvaddr;
> +	unsigned long cp0_cause;
> +	unsigned long cp0_epc;
> +#ifdef CONFIG_MIPS_MT_SMTC
> +	unsigned long cp0_tcstatus;
> +#endif /* CONFIG_MIPS_MT_SMTC */
> +#ifdef CONFIG_CPU_CAVIUM_OCTEON
> +	unsigned long long mpl[3];	  /* MTM{0,1,2} */
> +	unsigned long long mtp[3];	  /* MTP{0,1,2} */
> +#endif
> +} __aligned(8);
> +
>  struct task_struct;
>  
>  extern int ptrace_getregs(struct task_struct *child, __s64 __user *data);
> diff --git a/arch/mips/include/uapi/asm/ptrace.h b/arch/mips/include/uapi/asm/ptrace.h
> index 4d58d84..b26f7e3 100644
> --- a/arch/mips/include/uapi/asm/ptrace.h
> +++ b/arch/mips/include/uapi/asm/ptrace.h
> @@ -22,16 +22,12 @@
>  #define DSP_CONTROL	77
>  #define ACX		78
>  
> +#ifndef __KERNEL__
>  /*
>   * This struct defines the way the registers are stored on the stack during a
>   * system call/exception. As usual the registers k0/k1 aren't being saved.
>   */
>  struct pt_regs {
> -#ifdef CONFIG_32BIT
> -	/* Pad bytes for argument save space on the stack. */
> -	unsigned long pad0[6];
> -#endif
> -

Out of curiosity, how has this ever worked (and how will this work) on
32-bit arches? :)  I can see that maybe no one uses pt_regs beyond .lo,
but these are at the beginning.  Maybe for the uapi version you can use
the __mips__ preprocessor symbol?
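
For illustration, the uapi definition could instead key off a
compiler-provided target macro, something like the sketch below (GCC
predefines __mips64 when compiling for 64-bit MIPS; whether that is the
right test across the o32/n32/n64 ABIs is exactly the open question):

	struct pt_regs {
	#ifndef __mips64
		/* Pad bytes for argument save space on the stack. */
		unsigned long pad0[6];
	#endif
		/* Saved main processor registers. */
		unsigned long regs[32];

		/* Saved special registers. */
		unsigned long cp0_status;
		unsigned long hi;
		unsigned long lo;
		unsigned long cp0_badvaddr;
		unsigned long cp0_cause;
		unsigned long cp0_epc;
	} __attribute__ ((aligned (8)));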

Paolo

>  	/* Saved main processor registers. */
>  	unsigned long regs[32];
>  
> @@ -39,20 +35,11 @@ struct pt_regs {
>  	unsigned long cp0_status;
>  	unsigned long hi;
>  	unsigned long lo;
> -#ifdef CONFIG_CPU_HAS_SMARTMIPS
> -	unsigned long acx;
> -#endif
>  	unsigned long cp0_badvaddr;
>  	unsigned long cp0_cause;
>  	unsigned long cp0_epc;
> -#ifdef CONFIG_MIPS_MT_SMTC
> -	unsigned long cp0_tcstatus;
> -#endif /* CONFIG_MIPS_MT_SMTC */
> -#ifdef CONFIG_CPU_CAVIUM_OCTEON
> -	unsigned long long mpl[3];	  /* MTM{0,1,2} */
> -	unsigned long long mtp[3];	  /* MTP{0,1,2} */
> -#endif
>  } __attribute__ ((aligned (8)));
> +#endif /* __KERNEL__ */
>  
>  /* Arbitrarily choose the same ptrace numbers as used by the Sparc code. */
>  #define PTRACE_GETREGS		12
> 


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host.
  2013-06-07 23:03 ` [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host David Daney
@ 2013-06-16  8:49   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16  8:49 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 20/31] mips/kvm: Hook into TLB fault handlers.
  2013-06-07 23:03 ` [PATCH 20/31] mips/kvm: Hook into TLB fault handlers David Daney
  2013-06-07 23:34   ` Sergei Shtylyov
@ 2013-06-16  8:51   ` Ralf Baechle
  1 sibling, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16  8:51 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code.
  2013-06-07 23:03 ` [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code David Daney
@ 2013-06-16 11:22   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:22 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:25PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> We need to move it out of __init so we don't have section mismatch problems.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/include/asm/uasm.h | 2 +-
>  arch/mips/kernel/traps.c     | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h
> index 370d967..90b4f5e 100644
> --- a/arch/mips/include/asm/uasm.h
> +++ b/arch/mips/include/asm/uasm.h
> @@ -11,7 +11,7 @@
>  
>  #include <linux/types.h>
>  
> -#ifdef CONFIG_EXPORT_UASM
> +#if defined(CONFIG_EXPORT_UASM) || IS_ENABLED(CONFIG_KVM_MIPSVZ)
>  #include <linux/export.h>
>  #define __uasminit
>  #define __uasminitdata

I'd rather keep KVM bits out of uasm.h.  A select EXPORT_UASM in Kconfig
would have been cleaner - but read below.
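
A minimal sketch of that alternative (hypothetical stanza; the prompt
text is made up):

	config KVM_MIPSVZ
		bool "KVM support using the MIPS-VZ extensions"
		select EXPORT_UASM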

> diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
> index f008795..fca0a2f 100644
> --- a/arch/mips/kernel/traps.c
> +++ b/arch/mips/kernel/traps.c
> @@ -1457,7 +1457,7 @@ unsigned long ebase;
>  unsigned long exception_handlers[32];
>  unsigned long vi_handlers[64];
>  
> -void __init *set_except_vector(int n, void *addr)
> +void __uasminit *set_except_vector(int n, void *addr)

A __uasminit tag is a bit unobvious because set_except_vector is not part of
uasm.  I could understand __cpuinit - but of course that doesn't sort your
problem.  Maybe we should just drop the __init tag altogether?  Or if we
really want set_except_vector to become part of the uasm subsystem, then
probably its declaration should move from setup.h to uasm.h.
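
For readers without the tree at hand, the machinery being discussed is
roughly this (paraphrased sketch of uasm.h from that era):

	#ifdef CONFIG_EXPORT_UASM
	#define __uasminit			/* uasm stays resident */
	#define __uasminitdata
	#else
	#define __uasminit	__init		/* discarded after boot */
	#define __uasminitdata	__initdata
	#endif

So tagging set_except_vector() with __uasminit keeps it out of .init
exactly when something like KVM needs to call it at runtime.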

  Ralf


* Re: [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts.
  2013-06-07 23:03 ` [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts David Daney
@ 2013-06-16 11:26   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:26 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler.
  2013-06-07 23:03 ` [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler David Daney
@ 2013-06-16 11:28   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:28 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:27PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> The MIPS VZ KVM code needs this to be able to manage the FPU.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>

Looks good, Acked-by: Ralf Baechle <ralf@linux-mips.org>.

However I get cold shivers at the thought of SMTC FPU management with VZ;
it sounds like a source of new entertainment ...  But thinking about this
is something for another rainy day, not now.

  Ralf


* Re: [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts.
  2013-06-07 23:03 ` [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts David Daney
@ 2013-06-16 11:29   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:29 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ.
  2013-06-07 23:03 ` [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ David Daney
@ 2013-06-16 11:31   ` Ralf Baechle
  2013-08-06 17:23     ` David Daney
  0 siblings, 1 reply; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:31 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Patch looks ok but why not combine this patch with the previous one?

  Ralf


* Re: [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ.
  2013-06-07 23:03 ` [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ David Daney
@ 2013-06-16 11:33   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:33 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

The Trademark guys (and readability in general) should probably be happier
if MIPSTE was spelled as MIPS_TE and, for that matter, MIPSVZ as MIPS_VZ?

Other than that,

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 31/31] mips/kvm: Allow for upto 8 KVM vcpus per vm.
  2013-06-07 23:03 ` [PATCH 31/31] mips/kvm: Allow for upto 8 KVM vcpus per vm David Daney
@ 2013-06-16 11:37   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:37 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE
  2013-06-07 23:03 ` [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE David Daney
@ 2013-06-16 11:42   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:42 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:31PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> Only the trap-and-emulate KVM code needs a special TLB flusher.  All
> other configurations should use the regular version.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/include/asm/mmu_context.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
> index 5609a32..04d0b74 100644
> --- a/arch/mips/include/asm/mmu_context.h
> +++ b/arch/mips/include/asm/mmu_context.h
> @@ -117,7 +117,7 @@ get_new_asid(unsigned long cpu)
>  	if (! ((asid += ASID_INC) & ASID_MASK) ) {
>  		if (cpu_has_vtag_icache)
>  			flush_icache_all();
> -#ifdef CONFIG_VIRTUALIZATION
> +#if IS_ENABLED(CONFIG_KVM_MIPSTE)
>  		kvm_local_flush_tlb_all();      /* start new asid cycle */
>  #else
>  		local_flush_tlb_all();	/* start new asid cycle */

Sanjay,

it would seem this is actually a bug if KVM is built as a module and should
be fixed for 3.10?
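
Spelled out, this is why IS_ENABLED() is the right test here (a sketch
of how tristate symbols reach the preprocessor; illustrative, not
literal kernel source):

	/*
	 * CONFIG_KVM_MIPSTE=y  ->  CONFIG_KVM_MIPSTE is defined
	 * CONFIG_KVM_MIPSTE=m  ->  CONFIG_KVM_MIPSTE_MODULE is defined
	 *
	 * IS_ENABLED() is true in both cases, so the special flusher is
	 * used whenever trap-and-emulate KVM is configured, built-in or
	 * modular, while a plain #ifdef would miss the =m case.
	 */
	#if IS_ENABLED(CONFIG_KVM_MIPSTE)
		kvm_local_flush_tlb_all();	/* start new asid cycle */
	#else
		local_flush_tlb_all();		/* start new asid cycle */
	#endif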

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile
  2013-06-07 23:03 ` [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile David Daney
@ 2013-06-16 11:44   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:44 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 29/31] mips/kvm: Add MIPSVZ support.
  2013-06-07 23:03 ` [PATCH 29/31] mips/kvm: Add MIPSVZ support David Daney
@ 2013-06-16 11:47   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:47 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

Acked-by: Ralf Baechle <ralf@linux-mips.org>

  Ralf


* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-09 23:23     ` David Daney
                         ` (2 preceding siblings ...)
  2013-06-10 16:37       ` Sanjay Lal
@ 2013-06-16 11:59       ` Ralf Baechle
  3 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 11:59 UTC (permalink / raw)
  To: David Daney
  Cc: Gleb Natapov, David Daney, David Daney, kvm, linux-mips,
	Sanjay Lal, linux-kernel, David Daney

On Sun, Jun 09, 2013 at 04:23:51PM -0700, David Daney wrote:

> Come to think of it, emulating SGI hardware might be an interesting
> case.  There may be old IRIX systems and applications that could be
> running low on real hardware.  Some of those systems take up a whole
> room and draw a lot of power.  They might run faster and at much
> lower power consumption on a modern 48-way SMP SoC-based system.

Many SGI MIPS systems have RTCs powered by built-in batteries with a
nominal lifetime of ten years and for which no more replacements are
available.  This is beginning to limit usable SGI MIPS systems to those
who know how to solve these issues with a Dremel and a soldering iron.

That said, SGI platforms are all more or less weird custom architectures
so the platform emulation - let alone the firmware blobs - will be a
chunk of work.

  Ralf


* Re: [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE
  2013-06-07 23:03 ` [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE David Daney
@ 2013-06-16 12:03   ` Ralf Baechle
  0 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-16 12:03 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On Fri, Jun 07, 2013 at 04:03:32PM -0700, David Daney wrote:

> From: David Daney <david.daney@cavium.com>
> 
> The forthcoming MIPSVZ code doesn't currently use this, so it must
> only be enabled for KVM_MIPSTE.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/include/asm/kvm_host.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 505b804..9f209e1 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -25,7 +25,9 @@
>  /* memory slots that does not exposed to userspace */
>  #define KVM_PRIVATE_MEM_SLOTS 	0
>  
> +#ifdef CONFIG_KVM_MIPSTE
>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
> +#endif

What if KVM_MIPSTE and KVM_MIPSVZ both get enabled?
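
If the two are meant to be mutually exclusive, a Kconfig choice block
would be one way to encode that (hypothetical sketch, not from the
series):

	choice
		prompt "Virtualization mode"
		depends on KVM

	config KVM_MIPSTE
		bool "Trap and emulate"

	config KVM_MIPSVZ
		bool "Hardware virtualization via the MIPS-VZ extensions"

	endchoice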

  Ralf


* Re: [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions.
  2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
                   ` (33 preceding siblings ...)
  2013-06-14 11:12 ` Ralf Baechle
@ 2013-06-19  9:06 ` Ralf Baechle
  34 siblings, 0 replies; 84+ messages in thread
From: Ralf Baechle @ 2013-06-19  9:06 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

FYI, since you intend to resubmit anyway, I just dropped the entire series
from patchwork.

  Ralf


* Re: [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ.
  2013-06-16 11:31   ` Ralf Baechle
@ 2013-08-06 17:23     ` David Daney
  0 siblings, 0 replies; 84+ messages in thread
From: David Daney @ 2013-08-06 17:23 UTC (permalink / raw)
  To: Ralf Baechle; +Cc: linux-mips, kvm, Sanjay Lal, linux-kernel, David Daney

On 06/16/2013 04:31 AM, Ralf Baechle wrote:
> Patch looks ok but why not combine this patch with the previous one?
>

Because even though they both touch asm-offsets.c, they are offsets for 
unrelated structures.  I could try distributing these changes across 
several other patches, but getting the patch dependencies/ordering 
correct can be tricky.

David Daney


>    Ralf
>



* Re: [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode.
@ 2013-10-11 12:51     ` James Hogan
  0 siblings, 0 replies; 84+ messages in thread
From: James Hogan @ 2013-10-11 12:51 UTC (permalink / raw)
  To: David Daney; +Cc: linux-mips, ralf, kvm, Sanjay Lal, linux-kernel, David Daney


Hi David,

I know it's been a while since you posted this patchset, but thought you
might appreciate the feedback anyway.

Some of my comments/suggestions relate to portability with MIPS32. I
don't object if you respond to those by just adding "depends on 64BIT"
so that I & others can fix it up in subsequent patches.

On 08/06/13 00:03, David Daney wrote:
> From: David Daney <david.daney@cavium.com>
> 
> Currently this is a little complex, here are the facts about how it works:
> 
> o When running in Guest mode we set the high bit of CP0_XCONTEXT.  If
>   this bit is clear, we don't do anything special on an exception.
> 
> o If we are in guest mode, upon an exception we:
> 
>   1) load the stack pointer from the mips_kvm_rootsp array instead of
>      kernelsp.
> 
>   2) Clear GuestCtl[GM] and high bit of CP0_XCONTEXT.
> 
>   3) Restore host ASID and PGD pointer.
> 
> o Upon restarting from an exception we test the task TIF_GUESTMODE
>   flag; if it is clear, nothing special is done.
> 
> o If Guest mode is active for the thread we:
> 
>   1) Compare the stack pointer to mips_kvm_rootsp, if it doesn't match
>      we are not reentering guest mode, so no more special processing
>      is done.
> 
>   2) If reentering guest mode:
> 
>   2a) Set high bit of CP0_XCONTEXT and GuestCtl[GM].
> 
>   2b) Set Guest mode ASID and PGD pointer.
> 
> This allows a single set of exception handlers to be used for both
> host and guest mode operation.
> 
> Signed-off-by: David Daney <david.daney@cavium.com>
> ---
>  arch/mips/include/asm/stackframe.h | 135 ++++++++++++++++++++++++++++++++++++-
>  1 file changed, 132 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
> index 20627b2..bf2ec48 100644
> --- a/arch/mips/include/asm/stackframe.h
> +++ b/arch/mips/include/asm/stackframe.h
> @@ -17,6 +17,7 @@
>  #include <asm/asmmacro.h>
>  #include <asm/mipsregs.h>
>  #include <asm/asm-offsets.h>
> +#include <asm/thread_info.h>
>  
>  /*
>   * For SMTC kernel, global IE should be left set, and interrupts
> @@ -98,7 +99,9 @@
>  #define CPU_ID_REG CP0_CONTEXT
>  #define CPU_ID_MFC0 MFC0
>  #endif
> -		.macro	get_saved_sp	/* SMP variation */
> +#define CPU_ID_MASK ((1 << 13) - 1)

(I was going to say this could be made more portable by using the lowest
bit of PTEBASE (i.e. bit PTEBASE_SHIFT) for the guest mode state instead
of bit 63, and setting CPU_ID_MASK to -2 to mask off the lowest instead
of highest bit, but now I see you test it with bgez... In that case I
suppose it makes sense to use bit 31 for 32BIT, which still leaves 6
bits for the processor id - potentially expandable back to 7 by shifting
the processor id down a couple of bits and utilising the masking that
you've added).

> +
> +		.macro	get_saved_sp_for_save_some	/* SMP variation */
>  		CPU_ID_MFC0	k0, CPU_ID_REG

I suspect this shouldn't be here since both users of the macro already
seem to ensure it's done.

>  #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
>  		lui	k1, %hi(kernelsp)
> @@ -110,15 +113,49 @@
>  		dsll	k1, 16
>  #endif
>  		LONG_SRL	k0, PTEBASE_SHIFT
> +#ifdef CONFIG_KVM_MIPSVZ
> +		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */
> +#endif
>  		LONG_ADDU	k1, k0
>  		LONG_L	k1, %lo(kernelsp)(k1)
>  		.endm
>  
> +		.macro get_saved_sp
> +		CPU_ID_MFC0	k0, CPU_ID_REG
> +		get_saved_sp_for_save_some
> +		.endm
> +
> +		.macro	get_mips_kvm_rootsp	/* SMP variation */
> +#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
> +		lui	k1, %hi(mips_kvm_rootsp)
> +#else
> +		lui	k1, %highest(mips_kvm_rootsp)
> +		daddiu	k1, %higher(mips_kvm_rootsp)
> +		dsll	k1, 16
> +		daddiu	k1, %hi(mips_kvm_rootsp)
> +		dsll	k1, 16
> +#endif
> +		LONG_SRL	k0, PTEBASE_SHIFT
> +		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */
> +		LONG_ADDU	k1, k0
> +		LONG_L	k1, %lo(mips_kvm_rootsp)(k1)
> +		.endm
> +
>  		.macro	set_saved_sp stackp temp temp2
>  		CPU_ID_MFC0	\temp, CPU_ID_REG
>  		LONG_SRL	\temp, PTEBASE_SHIFT
> +#ifdef CONFIG_KVM_MIPSVZ
> +		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */
> +#endif
>  		LONG_S	\stackp, kernelsp(\temp)
>  		.endm
> +
> +		.macro	set_mips_kvm_rootsp stackp temp
> +		CPU_ID_MFC0	\temp, CPU_ID_REG
> +		LONG_SRL	\temp, PTEBASE_SHIFT
> +		andi	k0, CPU_ID_MASK /* high bits indicate guest mode. */

Should that be s/k0/\temp/?
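
That is, presumably the macro meant to mask the register it just
loaded, along these lines (sketch of the implied fix):

		.macro	set_mips_kvm_rootsp stackp temp
		CPU_ID_MFC0	\temp, CPU_ID_REG
		LONG_SRL	\temp, PTEBASE_SHIFT
		andi	\temp, CPU_ID_MASK /* high bits indicate guest mode. */
		LONG_S	\stackp, mips_kvm_rootsp(\temp)
		.endm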

> +		LONG_S	\stackp, mips_kvm_rootsp(\temp)
> +		.endm
>  #else
>  		.macro	get_saved_sp	/* Uniprocessor variation */
>  #ifdef CONFIG_CPU_JUMP_WORKAROUNDS
> @@ -152,9 +189,27 @@
>  		LONG_L	k1, %lo(kernelsp)(k1)
>  		.endm
>  
> +		.macro	get_mips_kvm_rootsp	/* Uniprocessor variation */
> +#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
> +		lui	k1, %hi(mips_kvm_rootsp)
> +#else
> +		lui	k1, %highest(mips_kvm_rootsp)
> +		daddiu	k1, %higher(mips_kvm_rootsp)
> +		dsll	k1, k1, 16
> +		daddiu	k1, %hi(mips_kvm_rootsp)
> +		dsll	k1, k1, 16
> +#endif
> +		LONG_L	k1, %lo(mips_kvm_rootsp)(k1)
> +		.endm
> +
> +
>  		.macro	set_saved_sp stackp temp temp2
>  		LONG_S	\stackp, kernelsp
>  		.endm
> +
> +		.macro	set_mips_kvm_rootsp stackp temp
> +		LONG_S	\stackp, mips_kvm_rootsp
> +		.endm
>  #endif
>  
>  		.macro	SAVE_SOME
> @@ -164,11 +219,21 @@
>  		mfc0	k0, CP0_STATUS
>  		sll	k0, 3		/* extract cu0 bit */
>  		.set	noreorder
> +#ifdef CONFIG_KVM_MIPSVZ
> +		bgez	k0, 7f

brief comments around here would make it easier to follow, e.g.
/* from kernel */
...

> +		 CPU_ID_MFC0	k0, CPU_ID_REG
> +		bgez	k0, 8f

/* from userland */

> +		 move	k1, sp

/* from guest */
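
Following the control flow (label 7 is the "called from user mode, new
stack" path below), the annotated chain might read something like this
(sketch):

	#ifdef CONFIG_KVM_MIPSVZ
			bgez	k0, 7f		/* CU0 clear: from userland */
			 CPU_ID_MFC0	k0, CPU_ID_REG
			bgez	k0, 8f		/* guest bit clear: from kernel */
			 move	k1, sp
			get_mips_kvm_rootsp	/* guest bit set: from guest */
			b	8f
			 nop
	#endif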

> +		get_mips_kvm_rootsp
> +		b	8f
> +		 nop
> +#else
>  		bltz	k0, 8f
>  		 move	k1, sp
> +#endif
>  		.set	reorder
>  		/* Called from user mode, new stack. */
> -		get_saved_sp
> +7:		get_saved_sp_for_save_some

you don't appear to have defined a !CONFIG_SMP version of
get_saved_sp_for_save_some?
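
One way to plug that hole (sketch): on UP there is no per-CPU indexing,
so the new name can simply defer to the existing macro:

		.macro	get_saved_sp_for_save_some	/* UP variation */
		get_saved_sp
		.endm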

>  #ifndef CONFIG_CPU_DADDI_WORKAROUNDS
>  8:		move	k0, sp
>  		PTR_SUBU sp, k1, PT_SIZE
> @@ -227,6 +292,35 @@
>  		LONG_S	$31, PT_R31(sp)
>  		ori	$28, sp, _THREAD_MASK
>  		xori	$28, _THREAD_MASK
> +#ifdef CONFIG_KVM_MIPSVZ
> +		CPU_ID_MFC0	k0, CPU_ID_REG
> +		.set	noreorder
> +		bgez	k0, 8f
> +		/* Must clear GuestCtl0[GM] */
> +		 dins	k0, zero, 63, 1

Looks like we need a LONG_INS (and friends) defined in asm.h for this
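
Such a helper would follow the existing LONG_* pattern in asm.h
(sketch, assuming the usual _MIPS_SZLONG selection):

	#if (_MIPS_SZLONG == 32)
	#define LONG_INS	ins
	#endif
	#if (_MIPS_SZLONG == 64)
	#define LONG_INS	dins
	#endif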

> +		.set	reorder
> +		dmtc0	k0, CPU_ID_REG

Let's define a CPU_ID_MTC0 similar to CPU_ID_MFC0.
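
e.g. mirroring the existing line in the #ifdef ladder quoted above
(sketch):

	#define CPU_ID_REG	CP0_CONTEXT
	#define CPU_ID_MFC0	MFC0
	#define CPU_ID_MTC0	MTC0	/* new, symmetric with CPU_ID_MFC0 */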

> +		mfc0	k0, CP0_GUESTCTL0
> +		ins	k0, zero, MIPS_GUESTCTL0B_GM, 1
> +		mtc0	k0, CP0_GUESTCTL0
> +		LONG_L	v0, TI_TASK($28)
> +		lw	v1, THREAD_MM_ASID(v0)
> +		dmtc0	v1, CP0_ENTRYHI

MTC0.

> +		LONG_L	v1, TASK_MM(v0)
> +		.set	noreorder
> +		jal	tlbmiss_handler_setup_pgd_array
> +		 LONG_L	a0, MM_PGD(v1)
> +		.set	reorder
> +		/*
> +		 * With KVM_MIPSVZ, we must not clobber k0/k1
> +		 * they were saved before they were used
> +		 */
> +8:
> +		MFC0	k0, CP0_KSCRATCH1
> +		MFC0	v1, CP0_KSCRATCH2
> +		LONG_S	k0, PT_R26(sp)
> +		LONG_S	v1, PT_R27(sp)
> +#endif
>  #ifdef CONFIG_CPU_CAVIUM_OCTEON
>  		.set	mips64
>  		pref	0, 0($28)	/* Prefetch the current pointer */
> @@ -439,10 +533,45 @@
>  		.set	mips0
>  #endif /* CONFIG_MIPS_MT_SMTC */
>  		LONG_L	v1, PT_EPC(sp)
> +		LONG_L	$25, PT_R25(sp)

Is this an optimisation? It's unclear why it's been moved.

>  		MTC0	v1, CP0_EPC
> +#ifdef CONFIG_KVM_MIPSVZ
> +	/*
> +	 * Only if TIF_GUESTMODE && sp is the saved KVM sp, return to
> +	 * guest mode.
> +	 */
> +		LONG_L	v0, TI_FLAGS($28)
> +		li	k1, _TIF_GUESTMODE
> +		and	v0, v0, k1
> +		beqz	v0, 8f
> +		CPU_ID_MFC0	k0, CPU_ID_REG
> +		get_mips_kvm_rootsp
> +		PTR_SUBU k1, k1, PT_SIZE
> +		bne	k1, sp, 8f
> +	/* Set the high order bit of CPU_ID_REG to indicate guest mode. */
> +		dli	v0, 1

I think li will do here.

> +		dmfc0	v1, CPU_ID_REG

CPU_ID_MFC0

> +		dins	v1, v0, 63, 1

LONG_INS

> +		dmtc0	v1, CPU_ID_REG

CPU_ID_MTC0

> +		/* Must set GuestCtl0[GM] */
> +		mfc0	v1, CP0_GUESTCTL0
> +		ins	v1, v0, MIPS_GUESTCTL0B_GM, 1
> +		mtc0	v1, CP0_GUESTCTL0
> +
> +		LONG_L	v0, TI_TASK($28)
> +		lw	v1, THREAD_GUEST_ASID(v0)
> +		dmtc0	v1, CP0_ENTRYHI

MTC0

> +		LONG_L	v1, THREAD_VCPU(v0)
> +		LONG_L	v1, KVM_VCPU_KVM(v1)
> +		LONG_L	v1, KVM_ARCH_IMPL(v1)
> +		.set	noreorder
> +		jal	tlbmiss_handler_setup_pgd_array
> +		 LONG_L	a0, KVM_MIPS_VZ_PGD(v1)
> +		.set	reorder
> +8:
> +#endif
>  		LONG_L	$31, PT_R31(sp)
>  		LONG_L	$28, PT_R28(sp)
> -		LONG_L	$25, PT_R25(sp)
>  #ifdef CONFIG_64BIT
>  		LONG_L	$8, PT_R8(sp)
>  		LONG_L	$9, PT_R9(sp)
> 

Cheers
James


Thread overview: 84+ messages
2013-06-07 23:03 [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
2013-06-07 23:03 ` [PATCH 01/31] MIPS: Move allocate_kscratch to cpu-probe.c and make it public David Daney
2013-06-14 11:41   ` Ralf Baechle
2013-06-14 13:11     ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 02/31] MIPS: Save and restore K0/K1 when CONFIG_KVM_MIPSVZ David Daney
2013-06-07 23:03 ` [PATCH 03/31] mips/kvm: Fix 32-bitisms in kvm_locore.S David Daney
2013-06-14 13:09   ` Ralf Baechle
2013-06-14 13:21     ` Sergei Shtylyov
2013-06-07 23:03 ` [PATCH 04/31] mips/kvm: Add casts to avoid pointer width mismatch build failures David Daney
2013-06-14 13:14   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 05/31] mips/kvm: Use generic cache flushing functions David Daney
2013-06-14 13:17   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 06/31] mips/kvm: Rename kvm_vcpu_arch.pc to kvm_vcpu_arch.epc David Daney
2013-06-14 13:18   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 07/31] mips/kvm: Rename VCPU_registername to KVM_VCPU_ARCH_registername David Daney
2013-06-14 13:18   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 08/31] mips/kvm: Fix code formatting in arch/mips/kvm/kvm_locore.S David Daney
2013-06-14 13:21   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 09/31] mips/kvm: Factor trap-and-emulate support into a pluggable implementation David Daney
2013-06-14 13:22   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 10/31] mips/kvm: Implement ioctls to get and set FPU registers David Daney
2013-06-14 16:11   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 11/31] MIPS: Rearrange branch.c so it can be used by kvm code David Daney
2013-06-14 22:03   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 12/31] MIPS: Add instruction format information for WAIT, MTC0, MFC0, et al David Daney
2013-06-14 22:07   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 13/31] mips/kvm: Add accessors for MIPS VZ registers David Daney
2013-06-14 22:10   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 14/31] mips/kvm: Add thread_info flag to indicate operation in MIPS VZ Guest Mode David Daney
2013-06-14 22:10   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 15/31] mips/kvm: Exception handling to leave and reenter guest mode David Daney
2013-06-15 10:00   ` Ralf Baechle
2013-10-11 12:51   ` James Hogan
2013-10-11 12:51     ` James Hogan
2013-06-07 23:03 ` [PATCH 16/31] mips/kvm: Add exception handler for MIPSVZ Guest exceptions David Daney
2013-06-15 10:23   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 17/31] MIPS: Quit exposing Kconfig symbols in uapi headers David Daney
2013-06-14 11:12   ` Ralf Baechle
2013-06-15 17:13   ` Paolo Bonzini
2013-06-07 23:03 ` [PATCH 18/31] mips/kvm: Add pt_regs slots for BadInstr and BadInstrP David Daney
2013-06-15 10:25   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 19/31] mips/kvm: Add host definitions for MIPS VZ based host David Daney
2013-06-16  8:49   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 20/31] mips/kvm: Hook into TLB fault handlers David Daney
2013-06-07 23:34   ` Sergei Shtylyov
2013-06-08  0:15     ` David Daney
2013-06-08  0:15       ` David Daney
2013-06-16  8:51   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 21/31] mips/kvm: Allow set_except_vector() to be used from MIPSVZ code David Daney
2013-06-16 11:22   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 22/31] mips/kvm: Split get_new_mmu_context into two parts David Daney
2013-06-16 11:26   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 23/31] mips/kvm: Hook into CP unusable exception handler David Daney
2013-06-16 11:28   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 24/31] mips/kvm: Add thread_struct fields used by MIPSVZ hosts David Daney
2013-06-16 11:29   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 25/31] mips/kvm: Add some asm-offsets constants used by MIPSVZ David Daney
2013-06-16 11:31   ` Ralf Baechle
2013-08-06 17:23     ` David Daney
2013-06-07 23:03 ` [PATCH 26/31] mips/kvm: Split up Kconfig and Makefile definitions in preperation for MIPSVZ David Daney
2013-06-16 11:33   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 27/31] mips/kvm: Gate the use of kvm_local_flush_tlb_all() by KVM_MIPSTE David Daney
2013-06-16 11:42   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 28/31] mips/kvm: Only use KVM_COALESCED_MMIO_PAGE_OFFSET with KVM_MIPSTE David Daney
2013-06-16 12:03   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 29/31] mips/kvm: Add MIPSVZ support David Daney
2013-06-16 11:47   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 30/31] mips/kvm: Enable MIPSVZ in Kconfig/Makefile David Daney
2013-06-16 11:44   ` Ralf Baechle
2013-06-07 23:03 ` [PATCH 31/31] mips/kvm: Allow for upto 8 KVM vcpus per vm David Daney
2013-06-16 11:37   ` Ralf Baechle
2013-06-07 23:15 ` [PATCH 00/31] KVM/MIPS: Implement hardware virtualization via the MIPS-VZ extensions David Daney
2013-06-07 23:15   ` David Daney
2013-06-09  7:31   ` Gleb Natapov
2013-06-09 23:23     ` David Daney
2013-06-09 23:40       ` Maciej W. Rozycki
2013-06-10 11:16         ` Ralf Baechle
2013-06-10  6:18       ` Gleb Natapov
2013-06-10 16:37       ` Sanjay Lal
2013-06-16 11:59       ` Ralf Baechle
2013-06-10 16:43 ` Sanjay Lal
2013-06-10 17:14   ` David Daney
2013-06-14 11:12 ` Ralf Baechle
2013-06-19  9:06 ` Ralf Baechle
