* [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

The following patch set adds support for the recently announced virtualization
extensions for the MIPS32 architecture and allows running unmodified kernels in
Guest Mode.

For more info, please refer to:
	MIPS Document #: MD00846
	Volume IV-i: Virtualization Module of the MIPS32 Architecture

which can be accessed at: http://www.mips.com/auth/MD00846-2B-VZMIPS32-AFP-01.03.pdf

The patch set is against Linux-3.10-rc1.

KVM/MIPS now supports 2 modes of operation:

(1) VZ mode: Runs unmodified kernels in Guest Mode.  The processor provides an
    almost complete COP0 context in Guest Mode, which greatly reduces VM exits.

(2) Trap and Emulate: Runs minimally modified guest kernels in user mode (UM) and
    uses binary patching to minimize the number of traps and improve performance.
    This mode is used on processors that do not support the VZ-ASE.  (A sketch of
    how userspace might probe for VZ support follows.)
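
For illustration, a minimal userspace sketch of probing for the VZ capability
before choosing a mode.  KVM_CAP_MIPS_VZ_ASE is a placeholder name standing in
for whatever constant patch 10 actually adds to include/uapi/linux/kvm.h;
KVM_CHECK_EXTENSION is the standard KVM probing ioctl.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder name and value: the real constant is the one patch 10 defines. */
#ifndef KVM_CAP_MIPS_VZ_ASE
#define KVM_CAP_MIPS_VZ_ASE 999
#endif

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);

	if (kvm < 0)
		return 1;
	/* KVM_CHECK_EXTENSION returns > 0 when the capability is present */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MIPS_VZ_ASE) > 0)
		printf("VZ mode: unmodified guest kernels can run\n");
	else
		printf("no VZ-ASE: falling back to trap and emulate\n");
	return 0;
}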

--
Sanjay Lal (18):
  Revert "MIPS: microMIPS: Support dynamic ASID sizing."
  Revert "MIPS: Allow ASID size to be determined at boot time."
  KVM/MIPS32: Export min_low_pfn.
  KVM/MIPS32-VZ: MIPS VZ-ASE related register defines and helper
    macros.
  KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs
  KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions
    that trap to the Root context.
  KVM/MIPS32: VZ-ASE related CPU feature flags and options.
  KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap
    handlers.
  KVM/MIPS32-VZ: Add support for CONFIG_KVM_MIPS_VZ option
  KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root
    context
  KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons.
  KVM/MIPS32-VZ: Top level handler for Guest faults
  KVM/MIPS32-VZ: Guest exception batching support.
  KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and
    dump out useful info
  KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures.
  KVM/MIPS32: Revert to older method for accessing ASID parameters
  KVM/MIPS32-VZ: Dump out additional info about VZ features as part of
    /proc/cpuinfo

 arch/mips/include/asm/cpu-features.h |   36 ++
 arch/mips/include/asm/cpu-info.h     |   21 +
 arch/mips/include/asm/cpu.h          |    5 +
 arch/mips/include/asm/kvm_host.h     |  244 ++++++--
 arch/mips/include/asm/mipsvzregs.h   |  494 +++++++++++++++
 arch/mips/include/asm/mmu_context.h  |   95 ++-
 arch/mips/kernel/genex.S             |    2 +-
 arch/mips/kernel/mips_ksyms.c        |    6 +
 arch/mips/kernel/proc.c              |   11 +
 arch/mips/kernel/smtc.c              |   10 +-
 arch/mips/kernel/traps.c             |    6 +-
 arch/mips/kvm/Kconfig                |   14 +-
 arch/mips/kvm/Makefile               |   14 +-
 arch/mips/kvm/kvm_locore.S           | 1088 ++++++++++++++++++----------------
 arch/mips/kvm/kvm_mips.c             |   73 ++-
 arch/mips/kvm/kvm_mips_dyntrans.c    |   24 +-
 arch/mips/kvm/kvm_mips_emul.c        |  236 ++++----
 arch/mips/kvm/kvm_mips_int.h         |    5 +
 arch/mips/kvm/kvm_mips_stats.c       |   17 +-
 arch/mips/kvm/kvm_tlb.c              |  444 +++++++++++---
 arch/mips/kvm/kvm_trap_emul.c        |   68 ++-
 arch/mips/kvm/kvm_vz.c               |  786 ++++++++++++++++++++++++
 arch/mips/kvm/kvm_vz_locore.S        |   74 +++
 arch/mips/lib/dump_tlb.c             |    5 +-
 arch/mips/lib/r3k_dump_tlb.c         |    7 +-
 arch/mips/mm/tlb-r3k.c               |   20 +-
 arch/mips/mm/tlb-r4k.c               |    2 +-
 arch/mips/mm/tlb-r8k.c               |    2 +-
 arch/mips/mm/tlbex.c                 |   82 +--
 include/uapi/linux/kvm.h             |    1 +
 30 files changed, 2906 insertions(+), 986 deletions(-)
 create mode 100644 arch/mips/include/asm/mipsvzregs.h
 create mode 100644 arch/mips/kvm/kvm_vz.c
 create mode 100644 arch/mips/kvm/kvm_vz_locore.S

-- 
1.7.11.3


* [PATCH 01/18] Revert "MIPS: microMIPS: Support dynamic ASID sizing."
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

This reverts commit f6b06d9361a008afb93b97fb3683a6e92d69d0f4.

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/mm/tlbex.c | 34 ++--------------------------------
 1 file changed, 2 insertions(+), 32 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 4d46d37..2ad41e9 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -309,32 +309,13 @@ static int check_for_high_segbits __cpuinitdata;
 static void __cpuinit insn_fixup(unsigned int **start, unsigned int **stop,
 					unsigned int i_const)
 {
-	unsigned int **p;
+	unsigned int **p, *ip;
 
 	for (p = start; p < stop; p++) {
-#ifndef CONFIG_CPU_MICROMIPS
-		unsigned int *ip;
-
 		ip = *p;
 		*ip = (*ip & 0xffff0000) | i_const;
-#else
-		unsigned short *ip;
-
-		ip = ((unsigned short *)((unsigned int)*p - 1));
-		if ((*ip & 0xf000) == 0x4000) {
-			*ip &= 0xfff1;
-			*ip |= (i_const << 1);
-		} else if ((*ip & 0xf000) == 0x6000) {
-			*ip &= 0xfff1;
-			*ip |= ((i_const >> 2) << 1);
-		} else {
-			ip++;
-			*ip = i_const;
-		}
-#endif
-		local_flush_icache_range((unsigned long)ip,
-					 (unsigned long)ip + sizeof(*ip));
 	}
+	local_flush_icache_range((unsigned long)*p, (unsigned long)((*p) + 1));
 }
 
 #define asid_insn_fixup(section, const)					\
@@ -354,14 +335,6 @@ static void __cpuinit setup_asid(unsigned int inc, unsigned int mask,
 	extern asmlinkage void handle_ri_rdhwr_vivt(void);
 	unsigned long *vivt_exc;
 
-#ifdef CONFIG_CPU_MICROMIPS
-	/*
-	 * Worst case optimised microMIPS addiu instructions support
-	 * only a 3-bit immediate value.
-	 */
-	if(inc > 7)
-		panic("Invalid ASID increment value!");
-#endif
 	asid_insn_fixup(__asid_inc, inc);
 	asid_insn_fixup(__asid_mask, mask);
 	asid_insn_fixup(__asid_version_mask, version_mask);
@@ -369,9 +342,6 @@ static void __cpuinit setup_asid(unsigned int inc, unsigned int mask,
 
 	/* Patch up the 'handle_ri_rdhwr_vivt' handler. */
 	vivt_exc = (unsigned long *) &handle_ri_rdhwr_vivt;
-#ifdef CONFIG_CPU_MICROMIPS
-	vivt_exc = (unsigned long *)((unsigned long) vivt_exc - 1);
-#endif
 	vivt_exc++;
 	*vivt_exc = (*vivt_exc & ~mask) | mask;
 
-- 
1.7.11.3


* [PATCH 02/18] Revert "MIPS: Allow ASID size to be determined at boot time."
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

This reverts commit d532f3d26716a39dfd4b88d687bd344fbe77e390.

Conflicts:
	arch/mips/mm/tlbex.c

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/include/asm/mmu_context.h | 95 ++++++++++++++-----------------------
 arch/mips/kernel/genex.S            |  2 +-
 arch/mips/kernel/smtc.c             | 10 ++--
 arch/mips/kernel/traps.c            |  6 +--
 arch/mips/lib/dump_tlb.c            |  5 +-
 arch/mips/lib/r3k_dump_tlb.c        |  7 ++-
 arch/mips/mm/tlb-r3k.c              | 20 ++++----
 arch/mips/mm/tlb-r4k.c              |  2 +-
 arch/mips/mm/tlb-r8k.c              |  2 +-
 arch/mips/mm/tlbex.c                | 52 +-------------------
 10 files changed, 62 insertions(+), 139 deletions(-)

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 1554721..8201160 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -67,68 +67,45 @@ extern unsigned long pgd_current[];
 	TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir)
 #endif
 #endif /* CONFIG_MIPS_PGD_C0_CONTEXT*/
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
-#define ASID_INC(asid)						\
-({								\
-	unsigned long __asid = asid;				\
-	__asm__("1:\taddiu\t%0,1\t\t\t\t# patched\n\t"		\
-	".section\t__asid_inc,\"a\"\n\t"			\
-	".word\t1b\n\t"						\
-	".previous"						\
-	:"=r" (__asid)						\
-	:"0" (__asid));						\
-	__asid;							\
-})
-#define ASID_MASK(asid)						\
-({								\
-	unsigned long __asid = asid;				\
-	__asm__("1:\tandi\t%0,%1,0xfc0\t\t\t# patched\n\t"	\
-	".section\t__asid_mask,\"a\"\n\t"			\
-	".word\t1b\n\t"						\
-	".previous"						\
-	:"=r" (__asid)						\
-	:"r" (__asid));						\
-	__asid;							\
-})
-#define ASID_VERSION_MASK					\
-({								\
-	unsigned long __asid;					\
-	__asm__("1:\taddiu\t%0,$0,0xff00\t\t\t\t# patched\n\t"	\
-	".section\t__asid_version_mask,\"a\"\n\t"		\
-	".word\t1b\n\t"						\
-	".previous"						\
-	:"=r" (__asid));					\
-	__asid;							\
-})
-#define ASID_FIRST_VERSION					\
-({								\
-	unsigned long __asid = asid;				\
-	__asm__("1:\tli\t%0,0x100\t\t\t\t# patched\n\t"		\
-	".section\t__asid_first_version,\"a\"\n\t"		\
-	".word\t1b\n\t"						\
-	".previous"						\
-	:"=r" (__asid));					\
-	__asid;							\
-})
-
-#define ASID_FIRST_VERSION_R3000	0x1000
-#define ASID_FIRST_VERSION_R4000	0x100
-#define ASID_FIRST_VERSION_R8000	0x1000
-#define ASID_FIRST_VERSION_RM9000	0x1000
+#define ASID_INC	0x40
+#define ASID_MASK	0xfc0
+
+#elif defined(CONFIG_CPU_R8000)
+
+#define ASID_INC	0x10
+#define ASID_MASK	0xff0
+
+#elif defined(CONFIG_MIPS_MT_SMTC)
+
+#define ASID_INC	0x1
+extern unsigned long smtc_asid_mask;
+#define ASID_MASK	(smtc_asid_mask)
+#define HW_ASID_MASK	0xff
+/* End SMTC/34K debug hack */
+#else /* FIXME: not correct for R6000 */
+
+#define ASID_INC	0x1
+#define ASID_MASK	0xff
 
-#ifdef CONFIG_MIPS_MT_SMTC
-#define SMTC_HW_ASID_MASK		0xff
-extern unsigned int smtc_asid_mask;
 #endif
 
 #define cpu_context(cpu, mm)	((mm)->context.asid[cpu])
-#define cpu_asid(cpu, mm)	ASID_MASK(cpu_context((cpu), (mm)))
+#define cpu_asid(cpu, mm)	(cpu_context((cpu), (mm)) & ASID_MASK)
 #define asid_cache(cpu)		(cpu_data[cpu].asid_cache)
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
 }
 
+/*
+ *  All unused by hardware upper bits will be considered
+ *  as a software asid extension.
+ */
+#define ASID_VERSION_MASK  ((unsigned long)~(ASID_MASK|(ASID_MASK-1)))
+#define ASID_FIRST_VERSION ((unsigned long)(~ASID_VERSION_MASK) + 1)
+
 #ifndef CONFIG_MIPS_MT_SMTC
 /* Normal, classic MIPS get_new_mmu_context */
 static inline void
@@ -137,7 +114,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	extern void kvm_local_flush_tlb_all(void);
 	unsigned long asid = asid_cache(cpu);
 
-	if (!ASID_MASK((asid = ASID_INC(asid)))) {
+	if (! ((asid += ASID_INC) & ASID_MASK) ) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
 #ifdef CONFIG_VIRTUALIZATION
@@ -200,7 +177,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	 * free up the ASID value for use and flush any old
 	 * instances of it from the TLB.
 	 */
-	oldasid = ASID_MASK(read_c0_entryhi());
+	oldasid = (read_c0_entryhi() & ASID_MASK);
 	if(smtc_live_asid[mytlb][oldasid]) {
 		smtc_live_asid[mytlb][oldasid] &= ~(0x1 << cpu);
 		if(smtc_live_asid[mytlb][oldasid] == 0)
@@ -211,7 +188,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	 * having ASID_MASK smaller than the hardware maximum,
 	 * make sure no "soft" bits become "hard"...
 	 */
-	write_c0_entryhi((read_c0_entryhi() & ~SMTC_HW_ASID_MASK) |
+	write_c0_entryhi((read_c0_entryhi() & ~HW_ASID_MASK) |
 			 cpu_asid(cpu, next));
 	ehb(); /* Make sure it propagates to TCStatus */
 	evpe(mtflags);
@@ -264,15 +241,15 @@ activate_mm(struct mm_struct *prev, struct mm_struct *next)
 #ifdef CONFIG_MIPS_MT_SMTC
 	/* See comments for similar code above */
 	mtflags = dvpe();
-	oldasid = ASID_MASK(read_c0_entryhi());
+	oldasid = read_c0_entryhi() & ASID_MASK;
 	if(smtc_live_asid[mytlb][oldasid]) {
 		smtc_live_asid[mytlb][oldasid] &= ~(0x1 << cpu);
 		if(smtc_live_asid[mytlb][oldasid] == 0)
 			 smtc_flush_tlb_asid(oldasid);
 	}
 	/* See comments for similar code above */
-	write_c0_entryhi((read_c0_entryhi() & ~SMTC_HW_ASID_MASK) |
-	                 cpu_asid(cpu, next));
+	write_c0_entryhi((read_c0_entryhi() & ~HW_ASID_MASK) |
+			 cpu_asid(cpu, next));
 	ehb(); /* Make sure it propagates to TCStatus */
 	evpe(mtflags);
 #else
@@ -309,14 +286,14 @@ drop_mmu_context(struct mm_struct *mm, unsigned cpu)
 #ifdef CONFIG_MIPS_MT_SMTC
 		/* See comments for similar code above */
 		prevvpe = dvpe();
-		oldasid = ASID_MASK(read_c0_entryhi());
+		oldasid = (read_c0_entryhi() & ASID_MASK);
 		if (smtc_live_asid[mytlb][oldasid]) {
 			smtc_live_asid[mytlb][oldasid] &= ~(0x1 << cpu);
 			if(smtc_live_asid[mytlb][oldasid] == 0)
 				smtc_flush_tlb_asid(oldasid);
 		}
 		/* See comments for similar code above */
-		write_c0_entryhi((read_c0_entryhi() & ~SMTC_HW_ASID_MASK)
+		write_c0_entryhi((read_c0_entryhi() & ~HW_ASID_MASK)
 				| cpu_asid(cpu, mm));
 		ehb(); /* Make sure it propagates to TCStatus */
 		evpe(prevvpe);
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index 5c2ba9f..9098829 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -493,7 +493,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	.set	noreorder
 	/* check if TLB contains a entry for EPC */
 	MFC0	k1, CP0_ENTRYHI
-	andi	k1, 0xff	/* ASID_MASK patched at run-time!! */
+	andi	k1, 0xff	/* ASID_MASK */
 	MFC0	k0, CP0_EPC
 	PTR_SRL k0, _PAGE_SHIFT + 1
 	PTR_SLL k0, _PAGE_SHIFT + 1
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index 31d22f3..7186222 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -111,7 +111,7 @@ static int vpe0limit;
 static int ipibuffers;
 static int nostlb;
 static int asidmask;
-unsigned int smtc_asid_mask = 0xff;
+unsigned long smtc_asid_mask = 0xff;
 
 static int __init vpe0tcs(char *str)
 {
@@ -1395,7 +1395,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	asid = asid_cache(cpu);
 
 	do {
-		if (!ASID_MASK(ASID_INC(asid))) {
+		if (!((asid += ASID_INC) & ASID_MASK) ) {
 			if (cpu_has_vtag_icache)
 				flush_icache_all();
 			/* Traverse all online CPUs (hack requires contiguous range) */
@@ -1414,7 +1414,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 						mips_ihb();
 					}
 					tcstat = read_tc_c0_tcstatus();
-					smtc_live_asid[tlb][ASID_MASK(tcstat)] |= (asiduse)(0x1 << i);
+					smtc_live_asid[tlb][(tcstat & ASID_MASK)] |= (asiduse)(0x1 << i);
 					if (!prevhalt)
 						write_tc_c0_tchalt(0);
 				}
@@ -1423,7 +1423,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 				asid = ASID_FIRST_VERSION;
 			local_flush_tlb_all();	/* start new asid cycle */
 		}
-	} while (smtc_live_asid[tlb][ASID_MASK(asid)]);
+	} while (smtc_live_asid[tlb][(asid & ASID_MASK)]);
 
 	/*
 	 * SMTC shares the TLB within VPEs and possibly across all VPEs.
@@ -1461,7 +1461,7 @@ void smtc_flush_tlb_asid(unsigned long asid)
 		tlb_read();
 		ehb();
 		ehi = read_c0_entryhi();
-		if (ASID_MASK(ehi) == asid) {
+		if ((ehi & ASID_MASK) == asid) {
 		    /*
 		     * Invalidate only entries with specified ASID,
 		     * makiing sure all entries differ.
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 77cff1f..cb14db3 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1656,7 +1656,6 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
 	unsigned int cpu = smp_processor_id();
 	unsigned int status_set = ST0_CU0;
 	unsigned int hwrena = cpu_hwrena_impl_bits;
-	unsigned long asid = 0;
 #ifdef CONFIG_MIPS_MT_SMTC
 	int secondaryTC = 0;
 	int bootTC = (cpu == 0);
@@ -1740,9 +1739,8 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
 	}
 #endif /* CONFIG_MIPS_MT_SMTC */
 
-	asid = ASID_FIRST_VERSION;
-	cpu_data[cpu].asid_cache = asid;
-	TLBMISS_HANDLER_SETUP();
+	if (!cpu_data[cpu].asid_cache)
+		cpu_data[cpu].asid_cache = ASID_FIRST_VERSION;
 
 	atomic_inc(&init_mm.mm_count);
 	current->active_mm = &init_mm;
diff --git a/arch/mips/lib/dump_tlb.c b/arch/mips/lib/dump_tlb.c
index 8a12d00..32b9f21 100644
--- a/arch/mips/lib/dump_tlb.c
+++ b/arch/mips/lib/dump_tlb.c
@@ -11,7 +11,6 @@
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/tlbdebug.h>
-#include <asm/mmu_context.h>
 
 static inline const char *msk2str(unsigned int mask)
 {
@@ -56,7 +55,7 @@ static void dump_tlb(int first, int last)
 	s_pagemask = read_c0_pagemask();
 	s_entryhi = read_c0_entryhi();
 	s_index = read_c0_index();
-	asid = ASID_MASK(s_entryhi);
+	asid = s_entryhi & 0xff;
 
 	for (i = first; i <= last; i++) {
 		write_c0_index(i);
@@ -86,7 +85,7 @@ static void dump_tlb(int first, int last)
 
 			printk("va=%0*lx asid=%02lx\n",
 			       width, (entryhi & ~0x1fffUL),
-			       ASID_MASK(entryhi));
+			       entryhi & 0xff);
 			printk("\t[pa=%0*llx c=%d d=%d v=%d g=%d] ",
 			       width,
 			       (entrylo0 << 6) & PAGE_MASK, c0,
diff --git a/arch/mips/lib/r3k_dump_tlb.c b/arch/mips/lib/r3k_dump_tlb.c
index 8327698..91615c2 100644
--- a/arch/mips/lib/r3k_dump_tlb.c
+++ b/arch/mips/lib/r3k_dump_tlb.c
@@ -9,7 +9,6 @@
 #include <linux/mm.h>
 
 #include <asm/mipsregs.h>
-#include <asm/mmu_context.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/tlbdebug.h>
@@ -22,7 +21,7 @@ static void dump_tlb(int first, int last)
 	unsigned int asid;
 	unsigned long entryhi, entrylo0;
 
-	asid = ASID_MASK(read_c0_entryhi());
+	asid = read_c0_entryhi() & 0xfc0;
 
 	for (i = first; i <= last; i++) {
 		write_c0_index(i<<8);
@@ -36,7 +35,7 @@ static void dump_tlb(int first, int last)
 
 		/* Unused entries have a virtual address of KSEG0.  */
 		if ((entryhi & 0xffffe000) != 0x80000000
-		    && (ASID_MASK(entryhi) == asid)) {
+		    && (entryhi & 0xfc0) == asid) {
 			/*
 			 * Only print entries in use
 			 */
@@ -45,7 +44,7 @@ static void dump_tlb(int first, int last)
 			printk("va=%08lx asid=%08lx"
 			       "  [pa=%06lx n=%d d=%d v=%d g=%d]",
 			       (entryhi & 0xffffe000),
-			       ASID_MASK(entryhi),
+			       entryhi & 0xfc0,
 			       entrylo0 & PAGE_MASK,
 			       (entrylo0 & (1 << 11)) ? 1 : 0,
 			       (entrylo0 & (1 << 10)) ? 1 : 0,
diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
index 4a13c15..a63d1ed 100644
--- a/arch/mips/mm/tlb-r3k.c
+++ b/arch/mips/mm/tlb-r3k.c
@@ -51,7 +51,7 @@ void local_flush_tlb_all(void)
 #endif
 
 	local_irq_save(flags);
-	old_ctx = ASID_MASK(read_c0_entryhi());
+	old_ctx = read_c0_entryhi() & ASID_MASK;
 	write_c0_entrylo0(0);
 	entry = r3k_have_wired_reg ? read_c0_wired() : 8;
 	for (; entry < current_cpu_data.tlbsize; entry++) {
@@ -87,13 +87,13 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 #ifdef DEBUG_TLB
 		printk("[tlbrange<%lu,0x%08lx,0x%08lx>]",
-			ASID_MASK(cpu_context(cpu, mm)), start, end);
+			cpu_context(cpu, mm) & ASID_MASK, start, end);
 #endif
 		local_irq_save(flags);
 		size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
 		if (size <= current_cpu_data.tlbsize) {
-			int oldpid = ASID_MASK(read_c0_entryhi());
-			int newpid = ASID_MASK(cpu_context(cpu, mm));
+			int oldpid = read_c0_entryhi() & ASID_MASK;
+			int newpid = cpu_context(cpu, mm) & ASID_MASK;
 
 			start &= PAGE_MASK;
 			end += PAGE_SIZE - 1;
@@ -166,10 +166,10 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 #ifdef DEBUG_TLB
 		printk("[tlbpage<%lu,0x%08lx>]", cpu_context(cpu, vma->vm_mm), page);
 #endif
-		newpid = ASID_MASK(cpu_context(cpu, vma->vm_mm));
+		newpid = cpu_context(cpu, vma->vm_mm) & ASID_MASK;
 		page &= PAGE_MASK;
 		local_irq_save(flags);
-		oldpid = ASID_MASK(read_c0_entryhi());
+		oldpid = read_c0_entryhi() & ASID_MASK;
 		write_c0_entryhi(page | newpid);
 		BARRIER;
 		tlb_probe();
@@ -197,10 +197,10 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
 	if (current->active_mm != vma->vm_mm)
 		return;
 
-	pid = ASID_MASK(read_c0_entryhi());
+	pid = read_c0_entryhi() & ASID_MASK;
 
 #ifdef DEBUG_TLB
-	if ((pid != ASID_MASK(cpu_context(cpu, vma->vm_mm))) || (cpu_context(cpu, vma->vm_mm) == 0)) {
+	if ((pid != (cpu_context(cpu, vma->vm_mm) & ASID_MASK)) || (cpu_context(cpu, vma->vm_mm) == 0)) {
 		printk("update_mmu_cache: Wheee, bogus tlbpid mmpid=%lu tlbpid=%d\n",
 		       (cpu_context(cpu, vma->vm_mm)), pid);
 	}
@@ -241,7 +241,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 
 		local_irq_save(flags);
 		/* Save old context and create impossible VPN2 value */
-		old_ctx = ASID_MASK(read_c0_entryhi());
+		old_ctx = read_c0_entryhi() & ASID_MASK;
 		old_pagemask = read_c0_pagemask();
 		w = read_c0_wired();
 		write_c0_wired(w + 1);
@@ -264,7 +264,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 #endif
 
 		local_irq_save(flags);
-		old_ctx = ASID_MASK(read_c0_entryhi());
+		old_ctx = read_c0_entryhi() & ASID_MASK;
 		write_c0_entrylo0(entrylo0);
 		write_c0_entryhi(entryhi);
 		write_c0_index(wired);
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 09653b2..c643de4 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -287,7 +287,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 
 	ENTER_CRITICAL(flags);
 
-	pid = ASID_MASK(read_c0_entryhi());
+	pid = read_c0_entryhi() & ASID_MASK;
 	address &= (PAGE_MASK << 1);
 	write_c0_entryhi(address | pid);
 	pgdp = pgd_offset(vma->vm_mm, address);
diff --git a/arch/mips/mm/tlb-r8k.c b/arch/mips/mm/tlb-r8k.c
index 122f920..91c2499 100644
--- a/arch/mips/mm/tlb-r8k.c
+++ b/arch/mips/mm/tlb-r8k.c
@@ -195,7 +195,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	if (current->active_mm != vma->vm_mm)
 		return;
 
-	pid = ASID_MASK(read_c0_entryhi());
+	pid = read_c0_entryhi() & ASID_MASK;
 
 	local_irq_save(flags);
 	address &= PAGE_MASK;
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 2ad41e9..017124f 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -29,7 +29,6 @@
 #include <linux/init.h>
 #include <linux/cache.h>
 
-#include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 #include <asm/pgtable.h>
 #include <asm/war.h>
@@ -306,48 +305,6 @@ static struct uasm_reloc relocs[128] __cpuinitdata;
 static int check_for_high_segbits __cpuinitdata;
 #endif
 
-static void __cpuinit insn_fixup(unsigned int **start, unsigned int **stop,
-					unsigned int i_const)
-{
-	unsigned int **p, *ip;
-
-	for (p = start; p < stop; p++) {
-		ip = *p;
-		*ip = (*ip & 0xffff0000) | i_const;
-	}
-	local_flush_icache_range((unsigned long)*p, (unsigned long)((*p) + 1));
-}
-
-#define asid_insn_fixup(section, const)					\
-do {									\
-	extern unsigned int *__start_ ## section;			\
-	extern unsigned int *__stop_ ## section;			\
-	insn_fixup(&__start_ ## section, &__stop_ ## section, const);	\
-} while(0)
-
-/*
- * Caller is assumed to flush the caches before the first context switch.
- */
-static void __cpuinit setup_asid(unsigned int inc, unsigned int mask,
-				 unsigned int version_mask,
-				 unsigned int first_version)
-{
-	extern asmlinkage void handle_ri_rdhwr_vivt(void);
-	unsigned long *vivt_exc;
-
-	asid_insn_fixup(__asid_inc, inc);
-	asid_insn_fixup(__asid_mask, mask);
-	asid_insn_fixup(__asid_version_mask, version_mask);
-	asid_insn_fixup(__asid_first_version, first_version);
-
-	/* Patch up the 'handle_ri_rdhwr_vivt' handler. */
-	vivt_exc = (unsigned long *) &handle_ri_rdhwr_vivt;
-	vivt_exc++;
-	*vivt_exc = (*vivt_exc & ~mask) | mask;
-
-	current_cpu_data.asid_cache = first_version;
-}
-
 static int check_for_high_segbits __cpuinitdata;
 
 static unsigned int kscratch_used_mask __cpuinitdata;
@@ -2226,9 +2183,7 @@ void __cpuinit build_tlb_refill_handler(void)
 	case CPU_TX3922:
 	case CPU_TX3927:
 #ifndef CONFIG_MIPS_PGD_C0_CONTEXT
-		setup_asid(0x40, 0xfc0, 0xf000, ASID_FIRST_VERSION_R3000);
-		if (cpu_has_local_ebase)
-			build_r3000_tlb_refill_handler();
+		build_r3000_tlb_refill_handler();
 		if (!run_once) {
 			if (!cpu_has_local_ebase)
 				build_r3000_tlb_refill_handler();
@@ -2252,11 +2207,6 @@ void __cpuinit build_tlb_refill_handler(void)
 		break;
 
 	default:
-#ifndef CONFIG_MIPS_MT_SMTC
-		setup_asid(0x1, 0xff, 0xff00, ASID_FIRST_VERSION_R4000);
-#else
-		setup_asid(0x1, smtc_asid_mask, 0xff00, ASID_FIRST_VERSION_R4000);
-#endif
 		if (!run_once) {
 			scratch_reg = allocate_kscratch();
 #ifdef CONFIG_MIPS_PGD_C0_CONTEXT
-- 
1.7.11.3


* [PATCH 03/18] KVM/MIPS32: Export min_low_pfn.
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

The KVM module uses the standard MIPS cache management routines, which use
min_low_pfn.  This creates an indirect dependency, requiring min_low_pfn to be
exported (the failure mode is illustrated in the sketch below).
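
To illustrate the failure mode, a made-up demo module (not part of the patch):
any module-context reference to min_low_pfn fails at load time with
"Unknown symbol min_low_pfn" unless the symbol is exported, which is what the
KVM module would hit indirectly through the cache routines.

#include <linux/module.h>
#include <linux/bootmem.h>	/* declares min_low_pfn */

static int __init minpfn_demo_init(void)
{
	/* Any use of min_low_pfn from a module needs EXPORT_SYMBOL() */
	pr_info("first low pfn: %lu\n", min_low_pfn);
	return 0;
}

static void __exit minpfn_demo_exit(void)
{
}

module_init(minpfn_demo_init);
module_exit(minpfn_demo_exit);
MODULE_LICENSE("GPL");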

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kernel/mips_ksyms.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/mips/kernel/mips_ksyms.c b/arch/mips/kernel/mips_ksyms.c
index 6e58e97..0299472 100644
--- a/arch/mips/kernel/mips_ksyms.c
+++ b/arch/mips/kernel/mips_ksyms.c
@@ -14,6 +14,7 @@
 #include <linux/mm.h>
 #include <asm/uaccess.h>
 #include <asm/ftrace.h>
+#include <linux/bootmem.h>
 
 extern void *__bzero(void *__s, size_t __count);
 extern long __strncpy_from_user_nocheck_asm(char *__to,
@@ -60,3 +61,8 @@ EXPORT_SYMBOL(invalid_pte_table);
 /* _mcount is defined in arch/mips/kernel/mcount.S */
 EXPORT_SYMBOL(_mcount);
 #endif
+
+/* The KVM module uses the standard MIPS cache functions which use
+ * min_low_pfn, requiring it to be exported.
+ */
+EXPORT_SYMBOL(min_low_pfn);
-- 
1.7.11.3


* [PATCH 04/18] KVM/MIPS32-VZ: MIPS VZ-ASE related register defines and helper macros.
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal


Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/include/asm/mipsvzregs.h | 494 +++++++++++++++++++++++++++++++++++++
 1 file changed, 494 insertions(+)
 create mode 100644 arch/mips/include/asm/mipsvzregs.h

diff --git a/arch/mips/include/asm/mipsvzregs.h b/arch/mips/include/asm/mipsvzregs.h
new file mode 100644
index 0000000..84b94b4
--- /dev/null
+++ b/arch/mips/include/asm/mipsvzregs.h
@@ -0,0 +1,494 @@
+/*
+* This file is subject to the terms and conditions of the GNU General Public
+* License.  See the file "COPYING" in the main directory of this archive
+* for more details.
+*
+* MIPS VZ-ASE  related register defines and helper macros
+*
+* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+* Authors: Yann Le Du <ledu@kymasys.com>
+*/
+
+
+/*
+ * VZ regs definitions, follows on from mipsregs.h
+ */
+
+#ifndef _ASM_MIPSVZREGS_H
+#define _ASM_MIPSVZREGS_H
+
+#include <asm/mipsregs.h>
+#include <asm/war.h>
+
+#ifndef __ASSEMBLY__
+
+/*
+ * C macros
+ */
+
+#define read_c0_guestctl0()	__read_32bit_c0_register($12, 6)
+#define write_c0_guestctl0(val) __write_32bit_c0_register($12, 6, val)
+#define read_c0_guestctl1()	__read_32bit_c0_register($10, 4)
+#define write_c0_guestctl1(val)	__write_32bit_c0_register($10, 4, val)
+#define read_c0_guestctl2()	__read_32bit_c0_register($10, 5)
+#define write_c0_guestctl2(val)	__write_32bit_c0_register($10, 5, val)
+#define read_c0_gtoffset()	__read_32bit_c0_register($12, 7)
+#define write_c0_gtoffset(val) __write_32bit_c0_register($12, 7, val)
+
+__BUILD_SET_C0(guestctl1)
+__BUILD_SET_C0(guestctl2)
+
+#else /* Assembly */
+/*
+ * Macros for use in assembly language code
+ */
+
+#define CP0_GUESTCTL0		$12,6
+#define CP0_GUESTCTL1		$10,4
+#define CP0_GTOFFSET		$12,7
+
+#endif
+
+/* GuestCtl0 fields */
+#define GUESTCTL0_GM_SHIFT	31
+#define GUESTCTL0_GM		(_ULCAST_(1) << GUESTCTL0_GM_SHIFT)
+#define GUESTCTL0_CP0_SHIFT	28
+#define GUESTCTL0_CP0		(_ULCAST_(1) << GUESTCTL0_CP0_SHIFT)
+#define GUESTCTL0_AT_SHIFT	26
+#define GUESTCTL0_AT 		(_ULCAST_(0x3) << GUESTCTL0_AT_SHIFT)
+#define GUESTCTL0_AT3		(_ULCAST_(3) << GUESTCTL0_AT_SHIFT)
+#define GUESTCTL0_AT1		(_ULCAST_(1) << GUESTCTL0_AT_SHIFT)
+#define GUESTCTL0_GT_SHIFT	25
+#define GUESTCTL0_GT		(_ULCAST_(1) << GUESTCTL0_GT_SHIFT)
+#define GUESTCTL0_CG_SHIFT	24
+#define GUESTCTL0_CG		(_ULCAST_(1) << GUESTCTL0_CG_SHIFT)
+#define GUESTCTL0_CF_SHIFT	23
+#define GUESTCTL0_CF		(_ULCAST_(1) << GUESTCTL0_CF_SHIFT)
+#define GUESTCTL0_G1_SHIFT	22
+#define GUESTCTL0_G1		(_ULCAST_(1) << GUESTCTL0_G1_SHIFT)
+#define GUESTCTL0_RAD_SHIFT	9
+#define GUESTCTL0_RAD		(_ULCAST_(1) << GUESTCTL0_RAD_SHIFT)
+#define GUESTCTL0_DRG_SHIFT	8
+#define GUESTCTL0_DRG		(_ULCAST_(1) << GUESTCTL0_DRG_SHIFT)
+#define GUESTCTL0_G2_SHIFT	7
+#define GUESTCTL0_G2		(_ULCAST_(1) << GUESTCTL0_G2_SHIFT)
+
+/* GuestCtl0.GExcCode Hypervisor exception cause code */
+#define GUESTCTL0_GEXC_SHIFT	2
+#define GUESTCTL0_GEXC		(_ULCAST_(0x1f) << GUESTCTL0_GEXC_SHIFT)
+#define GUESTCTL0_GEXC_GPSI	0  /* Guest Privileged Sensitive Instruction */
+#define GUESTCTL0_GEXC_GSFC 	1  /* Guest Software Field Change */
+#define GUESTCTL0_GEXC_HC 	2  /* Hypercall */
+#define GUESTCTL0_GEXC_GRR	3  /* Guest Reserved Instruction Redirect */
+#define GUESTCTL0_GEXC_GVA	8  /* Guest Virtual Address available */
+#define GUESTCTL0_GEXC_GHFC 	9  /* Guest Hardware Field Change */
+#define GUESTCTL0_GEXC_GPA	10 /* Guest Physical Address available */
+
+/* GuestCtl1 fields */
+#define GUESTCTL1_ID_SHIFT	0
+#define GUESTCTL1_ID_WIDTH	8
+#define GUESTCTL1_ID		(_ULCAST_(0xff) << GUESTCTL1_ID_SHIFT)
+#define GUESTCTL1_RID_SHIFT	16
+#define GUESTCTL1_RID_WIDTH	8
+#define GUESTCTL1_RID		(_ULCAST_(0xff) << GUESTCTL1_RID_SHIFT)
+
+/* VZ GuestID reserved for root context */
+#define GUESTCTL1_VZ_ROOT_GUESTID   0x00 
+
+/* entryhi fields */
+#define ENTRYHI_EHINV_SHIFT	10
+#define ENTRYHI_EHINV		(_ULCAST_(1) << ENTRYHI_EHINV_SHIFT)
+
+#ifndef __ASSEMBLY__
+
+#define mfgc0(rd,sel)							\
+({									\
+	 unsigned long  __res;						\
+									\
+	__asm__ __volatile__(						\
+	"	.set	push					\n"	\
+	"	.set	mips32r2				\n"	\
+	"	.set	noat					\n"	\
+	"	# mfgc0	$1, $" #rd ", " #sel "			\n"	\
+	"	.word	0x40600000 | (1<<16) | (" #rd "<<11) | " #sel "	\n"	\
+	"	move	%0, $1					\n"	\
+	"	.set	pop					\n"	\
+	: "=r" (__res));						\
+									\
+	__res;								\
+})
+
+#define mtgc0(rd, sel, v)							\
+({									\
+	__asm__ __volatile__(						\
+	"	.set	push					\n"	\
+	"	.set	mips32r2				\n"	\
+	"	.set	noat					\n"	\
+	"	move	$1, %0					\n"	\
+	"	# mtgc0 $1," #rd ", " #sel "			\n"	\
+	"	.word	0x40600200 | (1<<16) | (" #rd "<<11) | " #sel "	\n"	\
+	"	.set	pop					\n"	\
+	:								\
+	: "r" (v));							\
+})
+
+static inline void tlb_write_guest_indexed(void)
+{
+	__asm__ __volatile__(
+	"	.set	push\n"
+	"	.set	noreorder\n"
+	"	.set	mips32r2\n"
+	"	.word	0x4200000a  # tlbgwi ASM_TLBGWI \n"
+	"	.set	reorder\n"
+	"	.set	pop\n");
+}
+
+static inline void tlb_invalidate_flush(void)
+{
+	__asm__ __volatile__(
+	"	.set	push						\n"
+	"	.set	mips32r2					\n"
+	"	.set	noreorder					\n"
+	"	.word	0x42000004  # tlbinvf ASM_TLBINVF		\n"
+	"	.set	pop						\n"
+	);
+}
+
+static inline void tlb_guest_invalidate_flush(void)
+{
+	__asm__ __volatile__(
+	"	.set	push						\n"
+	"	.set	mips32r2					\n"
+	"	.set	noreorder					\n"
+	"	.word	0x4200000c  # tlbginvf ASM_TLBGINVF		\n"
+	"	.set	pop						\n"
+	);
+}
+
+#define read_c0_guest_index()           mfgc0(0, 0)
+#define write_c0_guest_index(val)       mtgc0(0, 0, val)
+#define read_c0_guest_random()          mfgc0(1, 0)
+#define read_c0_guest_entrylo0()        mfgc0(2, 0)
+#define write_c0_guest_entrylo0(val)    mtgc0(2, 0, val)
+#define read_c0_guest_entrylo1()        mfgc0(3, 0)
+#define write_c0_guest_entrylo1(val)    mtgc0(3, 0, val)
+#define read_c0_guest_context()         mfgc0(4, 0)
+#define write_c0_guest_context(val)     mtgc0(4, 0, val)
+#define read_c0_guest_userlocal()       mfgc0(4, 2)
+#define write_c0_guest_userlocal(val)   mtgc0(4, 2, val)
+#define read_c0_guest_pagemask()        mfgc0(5, 0)
+#define write_c0_guest_pagemask(val)    mtgc0(5, 0, val)
+#define read_c0_guest_pagegrain()       mfgc0(5, 1)
+#define write_c0_guest_pagegrain(val)   mtgc0(5, 1, val)
+#define read_c0_guest_wired()           mfgc0(6, 0)
+#define write_c0_guest_wired(val)       mtgc0(6, 0, val)
+#define read_c0_guest_hwrena()          mfgc0(7, 0)
+#define write_c0_guest_hwrena(val)      mtgc0(7, 0, val)
+#define read_c0_guest_badvaddr()        mfgc0(8, 0)
+#define write_c0_guest_badvaddr(val)    mtgc0(8, 0, val)
+#define read_c0_guest_count()           mfgc0(9, 0)
+#define write_c0_guest_count(val)       mtgc0(9, 0, val)
+#define read_c0_guest_entryhi()         mfgc0(10, 0)
+#define write_c0_guest_entryhi(val)     mtgc0(10, 0, val)
+#define read_c0_guest_compare()         mfgc0(11, 0)
+#define write_c0_guest_compare(val)     mtgc0(11, 0, val)
+#define read_c0_guest_status()          mfgc0(12, 0)
+#define write_c0_guest_status(val)      mtgc0(12, 0, val)
+#define read_c0_guest_intctl()          mfgc0(12, 1)
+#define write_c0_guest_intctl(val)      mtgc0(12, 1, val)
+#define read_c0_guest_cause()           mfgc0(13, 0)
+#define write_c0_guest_cause(val)       mtgc0(13, 0, val)
+#define read_c0_guest_epc()             mfgc0(14, 0)
+#define write_c0_guest_epc(val)         mtgc0(14, 0, val)
+#define read_c0_guest_ebase()           mfgc0(15, 1)
+#define write_c0_guest_ebase(val)       mtgc0(15, 1, val)
+#define read_c0_guest_config()          mfgc0(16, 0)
+#define read_c0_guest_config1()         mfgc0(16, 1)
+#define read_c0_guest_config2()         mfgc0(16, 2)
+#define read_c0_guest_config3()         mfgc0(16, 3)
+#define read_c0_guest_config4()         mfgc0(16, 4)
+#define read_c0_guest_config5()         mfgc0(16, 5)
+#define read_c0_guest_config6()         mfgc0(16, 6)
+#define read_c0_guest_config7()         mfgc0(16, 7)
+#define write_c0_guest_config(val)      mtgc0(16, 0, val)
+#define write_c0_guest_config1(val)     mtgc0(16, 1, val)
+#define write_c0_guest_config2(val)     mtgc0(16, 2, val)
+#define write_c0_guest_config3(val)     mtgc0(16, 3, val)
+#define write_c0_guest_config4(val)     mtgc0(16, 4, val)
+#define write_c0_guest_config5(val)     mtgc0(16, 5, val)
+#define write_c0_guest_config6(val)     mtgc0(16, 6, val)
+#define write_c0_guest_config7(val)     mtgc0(16, 7, val)
+#define read_c0_guest_errorepc()        mfgc0(30, 0)
+#define write_c0_guest_errorepc(val)    mtgc0(30, 0, val)
+
+__BUILD_SET_C0(guest_status)
+__BUILD_SET_C0(guest_cause)
+__BUILD_SET_C0(guest_ebase)
+
+#else /* end not __ASSEMBLY__ */
+
+/*
+ *************************************************************************
+ *                S O F T W A R E   G P R   I N D I C E S                *
+ *************************************************************************
+ *
+ * These definitions provide the index (number) of the GPR, as opposed
+ * to the assembler register name ($n).
+ */
+
+#define R_zero                   0
+#define R_AT                     1
+#define R_v0                     2
+#define R_v1                     3
+#define R_a0                     4
+#define R_a1                     5
+#define R_a2                     6
+#define R_a3                     7
+#define R_t0                     8
+#define R_t1                     9
+#define R_t2                    10
+#define R_t3                    11
+#define R_t4                    12
+#define R_t5                    13
+#define R_t6                    14
+#define R_t7                    15
+#define R_s0                    16
+#define R_s1                    17
+#define R_s2                    18
+#define R_s3                    19
+#define R_s4                    20
+#define R_s5                    21
+#define R_s6                    22
+#define R_s7                    23
+#define R_t8                    24
+#define R_t9                    25
+#define R_repc                  25
+#define R_k0                    26
+#define R_k1                    27
+#define R_gp                    28
+#define R_sp                    29
+#define R_fp                    30
+#define R_s8                    30
+#define R_tid                   30
+#define R_ra                    31
+#define R_hi                    32                      /* Hi register */
+#define R_lo                    33                      /* Lo register */
+
+/*
+ *************************************************************************
+ *             C P 0   R E G I S T E R   D E F I N I T I O N S           *
+ *************************************************************************
+ * Each register has the following definitions:
+ *
+ *      C0_rrr          The register number (as a $n value)
+ *      R_C0_rrr        The register index (as an integer corresponding
+ *                      to the register number)
+ *      R_C0_Selrrr     The register select (as an integer corresponding
+ *                      to the register select)
+ *
+ * Each field in a register has the following definitions:
+ *
+ *      S_rrrfff        The shift count required to right-justify
+ *                      the field.  This corresponds to the bit
+ *                      number of the right-most bit in the field.
+ *      M_rrrfff        The Mask required to isolate the field.
+ *
+ * Register diagrams included below as comments correspond to the
+ * MIPS32 and MIPS64 architecture specifications.  Refer to other
+ * sources for register diagrams for older architectures.
+ */
+#define R_CP0_INDEX              0
+#define R_CP0_SELINDEX           0
+#define R_CP0_RANDOM             1
+#define R_CP0_SELRANDOM          0
+#define R_CP0_ENTRYLO0           2
+#define R_CP0_SELENTRYLO0        0
+#define R_CP0_ENTRYLO1           3
+#define R_CP0_SELENTRYLO1        0
+#define R_CP0_CONTEXT            4
+#define R_CP0_SELCONTEXT         0
+#define R_CP0_CONTEXTCONFIG      4       /* Overload */
+#define R_CP0_SELCONTEXTCONFIG   1
+#define R_CP0_XCONTEXTCONFIG     4
+#define R_CP0_SELXCONTEXTCONFIG  3
+#define R_CP0_USERLOCAL          4
+#define R_CP0_SELUSERLOCAL       2
+#define R_CP0_PAGEMASK           5                       /* Mask (R/W) */
+#define R_CP0_SELPAGEMASK        0
+#define R_CP0_PAGEGRAIN          5                       /* Mask (R/W) */
+#define R_CP0_SELPAGEGRAIN       1
+#define R_CP0_WIRED              6
+#define R_CP0_SELWIRED           0
+#define R_CP0_HWRENA             7
+#define R_CP0_SELHWRENA          0
+#define R_CP0_BADVADDR           8
+#define R_CP0_SELBADVADDR        0
+#define R_CP0_BADINSTR           8
+#define R_CP0_SELBADINSTR        1
+#define R_CP0_BADINSTRP          8
+#define R_CP0_SELBADINSTRP       2
+#define R_CP0_COUNT              9
+#define R_CP0_SELCOUNT           0
+#define R_CP0_ENTRYHI            10
+#define R_CP0_SELENTRYHI         0
+#define R_CP0_COMPARE            11
+#define R_CP0_SELCOMPARE         0
+#define R_CP0_STATUS             12
+#define R_CP0_SELSTATUS          0
+#define R_CP0_INTCTL             12
+#define R_CP0_SELINTCTL          1
+#define R_CP0_SRSCTL             12
+#define R_CP0_SELSRSCTL          2
+#define R_CP0_SRSMAP             12
+#define R_CP0_SELSRSMAP          3
+#define R_CP0_VIEW_IPL           12
+#define R_CP0_SELVIEW_IPL        4
+#define R_CP0_SRSMAP2            12
+#define R_CP0_SELSRSMAP2         5
+#define R_CP0_CAUSE              13
+#define R_CP0_SELCAUSE           0
+#define R_CP0_VIEW_RIPL          13
+#define R_CP0_SELVIEW_RIPL       4
+#define R_CP0_EPC                14
+#define R_CP0_SELEPC             0
+#define R_CP0_PRID               15
+#define R_CP0_SELPRID            0
+#define R_CP0_EBASE              15
+#define R_CP0_SELEBASE           1
+#define R_CP0_CDMMBASE           15
+#define R_CP0_SELCDMMBASE        2
+#define R_CP0_CMGCRBASE          15
+#define R_CP0_SELCMGCRBASE       3
+#define R_CP0_CONFIG             16
+#define R_CP0_SELCONFIG          0
+#define R_CP0_CONFIG1            16
+#define R_CP0_SELCONFIG1         1
+#define R_CP0_CONFIG2            16
+#define R_CP0_SELCONFIG2         2
+#define R_CP0_CONFIG3            16
+#define R_CP0_SELCONFIG3         3
+#define R_CP0_CONFIG4            16
+#define R_CP0_SELCONFIG4         4
+#define R_CP0_CONFIG6            16
+#define R_CP0_SELCONFIG6         6
+#define R_CP0_CONFIG7            16
+#define R_CP0_SELCONFIG7         7
+#define R_CP0_LLADDR             17
+#define R_CP0_SELLLADDR          0
+#define R_CP0_WATCHLO            18
+#define R_CP0_SELWATCHLO         0
+#define R_CP0_WATCHHI            19
+#define R_CP0_SELWATCHHI         0
+#define R_CP0_XCONTEXT           20
+#define R_CP0_SELXCONTEXT        0
+#define R_CP0_DEBUG              23
+#define R_CP0_SELDEBUG           0
+#define R_CP0_TRACECONTROL       23
+#define R_CP0_SELTRACECONTROL    1
+#define R_CP0_TRACECONTROL2      23
+#define R_CP0_SELTRACECONTROL2   2
+#define R_CP0_USERTRACEDATA      23
+#define R_CP0_SELUSERTRACEDATA   3
+#define R_CP0_USERTRACEDATA2     24
+#define R_CP0_SELUSERTRACEDATA2  3
+#define R_CP0_TRACEBPC           23
+#define R_CP0_SELTRACEBPC        4
+#define R_CP0_TRACEIBPC          23
+#define R_CP0_SELTRACEIBPC       4
+#define R_CP0_TRACEDBPC          23
+#define R_CP0_SELTRACEDBPC       5
+#define R_CP0_DEBUG2             23
+#define R_CP0_SELDEBUG2          6
+#define R_CP0_TRACECONTROL3      24
+#define R_CP0_SELTRACECONTROL3   2
+#define R_CP0_DEPC               24
+#define R_CP0_SELDEPC            0
+#define R_CP0_PERFCNT            25
+#define R_CP0_SELPERFCNT         0
+#define R_CP0_SELPERFCNT0        1
+#define R_CP0_SELPERFCNT1        3
+#define R_CP0_SELPERFCNT2        5
+#define R_CP0_SELPERFCNT3        7
+#define R_CP0_PERFCTRL           25
+#define R_CP0_SELPERFCTRL0       0
+#define R_CP0_SELPERFCTRL1       2
+#define R_CP0_SELPERFCTRL2       4
+#define R_CP0_SELPERFCTRL3       6
+#define R_CP0_ERRCTL             26
+#define R_CP0_SELERRCTL          0
+#define R_CP0_CACHEERR           27
+#define R_CP0_SELCACHEERR        0
+#define R_CP0_TAGLO              28
+#define R_CP0_SELTAGLO           0
+#define R_CP0_ITAGLO             28
+#define R_CP0_SELITAGLO          0
+#define R_CP0_DTAGLO             28
+#define R_CP0_SELDTAGLO          2
+#define R_CP0_STAGLO             28
+#define R_CP0_SELSTAGLO          4
+#define R_CP0_DATALO             28
+#define R_CP0_SELDATALO          1
+#define R_CP0_IDATALO            28
+#define R_CP0_SELIDATALO         1
+#define R_CP0_DDATALO            28
+#define R_CP0_SELDDATALO         3
+#define R_CP0_SDATALO            28
+#define R_CP0_SELSDATALO         5
+#define R_CP0_TAGHI              29
+#define R_CP0_SELTAGHI           0
+#define R_CP0_ITAGHI             29
+#define R_CP0_SELITAGHI          0
+#define R_CP0_DTAGHI             29
+#define R_CP0_SELDTAGHI          2
+#define R_CP0_STAGHI             29
+#define R_CP0_SELSTAGHI          4
+#define R_CP0_DATAHI             29
+#define R_CP0_SELDATAHI          1
+#define R_CP0_IDATAHI            29
+#define R_CP0_SELIDATAHI         1
+#define R_CP0_DDATAHI            29
+#define R_CP0_SELDDATAHI         3
+#define R_CP0_SDATAHI            29
+#define R_CP0_SELSDATAHI         5
+#define R_CP0_ERROREPC           30
+#define R_CP0_SELERROREPC        0
+#define R_CP0_DESAVE             31
+#define R_CP0_SELDESAVE          0
+
+#define R_CP0_GUESTCTL0          12
+#define R_CP0_SELGUESTCTL0       6
+#define R_CP0_GUESTCTL1          10
+#define R_CP0_SELGUESTCTL1       4
+#define R_CP0_GTOFFSET           12
+#define R_CP0_SELGTOFFSET        7
+#define R_CP0_SEGCTL0            5
+#define R_CP0_SELSEGCTL0         2
+#define R_CP0_SEGCTL1            5
+#define R_CP0_SELSEGCTL1         3
+#define R_CP0_SEGCTL2            5
+#define R_CP0_SELSEGCTL2         4
+
+#define R_CP0_KSCRATCH1          31
+#define R_CP0_SELKSCRATCH1       2
+#define R_CP0_KSCRATCH2          31
+#define R_CP0_SELKSCRATCH2       3
+
+/* Use these ASM_* macros with the above R_* register constants for rt,rd */
+
+// VZ ASE Extensions
+#define ASM_HYPCALL(code)    .word( 0x42000028 | (code<<6) )
+#define ASM_MFGC0(rt,rd,sel) .word( 0x40600000 | (rt<<16) | (rd<<11) | sel )
+#define ASM_MTGC0(rt,rd,sel) .word( 0x40600200 | (rt<<16) | (rd<<11) | sel )
+#define ASM_TLBGP            .word( 0x42000010 )
+#define ASM_TLBGR            .word( 0x42000009 )
+#define ASM_TLBGWI           .word( 0x4200000a )
+#define ASM_TLBGINV          .word( 0x4200000b )
+#define ASM_TLBGINVF         .word( 0x4200000c )
+#define ASM_TLBGWR           .word( 0x4200000e )
+
+// Config4.IE TLB invalidate instructions
+#define ASM_TLBINV           .word( 0x42000003 )
+#define ASM_TLBINVF          .word( 0x42000004 )
+
+#endif
+#endif
-- 
1.7.11.3


* [PATCH 05/18] KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal


Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_vz_locore.S | 74 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)
 create mode 100644 arch/mips/kvm/kvm_vz_locore.S

diff --git a/arch/mips/kvm/kvm_vz_locore.S b/arch/mips/kvm/kvm_vz_locore.S
new file mode 100644
index 0000000..6d037d7
--- /dev/null
+++ b/arch/mips/kvm/kvm_vz_locore.S
@@ -0,0 +1,74 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * KVM/MIPS: Assembler support for hardware virtualization extensions
+ *
+ * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+ * Authors: Yann Le Du <ledu@kymasys.com>
+ */
+
+#include <asm/asm.h>
+#include <asm/asmmacro.h>
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+#include <asm/asm-offsets.h>
+#include <asm/mipsvzregs.h>
+
+#define MIPSX(name)	mips32_ ## name
+
+/* 
+ * This routine sets GuestCtl1.RID to GUESTCTL1_VZ_ROOT_GUESTID
+ * Inputs: none
+ */
+LEAF(MIPSX(ClearGuestRID))
+	.set	push
+	.set	mips32r2
+	.set	noreorder
+	mfc0	t0, CP0_GUESTCTL1
+	addiu	t1, zero, GUESTCTL1_VZ_ROOT_GUESTID
+	ins	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1 # Set GuestCtl1.RID = GUESTCTL1_VZ_ROOT_GUESTID
+	ehb
+	j	ra
+	nop					# BD Slot
+	.set    pop
+END(MIPSX(ClearGuestRID))
+
+
+/* 
+ * This routine sets GuestCtl1.RID to a new value
+ * Inputs: a0 = new GuestRID value (right aligned)
+ */
+LEAF(MIPSX(SetGuestRID))
+	.set	push
+	.set	mips32r2
+	.set	noreorder
+	mfc0	t0, CP0_GUESTCTL1
+	ins 	t0, a0, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1		# Set GuestCtl1.RID
+	ehb
+	j	ra
+	nop					# BD Slot
+	.set	pop
+END(MIPSX(SetGuestRID))
+
+
+	/*
+	 * This routine sets GuestCtl1.RID to GuestCtl1.ID
+	 * Inputs: none
+	 */
+LEAF(MIPSX(SetGuestRIDtoGuestID))
+	.set	push
+	.set	mips32r2
+	.set	noreorder
+	mfc0	t0, CP0_GUESTCTL1		# Get current GuestID
+	ext 	t1, t0, GUESTCTL1_ID_SHIFT, GUESTCTL1_ID_WIDTH
+	ins 	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1		# Set GuestCtl1.RID = GuestCtl1.ID
+	ehb
+	j	ra
+	nop 					# BD Slot
+	.set	pop
+END(MIPSX(SetGuestRIDtoGuestID))
-- 
1.7.11.3


* [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
From: Sanjay Lal @ 2013-05-19  5:47 UTC
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

The VZ-ASE provides the Guest with its own COP0 context, so far fewer types of
exceptions trap to the Root context than in the trap-and-emulate case:

- Root level TLB miss handlers that map GPAs to RPAs.
- Guest exits (a dispatch sketch follows the list).
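
For orientation, a hypothetical sketch (not the handler code in this patch) of
dispatching a guest exit on GuestCtl0.GExcCode, using the register accessor and
field definitions that patch 04 adds in mipsvzregs.h:

#include <asm/mipsvzregs.h>

/* Hypothetical classifier: read GuestCtl0 and extract GExcCode. */
static unsigned int demo_classify_guest_exit(void)
{
	unsigned long guestctl0 = read_c0_guestctl0();
	unsigned int gexccode =
		(guestctl0 & GUESTCTL0_GEXC) >> GUESTCTL0_GEXC_SHIFT;

	switch (gexccode) {
	case GUESTCTL0_GEXC_GPSI:
		/* guest privileged sensitive instruction: emulate it */
		break;
	case GUESTCTL0_GEXC_GSFC:
		/* guest software field change: validate and apply */
		break;
	case GUESTCTL0_GEXC_HC:
		/* hypercall from the guest */
		break;
	default:
		/* unexpected cause: dump state, exit to userspace */
		break;
	}
	return gexccode;
}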

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_vz.c | 786 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 786 insertions(+)
 create mode 100644 arch/mips/kvm/kvm_vz.c

diff --git a/arch/mips/kvm/kvm_vz.c b/arch/mips/kvm/kvm_vz.c
new file mode 100644
index 0000000..e85a497
--- /dev/null
+++ b/arch/mips/kvm/kvm_vz.c
@@ -0,0 +1,786 @@
+/*
+* This file is subject to the terms and conditions of the GNU General Public
+* License.  See the file "COPYING" in the main directory of this archive
+* for more details.
+*
+* KVM/MIPS: Support for hardware virtualization extensions
+*
+* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+* Authors: Yann Le Du <ledu@kymasys.com>
+*/
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <asm/cacheflush.h>
+#include <asm/mipsvzregs.h>
+#include <asm/inst.h>
+
+#include <linux/kvm_host.h>
+
+#include "kvm_mips_opcode.h"
+#include "kvm_mips_int.h"
+
+#include "trace.h"
+
+static gpa_t kvm_vz_gva_to_gpa_cb(gva_t gva)
+{
+	/* VZ guest has already converted gva to gpa */
+	return gva;
+}
+
+void kvm_vz_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+{
+	set_bit(priority, &vcpu->arch.pending_exceptions);
+	clear_bit(priority, &vcpu->arch.pending_exceptions_clr);
+}
+
+void kvm_vz_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
+{
+	clear_bit(priority, &vcpu->arch.pending_exceptions);
+	set_bit(priority, &vcpu->arch.pending_exceptions_clr);
+}
+
+void kvm_vz_queue_timer_int_cb(struct kvm_vcpu *vcpu)
+{
+	/* timer expiry is asynchronous to vcpu execution therefore defer guest
+	 * cp0 accesses */
+	kvm_vz_queue_irq(vcpu, MIPS_EXC_INT_TIMER);
+}
+
+void kvm_vz_dequeue_timer_int_cb(struct kvm_vcpu *vcpu)
+{
+	/* timer expiry is asynchronous to vcpu execution therefore defer guest
+	 * cp0 accesses */
+	kvm_vz_dequeue_irq(vcpu, MIPS_EXC_INT_TIMER);
+}
+
+void
+kvm_vz_queue_io_int_cb(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq)
+{
+	int intr = (int)irq->irq;
+
+	/* interrupts are asynchronous to vcpu execution therefore defer guest
+	 * cp0 accesses */
+	switch (intr) {
+	case 2:
+		kvm_vz_queue_irq(vcpu, MIPS_EXC_INT_IO);
+		break;
+
+	case 3:
+		kvm_vz_queue_irq(vcpu, MIPS_EXC_INT_IPI_1);
+		break;
+
+	case 4:
+		kvm_vz_queue_irq(vcpu, MIPS_EXC_INT_IPI_2);
+		break;
+
+	default:
+		break;
+	}
+
+}
+
+void
+kvm_vz_dequeue_io_int_cb(struct kvm_vcpu *vcpu, struct kvm_mips_interrupt *irq)
+{
+	int intr = (int)irq->irq;
+
+	/* interrupts are asynchronous to vcpu execution therefore defer guest
+	 * cp0 accesses */
+	switch (intr) {
+	case -2:
+		kvm_vz_dequeue_irq(vcpu, MIPS_EXC_INT_IO);
+		break;
+
+	case -3:
+		kvm_vz_dequeue_irq(vcpu, MIPS_EXC_INT_IPI_1);
+		break;
+
+	case -4:
+		kvm_vz_dequeue_irq(vcpu, MIPS_EXC_INT_IPI_2);
+		break;
+
+	default:
+		break;
+	}
+
+}
+
+static uint32_t kvm_vz_priority_to_irq[MIPS_EXC_MAX] = {
+	[MIPS_EXC_INT_TIMER] = C_TI,
+	[MIPS_EXC_INT_IO]    = C_IRQ0,
+	[MIPS_EXC_INT_IPI_1] = C_IRQ1,
+	[MIPS_EXC_INT_IPI_2] = C_IRQ2,
+};
+
+static int
+kvm_vz_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
+		      uint32_t cause)
+{
+	uint32_t irq = (priority < MIPS_EXC_MAX) ? 
+		kvm_vz_priority_to_irq[priority] : 0;
+
+	switch (priority) {
+	case MIPS_EXC_INT_TIMER:
+		kvm_set_c0_guest_cause(vcpu->arch.cop0, irq);
+		break;
+
+	case MIPS_EXC_INT_IO:
+	case MIPS_EXC_INT_IPI_1:
+	case MIPS_EXC_INT_IPI_2:
+		if (cpu_has_vzvirtirq)
+			set_c0_guestctl2(irq);
+		else
+			kvm_set_c0_guest_cause(vcpu->arch.cop0, irq);
+		break;
+
+	default:
+		break;
+	}
+
+	clear_bit(priority, &vcpu->arch.pending_exceptions);
+	return 1;
+}
+
+static int
+kvm_vz_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
+		    uint32_t cause)
+{
+	uint32_t irq = (priority < MIPS_EXC_MAX) ?
+		kvm_vz_priority_to_irq[priority] : 0;
+
+	switch (priority) {
+	case MIPS_EXC_INT_TIMER:
+		/* Call to kvm_write_c0_guest_compare clears Cause.TI in
+		 * kvm_mips_emulate_CP0. Explicitly clear irq associated with
+		 * Cause.IP[IPTI] if GuestCtl2 virtual interrupt register not
+		 * supported.
+		 */
+		if (!cpu_has_vzvirtirq)
+			kvm_clear_c0_guest_cause(vcpu->arch.cop0, (C_IRQ5));
+
+		break;
+
+	case MIPS_EXC_INT_IO:
+	case MIPS_EXC_INT_IPI_1:
+	case MIPS_EXC_INT_IPI_2:
+		if (cpu_has_vzvirtirq)
+			clear_c0_guestctl2(irq);
+		else
+			kvm_clear_c0_guest_cause(vcpu->arch.cop0, irq);
+		break;
+
+	default:
+		break;
+	}
+
+	clear_bit(priority, &vcpu->arch.pending_exceptions_clr);
+	return 1;
+}
+
+/*
+ * Restore Guest.Count, Guest.Compare and Guest.Cause taking care to
+ * preserve the value of Guest.Cause[TI] while restoring Guest.Cause.
+ *
+ * Follows the algorithm in VZ ASE specification - Section: Guest Timer.
+ */
+void
+kvm_vz_restore_guest_timer_int(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	ulong current_guest_count;
+	ulong saved_guest_cause = regs->cp0reg[MIPS_CP0_CAUSE][0];
+	ulong saved_guest_count = regs->cp0reg[MIPS_CP0_COUNT][0];
+	ulong saved_guest_compare = regs->cp0reg[MIPS_CP0_COMPARE][0];
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+
+	/* TODO VZ gtoffset not being set anywhere at the moment */
+	/* restore root gtoffset from unused Guest gtoffset register */
+	write_c0_gtoffset(regs->cp0reg[MIPS_CP0_STATUS][7]);
+	kvm_write_c0_guest_cause(cop0, saved_guest_cause);
+
+	/* after the following statement, the hardware might now set
+	 * Guest.Cause[TI] */
+	kvm_write_c0_guest_compare(cop0, saved_guest_compare);
+	current_guest_count = kvm_read_c0_guest_count(cop0);
+
+	/*
+	 * set Guest.Cause[TI] if it would have been set while the guest was
+	 * sleeping.  This code assumes that the counter has not completely
+	 * wrapped around while the guest was sleeping.
+	 */
+	if (current_guest_count > saved_guest_count) {
+		if ((saved_guest_compare > saved_guest_count)
+		    && (saved_guest_compare < current_guest_count)) {
+			kvm_write_c0_guest_cause(cop0,
+						 saved_guest_cause | C_TI);
+		}
+	} else {
+		/* The count has wrapped. Check to see if guest count has
+		 * passed the saved compare value */
+		if ((saved_guest_compare > saved_guest_count)
+		    || (saved_guest_compare < current_guest_count)) {
+			kvm_write_c0_guest_cause(cop0,
+						 saved_guest_cause | C_TI);
+		}
+	}
+}
+
+static int kvm_trap_vz_no_handler(struct kvm_vcpu *vcpu)
+{
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
+
+	kvm_err("Exception Code: %d not handled @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
+			exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
+			kvm_read_c0_guest_status(vcpu->arch.cop0));
+	kvm_arch_vcpu_dump_regs(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	return RESUME_HOST;
+}
+
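+/* Decode masks for trapped MTC0/MFC0 instructions: COP0MT/COP0MF match the
+ * opcode and MT/MF fields, while RT (bits 20:16), RD (bits 15:11) and SEL
+ * (bits 2:0) extract the operands of the coprocessor 0 access. */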
+#define COP0MT 0xffe007f8
+#define MTC0   0x40800000
+#define COP0MF 0xffe007f8
+#define MFC0   0x40000000
+#define RT     0x001f0000
+#define RD     0x0000f800
+#define SEL    0x00000007
+
+#define MIPS_CP0_INTCTL MIPS_CP0_STATUS
+
+enum emulation_result
+kvm_trap_vz_handle_gpsi(ulong cause, uint32_t *opc,
+			struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	struct kvm_run *run = vcpu->run;
+	uint32_t inst;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	if (cause & CAUSEF_BD)
+		opc += 1;
+
+	inst = kvm_get_inst(opc, vcpu);
+
+	switch (((union mips_instruction)inst).r_format.opcode) {
+	case cop0_op:
+		++vcpu->stat.hypervisor_gpsi_cp0_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GPSI_CP0_EXITS);
+		er = kvm_mips_emulate_CP0(inst, opc, cause, run, vcpu);
+		break;
+	case cache_op:
+		++vcpu->stat.hypervisor_gpsi_cache_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GPSI_CACHE_EXITS);
+		er = kvm_mips_emulate_cache(inst, opc, cause, run, vcpu);
+		break;
+
+	default:
+		kvm_err("GPSI exception not supported (%p/%#x)\n",
+				opc, inst);
+		kvm_arch_vcpu_dump_regs(vcpu);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
+
+enum emulation_result
+kvm_trap_vz_handle_gsfc(ulong cause, uint32_t *opc,
+			struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	uint32_t inst;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	if (cause & CAUSEF_BD)
+		opc += 1;
+
+	inst = kvm_get_inst(opc, vcpu);
+
+	/* complete MTC0 on behalf of guest and advance EPC */
+	if ((inst & COP0MT) == MTC0) {
+		int rt = (inst & RT) >> 16;
+		int val = arch->gprs[rt];
+		int rd = (inst & RD) >> 11;
+		int sel = (inst & SEL);
+
+		if ((rd == MIPS_CP0_STATUS) && (sel == 0)) {
+			++vcpu->stat.hypervisor_gsfc_cp0_status_exits;
+			trace_kvm_exit(vcpu, HYPERVISOR_GSFC_CP0_STATUS_EXITS);
+			write_c0_guest_status(val);
+		} else if ((rd == MIPS_CP0_CAUSE) && (sel == 0)) {
+			++vcpu->stat.hypervisor_gsfc_cp0_cause_exits;
+			trace_kvm_exit(vcpu, HYPERVISOR_GSFC_CP0_CAUSE_EXITS);
+			write_c0_guest_cause(val);
+		} else if ((rd == MIPS_CP0_INTCTL) && (sel == 1)) {
+			++vcpu->stat.hypervisor_gsfc_cp0_intctl_exits;
+			trace_kvm_exit(vcpu, HYPERVISOR_GSFC_CP0_INTCTL_EXITS);
+			write_c0_guest_intctl(val);
+		} else {
+			kvm_err("Handle GSFC, unsupported field change @ %p: %#x\n",
+			    opc, inst);
+			er = EMULATE_FAIL;
+		}
+
+		if (er != EMULATE_FAIL) {
+			er = update_pc(vcpu, cause);
+#ifdef DEBUG
+			kvm_debug(
+			    "[%#lx] MTGC0[%d][%d], vcpu->arch.gprs[%d]: %#lx\n",
+			    vcpu->arch.pc, rd, sel, rt, vcpu->arch.gprs[rt]);
+#endif
+		}
+	} else {
+		kvm_err("Handle GSFC, unrecognized instruction @ %p: %#x\n",
+			opc, inst);
+		er = EMULATE_FAIL;
+	}
+
+	return er;
+}
+
+enum emulation_result
+kvm_trap_vz_no_handler_guest_exit(int32_t gexccode, ulong cause,
+				  uint32_t *opc, struct kvm_vcpu *vcpu)
+{
+	uint32_t inst;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	if (cause & CAUSEF_BD)
+		opc += 1;
+
+	inst = kvm_get_inst(opc, vcpu);
+
+	kvm_err(
+	    "Guest Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  Status: %#lx\n",
+	    gexccode, opc, inst, kvm_read_c0_guest_status(vcpu->arch.cop0));
+
+	return EMULATE_FAIL;
+}
+
+static int kvm_trap_vz_handle_guest_exit(struct kvm_vcpu *vcpu)
+{
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	enum emulation_result er = EMULATE_DONE;
+	int32_t gexccode =
+	    (read_c0_guestctl0() & GUESTCTL0_GEXC) >> GUESTCTL0_GEXC_SHIFT;
+	int ret = RESUME_GUEST;
+
+#ifdef DEBUG
+	kvm_debug("Hypervisor Guest Exit. GExcCode %s\n",
+	       (gexccode == GUESTCTL0_GEXC_GPSI ? "GPSI" :
+		(gexccode == GUESTCTL0_GEXC_GSFC ? "GSFC" :
+		 (gexccode == GUESTCTL0_GEXC_HC ? "HC" :
+		  (gexccode == GUESTCTL0_GEXC_GRR ? "GRR" :
+		   (gexccode == GUESTCTL0_GEXC_GVA ? "GVA" :
+		    (gexccode == GUESTCTL0_GEXC_GHFC ? "GHFC" :
+		     (gexccode == GUESTCTL0_GEXC_GPA ? "GPA" :
+		      "RESV"))))))));
+#endif
+
+	switch (gexccode) {
+	case GUESTCTL0_GEXC_GPSI:
+		++vcpu->stat.hypervisor_gpsi_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GPSI_EXITS);
+		er = kvm_trap_vz_handle_gpsi(cause, opc, vcpu);
+		break;
+	case GUESTCTL0_GEXC_GSFC:
+		++vcpu->stat.hypervisor_gsfc_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GSFC_EXITS);
+		er = kvm_trap_vz_handle_gsfc(cause, opc, vcpu);
+		break;
+	case GUESTCTL0_GEXC_HC:
+		++vcpu->stat.hypervisor_hc_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_HC_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+	case GUESTCTL0_GEXC_GRR:
+		++vcpu->stat.hypervisor_grr_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GRR_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+	case GUESTCTL0_GEXC_GVA:
+		++vcpu->stat.hypervisor_gva_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GVA_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+	case GUESTCTL0_GEXC_GHFC:
+		++vcpu->stat.hypervisor_ghfc_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GHFC_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+	case GUESTCTL0_GEXC_GPA:
+		++vcpu->stat.hypervisor_gpa_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_GPA_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+	default:
+		++vcpu->stat.hypervisor_resv_exits;
+		trace_kvm_exit(vcpu, HYPERVISOR_RESV_EXITS);
+		er = kvm_trap_vz_no_handler_guest_exit(gexccode, cause, opc,
+						       vcpu);
+		break;
+
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else {
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}
+
+static int kvm_trap_vz_is_mmio_addrspace(struct kvm_vcpu *vcpu, ulong vaddr)
+{
+	gfn_t gfn = (vaddr >> PAGE_SHIFT);
+
+	/* KYMAXXX These MMIO flash address ranges are specific to the Malta
+	 * board. */
+	return (!kvm_is_visible_gfn(vcpu->kvm, gfn) ||
+		((vaddr >= 0x1e000000) && (vaddr <= 0x1e3fffff)) ||
+		((vaddr >= 0x1fc00000) && (vaddr <= 0x1fffffff)));
+}
+
+static int kvm_trap_vz_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
+	uint32_t inst;
+	ulong flags;
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	if (kvm_trap_vz_is_mmio_addrspace(vcpu, badvaddr)) {
+#ifdef DEBUG
+		kvm_debug("Guest Emulate Load from MMIO space: PC: "
+			"%p, BadVaddr: %#lx\n", opc, badvaddr);
+#endif
+
+		/*
+		 *  Fetch the instruction.
+		 */
+		if (cause & CAUSEF_BD)
+			opc += 1;
+
+		inst = kvm_get_inst(opc, vcpu);
+
+		er = kvm_mips_emulate_load(inst, cause, run, vcpu);
+
+		if (er == EMULATE_FAIL) {
+			kvm_err(
+			    "Guest Emulate Load from MMIO space failed: PC: "
+			    "%p, BadVaddr: %#lx\n", opc, badvaddr);
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		} else {
+			run->exit_reason = KVM_EXIT_MMIO;
+			er = EMULATE_DO_MMIO;
+		}
+
+	} else {
+#ifdef DEBUG
+		kvm_debug("Guest ADDR TLB LD fault: PC: %p, BadVaddr: %#lx\n",
+			opc, badvaddr);
+#endif
+		local_irq_save(flags);
+		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			er = EMULATE_FAIL;
+		}
+		local_irq_restore(flags);
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		ret = RESUME_HOST;
+	} else {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}
+
+static int kvm_trap_vz_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
+	uint32_t inst;
+	ulong flags;
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	if (kvm_trap_vz_is_mmio_addrspace(vcpu, badvaddr)) {
+#ifdef DEBUG
+		kvm_debug("Guest Emulate Store to MMIO space: PC: "
+				"%p, BadVaddr: %#lx\n", opc, badvaddr);
+#endif
+		/*
+		 *  Fetch the instruction.
+		 */
+		if (cause & CAUSEF_BD)
+			opc += 1;
+
+		inst = kvm_get_inst(opc, vcpu);
+
+		er = kvm_mips_emulate_store(inst, cause, run, vcpu);
+
+		if (er == EMULATE_FAIL) {
+			kvm_err("Guest Emulate Store to MMIO space failed: PC: "
+				"%p, BadVaddr: %#lx\n", opc, badvaddr);
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		} else {
+			run->exit_reason = KVM_EXIT_MMIO;
+			er = EMULATE_DO_MMIO;
+		}
+
+	} else {
+#ifdef DEBUG
+		kvm_debug("Guest ADDR TLB ST fault: PC: %p, BadVaddr: %#lx\n",
+				opc, badvaddr);
+#endif
+		local_irq_save(flags);
+		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			er = EMULATE_FAIL;
+		}
+		local_irq_restore(flags);
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		ret = RESUME_HOST;
+	} else {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}
+
+static int kvm_vz_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+
+	/* some registers are not restored
+	 * random, count        : read-only
+	 * userlocal            : not implemented in qemu
+	 * config6              : not implemented in processor variant
+	 * compare, cause       : defer to kvm_vz_restore_guest_timer_int
+	 */
+
+	kvm_write_c0_guest_index(cop0, regs->cp0reg[MIPS_CP0_TLB_INDEX][0]);
+	kvm_write_c0_guest_entrylo0(cop0, regs->cp0reg[MIPS_CP0_TLB_LO0][0]);
+	kvm_write_c0_guest_entrylo1(cop0, regs->cp0reg[MIPS_CP0_TLB_LO1][0]);
+	kvm_write_c0_guest_context(cop0, regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0]);
+	kvm_write_c0_guest_pagemask(cop0,
+				    regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0]);
+	kvm_write_c0_guest_pagegrain(cop0,
+				     regs->cp0reg[MIPS_CP0_TLB_PG_MASK][1]);
+	kvm_write_c0_guest_wired(cop0, regs->cp0reg[MIPS_CP0_TLB_WIRED][0]);
+	kvm_write_c0_guest_hwrena(cop0, regs->cp0reg[MIPS_CP0_HWRENA][0]);
+	kvm_write_c0_guest_badvaddr(cop0, regs->cp0reg[MIPS_CP0_BAD_VADDR][0]);
+	/* skip kvm_write_c0_guest_count */
+	kvm_write_c0_guest_entryhi(cop0, regs->cp0reg[MIPS_CP0_TLB_HI][0]);
+	/* defer kvm_write_c0_guest_compare */
+	kvm_write_c0_guest_status(cop0, regs->cp0reg[MIPS_CP0_STATUS][0]);
+	kvm_write_c0_guest_intctl(cop0, regs->cp0reg[MIPS_CP0_STATUS][1]);
+	/* defer kvm_write_c0_guest_cause */
+	kvm_write_c0_guest_epc(cop0, regs->cp0reg[MIPS_CP0_EXC_PC][0]);
+	kvm_write_c0_guest_prid(cop0, regs->cp0reg[MIPS_CP0_PRID][0]);
+	kvm_write_c0_guest_ebase(cop0, regs->cp0reg[MIPS_CP0_PRID][1]);
+
+	/* only restore implemented config registers */
+	kvm_write_c0_guest_config(cop0, regs->cp0reg[MIPS_CP0_CONFIG][0]);
+
+	if ((regs->cp0reg[MIPS_CP0_CONFIG][0] & MIPS_CONF_M) &&
+			cpu_vz_has_config1)
+		kvm_write_c0_guest_config1(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][1]);
+
+	if ((regs->cp0reg[MIPS_CP0_CONFIG][1] & MIPS_CONF_M) &&
+			cpu_vz_has_config2)
+		kvm_write_c0_guest_config2(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][2]);
+
+	if ((regs->cp0reg[MIPS_CP0_CONFIG][2] & MIPS_CONF_M) &&
+			cpu_vz_has_config3)
+		kvm_write_c0_guest_config3(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][3]);
+
+	if ((regs->cp0reg[MIPS_CP0_CONFIG][3] & MIPS_CONF_M) &&
+			cpu_vz_has_config4)
+		kvm_write_c0_guest_config4(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][4]);
+
+	if ((regs->cp0reg[MIPS_CP0_CONFIG][4] & MIPS_CONF_M) &&
+			cpu_vz_has_config5)
+		kvm_write_c0_guest_config5(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][5]);
+
+	if (cpu_vz_has_config6)
+		kvm_write_c0_guest_config6(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][6]);
+	if (cpu_vz_has_config7)
+		kvm_write_c0_guest_config7(cop0,
+				regs->cp0reg[MIPS_CP0_CONFIG][7]);
+
+	kvm_write_c0_guest_errorepc(cop0, regs->cp0reg[MIPS_CP0_ERROR_PC][0]);
+
+	/* Call after setting MIPS_CP0_CAUSE to avoid having it overwritten;
+	 * this will set guest Compare and Cause.TI if necessary.
+	 */
+	kvm_vz_restore_guest_timer_int(vcpu, regs);
+
+	return 0;
+}
+
+static int kvm_vz_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+
+	regs->cp0reg[MIPS_CP0_TLB_INDEX][0] = kvm_read_c0_guest_index(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_LO0][0] = kvm_read_c0_guest_entrylo0(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_LO1][0] = kvm_read_c0_guest_entrylo1(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0] = kvm_read_c0_guest_context(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0] =
+		kvm_read_c0_guest_pagemask(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_PG_MASK][1] =
+		kvm_read_c0_guest_pagegrain(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_WIRED][0] = kvm_read_c0_guest_wired(cop0);
+	regs->cp0reg[MIPS_CP0_HWRENA][0] = kvm_read_c0_guest_hwrena(cop0);
+	regs->cp0reg[MIPS_CP0_BAD_VADDR][0] = kvm_read_c0_guest_badvaddr(cop0);
+	regs->cp0reg[MIPS_CP0_COUNT][0] = kvm_read_c0_guest_count(cop0);
+	regs->cp0reg[MIPS_CP0_TLB_HI][0] = kvm_read_c0_guest_entryhi(cop0);
+	regs->cp0reg[MIPS_CP0_COMPARE][0] = kvm_read_c0_guest_compare(cop0);
+	regs->cp0reg[MIPS_CP0_STATUS][0] = kvm_read_c0_guest_status(cop0);
+	regs->cp0reg[MIPS_CP0_STATUS][1] = kvm_read_c0_guest_intctl(cop0);
+	regs->cp0reg[MIPS_CP0_CAUSE][0] = kvm_read_c0_guest_cause(cop0);
+	regs->cp0reg[MIPS_CP0_EXC_PC][0] = kvm_read_c0_guest_epc(cop0);
+	regs->cp0reg[MIPS_CP0_PRID][0] = kvm_read_c0_guest_prid(cop0);
+	regs->cp0reg[MIPS_CP0_PRID][1] = kvm_read_c0_guest_ebase(cop0);
+
+	/* only save implemented config registers */
+	regs->cp0reg[MIPS_CP0_CONFIG][0] = kvm_read_c0_guest_config(cop0);
+	regs->cp0reg[MIPS_CP0_CONFIG][1] =
+		((regs->cp0reg[MIPS_CP0_CONFIG][0] & MIPS_CONF_M) &&
+		 cpu_vz_has_config1) ? kvm_read_c0_guest_config1(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][2] =
+		((regs->cp0reg[MIPS_CP0_CONFIG][1] & MIPS_CONF_M) &&
+		 cpu_vz_has_config2) ? kvm_read_c0_guest_config2(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][3] =
+		((regs->cp0reg[MIPS_CP0_CONFIG][2] & MIPS_CONF_M) &&
+		 cpu_vz_has_config3) ? kvm_read_c0_guest_config3(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][4] =
+		((regs->cp0reg[MIPS_CP0_CONFIG][3] & MIPS_CONF_M) &&
+		 cpu_vz_has_config4) ? kvm_read_c0_guest_config4(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][5] =
+		((regs->cp0reg[MIPS_CP0_CONFIG][4] & MIPS_CONF_M) &&
+		 cpu_vz_has_config5) ? kvm_read_c0_guest_config5(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][6] =
+		cpu_vz_has_config6 ? kvm_read_c0_guest_config6(cop0) : 0;
+	regs->cp0reg[MIPS_CP0_CONFIG][7] =
+		cpu_vz_has_config7 ? kvm_read_c0_guest_config7(cop0) : 0;
+
+	regs->cp0reg[MIPS_CP0_ERROR_PC][0] = kvm_read_c0_guest_errorepc(cop0);
+
+	/* save the root GTOffset in the (otherwise unused) guest GTOffset slot */
+	regs->cp0reg[MIPS_CP0_STATUS][7] = read_c0_gtoffset();
+
+	return 0;
+}
+
+static int kvm_vz_vm_init(struct kvm *kvm)
+{
+	/* Enable virtualization features granting guest control of privileged
+	 * features */
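+	/* CP0, AT=3, CG and CF grant the guest untrapped access to CP0,
+	 * its own address translation, the CACHE instruction and the
+	 * Config registers. */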
+	write_c0_guestctl0(GUESTCTL0_CP0 | GUESTCTL0_AT3 |
+			   /* GUESTCTL0_GT omitted: guest timer is emulated */
+			   GUESTCTL0_CG | GUESTCTL0_CF);
+
+	return 0;
+}
+
+static int kvm_vz_vcpu_init(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	for_each_possible_cpu(i)
+		vcpu->arch.vzguestid[i] = 0;
+
+	return 0;
+}
+
+static int kvm_vz_vcpu_setup(struct kvm_vcpu *vcpu)
+{
+	/* Initialize the guest register structure.  It will be overwritten
+	 * by the arch-specific setup from QEMU, but in the meantime
+	 * vcpu_load/vcpu_put should not write zeros.
+	 */
+	kvm_vz_ioctl_get_regs(vcpu, &vcpu->arch.guest_regs);
+
+	return 0;
+}
+
+static struct kvm_mips_callbacks kvm_vz_callbacks = {
+	.handle_cop_unusable = kvm_trap_vz_no_handler,
+	.handle_tlb_mod = kvm_trap_vz_no_handler,
+	.handle_tlb_ld_miss = kvm_trap_vz_handle_tlb_ld_miss,
+	.handle_tlb_st_miss = kvm_trap_vz_handle_tlb_st_miss,
+	.handle_addr_err_st = kvm_trap_vz_no_handler,
+	.handle_addr_err_ld = kvm_trap_vz_no_handler,
+	.handle_syscall = kvm_trap_vz_no_handler,
+	.handle_res_inst = kvm_trap_vz_no_handler,
+	.handle_break = kvm_trap_vz_no_handler,
+	.handle_guest_exit = kvm_trap_vz_handle_guest_exit,
+
+	.vm_init = kvm_vz_vm_init,
+	.vcpu_init = kvm_vz_vcpu_init,
+	.vcpu_setup = kvm_vz_vcpu_setup,
+	.gva_to_gpa = kvm_vz_gva_to_gpa_cb,
+	.queue_timer_int = kvm_vz_queue_timer_int_cb,
+	.dequeue_timer_int = kvm_vz_dequeue_timer_int_cb,
+	.queue_io_int = kvm_vz_queue_io_int_cb,
+	.dequeue_io_int = kvm_vz_dequeue_io_int_cb,
+	.irq_deliver = kvm_vz_irq_deliver_cb,
+	.irq_clear = kvm_vz_irq_clear_cb,
+	.vcpu_ioctl_get_regs = kvm_vz_ioctl_get_regs,
+	.vcpu_ioctl_set_regs = kvm_vz_ioctl_set_regs,
+};
+
+int kvm_mips_emulation_init(struct kvm_mips_callbacks **install_callbacks)
+{
+	if (!cpu_has_vz) {
+		pr_info("Ignoring CONFIG_KVM_MIPS_VZ; no hardware support\n");
+		return -ENOSYS;
+	}
+
+	pr_info("Starting KVM with MIPS VZ extension\n");
+
+	*install_callbacks = &kvm_vz_callbacks;
+	return 0;
+}
-- 
1.7.11.3


* [PATCH 07/18] KVM/MIPS32: VZ-ASE related CPU feature flags and options.
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- GuestIDs and Virtual IRQs are optional
- New TLBINV instruction is also optional
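
As an illustrative sketch only (vz_report_features() is a hypothetical
helper in kernel context, not part of this series), the new feature
flags could be probed at runtime like so:

	static void vz_report_features(void)
	{
		if (cpu_has_vzguestid)
			pr_info("VZ: GuestID feature present\n");
		if (cpu_has_vzvirtirq)
			pr_info("VZ: virtual interrupt feature present\n");
		if (cpu_has_tlbinv)
			pr_info("VZ: TLBINV instruction present\n");
	}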

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/include/asm/cpu-features.h | 36 ++++++++++++++++++++++++++++++++++++
 arch/mips/include/asm/cpu-info.h     | 21 +++++++++++++++++++++
 arch/mips/include/asm/cpu.h          |  5 +++++
 3 files changed, 62 insertions(+)

diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index e5ec8fc..11c8fb8 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -83,6 +83,17 @@
 #ifndef kernel_uses_llsc
 #define kernel_uses_llsc	cpu_has_llsc
 #endif
+#ifdef CONFIG_KVM_MIPS_VZ
+#ifndef cpu_has_vzguestid
+#define cpu_has_vzguestid	(cpu_data[0].options & MIPS_CPU_VZGUESTID)
+#endif
+#ifndef cpu_has_vzvirtirq
+#define cpu_has_vzvirtirq	(cpu_data[0].options & MIPS_CPU_VZVIRTIRQ)
+#endif
+#ifndef cpu_has_tlbinv
+#define cpu_has_tlbinv		(cpu_data[0].options & MIPS_CPU_TLBINV)
+#endif
+#endif /* CONFIG_KVM_MIPS_VZ */
 #ifndef cpu_has_mips16
 #define cpu_has_mips16		(cpu_data[0].ases & MIPS_ASE_MIPS16)
 #endif
@@ -198,6 +209,31 @@
 #define cpu_has_mipsmt		(cpu_data[0].ases & MIPS_ASE_MIPSMT)
 #endif
 
+#ifndef cpu_has_vz
+#ifdef CONFIG_KVM_MIPS_VZ
+#define cpu_has_vz		(cpu_data[0].ases & MIPS_ASE_VZ)
+#else
+#define cpu_has_vz		(0)
+#endif
+#define cpu_vz_config0		(cpu_data[0].vz.config0)
+#define cpu_vz_config1		(cpu_data[0].vz.config1)
+#define cpu_vz_config2		(cpu_data[0].vz.config2)
+#define cpu_vz_config3		(cpu_data[0].vz.config3)
+#define cpu_vz_config4		(cpu_data[0].vz.config4)
+#define cpu_vz_config5		(cpu_data[0].vz.config5)
+#define cpu_vz_config6		(cpu_data[0].vz.config6)
+#define cpu_vz_config7		(cpu_data[0].vz.config7)
+
+#define cpu_vz_has_tlb		(cpu_data[0].vz.options & MIPS_CPU_TLB)
+#define cpu_vz_has_config1	(cpu_data[0].vz.config0 & MIPS_CONF_M)
+#define cpu_vz_has_config2	(cpu_data[0].vz.config1 & MIPS_CONF_M)
+#define cpu_vz_has_config3	(cpu_data[0].vz.config2 & MIPS_CONF_M)
+#define cpu_vz_has_config4	(cpu_data[0].vz.config3 & MIPS_CONF_M)
+#define cpu_vz_has_config5	(cpu_data[0].vz.config4 & MIPS_CONF_M)
+#define cpu_vz_has_config6	(0)
+#define cpu_vz_has_config7	(1)
+#endif
+
 #ifndef cpu_has_userlocal
 #define cpu_has_userlocal	(cpu_data[0].options & MIPS_CPU_ULRI)
 #endif
diff --git a/arch/mips/include/asm/cpu-info.h b/arch/mips/include/asm/cpu-info.h
index 41401d8..70d104c 100644
--- a/arch/mips/include/asm/cpu-info.h
+++ b/arch/mips/include/asm/cpu-info.h
@@ -28,6 +28,24 @@ struct cache_desc {
 	unsigned char flags;	/* Flags describing cache properties */
 };
 
+#ifdef CONFIG_KVM_MIPS_VZ
+/*
+ * initial VZ ASE configuration
+ */
+struct vzase_info {
+	unsigned long		options;
+	int			tlbsize;
+	unsigned long		config0;
+	unsigned long		config1;
+	unsigned long		config2;
+	unsigned long		config3;
+	unsigned long		config4;
+	unsigned long		config5;
+	unsigned long		config6;
+	unsigned long		config7;
+};
+#endif
+
 /*
  * Flag definitions
  */
@@ -79,6 +97,9 @@ struct cpuinfo_mips {
 #define NUM_WATCH_REGS 4
 	u16			watch_reg_masks[NUM_WATCH_REGS];
 	unsigned int		kscratch_mask; /* Usable KScratch mask. */
+#ifdef CONFIG_KVM_MIPS_VZ
+	struct vzase_info	vz;
+#endif
 } __attribute__((aligned(SMP_CACHE_BYTES)));
 
 extern struct cpuinfo_mips cpu_data[];
diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
index dd86ab2..6836320 100644
--- a/arch/mips/include/asm/cpu.h
+++ b/arch/mips/include/asm/cpu.h
@@ -325,6 +325,11 @@ enum cpu_type_enum {
 #define MIPS_CPU_PCI		0x00400000 /* CPU has Perf Ctr Int indicator */
 #define MIPS_CPU_RIXI		0x00800000 /* CPU has TLB Read/eXec Inhibit */
 #define MIPS_CPU_MICROMIPS	0x01000000 /* CPU has microMIPS capability */
+#ifdef CONFIG_KVM_MIPS_VZ
+#define MIPS_CPU_VZGUESTID	0x02000000 /* CPU uses VZ ASE GuestID feature */
+#define MIPS_CPU_VZVIRTIRQ	0x04000000 /* CPU has VZ ASE virtual interrupt feature */
+#define MIPS_CPU_TLBINV		0x08000000 /* CPU has TLB invalidate instruction */
+#endif
 
 /*
  * CPU ASE encodings
-- 
1.7.11.3


* [PATCH 08/18] KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap handlers.
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Add support for the MIPS VZ-ASE
- Whitespace fixes
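
In outline, the VZ additions below wrap guest entry as follows.  This is
a C-level sketch of the new assembly, assuming mask-style companions
(GUESTCTL0_GM, GUESTCTL0_G1, GUESTCTL1_ID, GUESTCTL1_RID) to the
*_SHIFT/*_WIDTH constants and a read/write_c0_guestctl1() accessor pair
in mipsvzregs.h; the authoritative sequence is the CONFIG_KVM_MIPS_VZ
blocks in the diff:

	static inline void vz_enter_guest_sketch(void)
	{
		unsigned long gc0 = read_c0_guestctl0();

		/* with GM set, the following eret lands in guest context */
		write_c0_guestctl0(gc0 | GUESTCTL0_GM);

		if (gc0 & GUESTCTL0_G1) {	/* GuestCtl1 implemented? */
			unsigned long gc1 = read_c0_guestctl1();
			unsigned long id = (gc1 & GUESTCTL1_ID) >>
					   GUESTCTL1_ID_SHIFT;

			/* RID = ID: root TLB ops now target this guest */
			gc1 &= ~GUESTCTL1_RID;
			gc1 |= id << GUESTCTL1_RID_SHIFT;
			write_c0_guestctl1(gc1);
		}
	}

On exit the inverse happens: GM is cleared and RID is reset to the root
GuestID before control returns to the host exception handlers.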

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_locore.S | 1088 +++++++++++++++++++++++---------------------
 1 file changed, 573 insertions(+), 515 deletions(-)

diff --git a/arch/mips/kvm/kvm_locore.S b/arch/mips/kvm/kvm_locore.S
index dca2aa6..936171f 100644
--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -1,13 +1,13 @@
 /*
-* This file is subject to the terms and conditions of the GNU General Public
-* License.  See the file "COPYING" in the main directory of this archive
-* for more details.
-*
-* Main entry point for the guest, exception handling.
-*
-* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
-* Authors: Sanjay Lal <sanjayl@kymasys.com>
-*/
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Main entry point for the guest, exception handling.
+ *
+ * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+ * Authors: Sanjay Lal <sanjayl@kymasys.com>
+ */
 
 #include <asm/asm.h>
 #include <asm/asmmacro.h>
@@ -16,39 +16,40 @@
 #include <asm/stackframe.h>
 #include <asm/asm-offsets.h>
 
+#ifdef CONFIG_KVM_MIPS_VZ
+#include <asm/mipsvzregs.h>
+#endif
 
-#define _C_LABEL(x)     x
-#define MIPSX(name)     mips32_ ## name
-#define CALLFRAME_SIZ   32
+#define _C_LABEL(x)		x
+#define MIPSX(name)		mips32_ ## name
+#define CALLFRAME_SIZ		32
 
 /*
  * VECTOR
  *  exception vector entrypoint
  */
-#define VECTOR(x, regmask)      \
-    .ent    _C_LABEL(x),0;      \
-    EXPORT(x);
+#define VECTOR(x, regmask)	\
+	.ent	_C_LABEL(x),0;	\
+	EXPORT(x);
 
-#define VECTOR_END(x)      \
-    EXPORT(x);
+#define VECTOR_END(x)		\
+	EXPORT(x);
 
 /* Overload, Danger Will Robinson!! */
-#define PT_HOST_ASID        PT_BVADDR
-#define PT_HOST_USERLOCAL   PT_EPC
+#define PT_HOST_USERLOCAL	PT_EPC
 
-#define CP0_DDATA_LO        $28,3
-#define CP0_EBASE           $15,1
-
-#define CP0_INTCTL          $12,1
-#define CP0_SRSCTL          $12,2
-#define CP0_SRSMAP          $12,3
-#define CP0_HWRENA          $7,0
+#define CP0_DDATA_LO		$28,3
+#define CP0_EBASE		$15,1
+#define CP0_INTCTL		$12,1
+#define CP0_SRSCTL		$12,2
+#define CP0_SRSMAP		$12,3
+#define CP0_HWRENA		$7,0
 
 /* Resume Flags */
-#define RESUME_FLAG_HOST        (1<<1)  /* Resume host? */
+#define RESUME_FLAG_HOST	(1<<1)	/* Resume host? */
 
-#define RESUME_GUEST            0
-#define RESUME_HOST             RESUME_FLAG_HOST
+#define RESUME_GUEST		0
+#define RESUME_HOST		RESUME_FLAG_HOST
 
 /*
  * __kvm_mips_vcpu_run: entry point to the guest
@@ -57,172 +58,188 @@
  */
 
 FEXPORT(__kvm_mips_vcpu_run)
-    .set    push
-    .set    noreorder
-    .set    noat
-
-    /* k0/k1 not being used in host kernel context */
-	addiu  		k1,sp, -PT_SIZE
-    LONG_S	    $0, PT_R0(k1)
-    LONG_S     	$1, PT_R1(k1)
-    LONG_S     	$2, PT_R2(k1)
-    LONG_S     	$3, PT_R3(k1)
-
-    LONG_S     	$4, PT_R4(k1)
-    LONG_S     	$5, PT_R5(k1)
-    LONG_S     	$6, PT_R6(k1)
-    LONG_S     	$7, PT_R7(k1)
-
-    LONG_S     	$8,  PT_R8(k1)
-    LONG_S     	$9,  PT_R9(k1)
-    LONG_S     	$10, PT_R10(k1)
-    LONG_S     	$11, PT_R11(k1)
-    LONG_S     	$12, PT_R12(k1)
-    LONG_S     	$13, PT_R13(k1)
-    LONG_S     	$14, PT_R14(k1)
-    LONG_S     	$15, PT_R15(k1)
-    LONG_S     	$16, PT_R16(k1)
-    LONG_S     	$17, PT_R17(k1)
-
-    LONG_S     	$18, PT_R18(k1)
-    LONG_S     	$19, PT_R19(k1)
-    LONG_S     	$20, PT_R20(k1)
-    LONG_S     	$21, PT_R21(k1)
-    LONG_S     	$22, PT_R22(k1)
-    LONG_S     	$23, PT_R23(k1)
-    LONG_S     	$24, PT_R24(k1)
-    LONG_S     	$25, PT_R25(k1)
+	.set	push
+	.set	noreorder
+	.set	noat
+
+	/* k0/k1 not being used in host kernel context */
+	addiu	k1,sp, -PT_SIZE
+	LONG_S	$0, PT_R0(k1)
+	LONG_S	$1, PT_R1(k1)
+	LONG_S	$2, PT_R2(k1)
+	LONG_S	$3, PT_R3(k1)
+	LONG_S	$4, PT_R4(k1)
+	LONG_S	$5, PT_R5(k1)
+	LONG_S	$6, PT_R6(k1)
+	LONG_S	$7, PT_R7(k1)
+	LONG_S	$8, PT_R8(k1)
+	LONG_S	$9, PT_R9(k1)
+	LONG_S	$10, PT_R10(k1)
+	LONG_S	$11, PT_R11(k1)
+	LONG_S	$12, PT_R12(k1)
+	LONG_S	$13, PT_R13(k1)
+	LONG_S	$14, PT_R14(k1)
+	LONG_S	$15, PT_R15(k1)
+	LONG_S	$16, PT_R16(k1)
+	LONG_S	$17, PT_R17(k1)
+	LONG_S	$18, PT_R18(k1)
+	LONG_S	$19, PT_R19(k1)
+	LONG_S	$20, PT_R20(k1)
+	LONG_S	$21, PT_R21(k1)
+	LONG_S	$22, PT_R22(k1)
+	LONG_S	$23, PT_R23(k1)
+	LONG_S	$24, PT_R24(k1)
+	LONG_S	$25, PT_R25(k1)
 
 	/* XXXKYMA k0/k1 not saved, not being used if we got here through an ioctl() */
 
-    LONG_S     	$28, PT_R28(k1)
-    LONG_S     	$29, PT_R29(k1)
-    LONG_S     	$30, PT_R30(k1)
-    LONG_S     	$31, PT_R31(k1)
+	LONG_S	$28, PT_R28(k1)
+	LONG_S	$29, PT_R29(k1)
+	LONG_S	$30, PT_R30(k1)
+	LONG_S	$31, PT_R31(k1)
 
-    /* Save hi/lo */
-	mflo		v0
-	LONG_S		v0, PT_LO(k1)
-	mfhi   		v1
-	LONG_S		v1, PT_HI(k1)
+	/* Save hi/lo */
+	mflo	v0
+	LONG_S	v0, PT_LO(k1)
+	mfhi	v1
+	LONG_S	v1, PT_HI(k1)
 
 	/* Save host status */
-	mfc0		v0, CP0_STATUS
-	LONG_S		v0, PT_STATUS(k1)
-
-	/* Save host ASID, shove it into the BVADDR location */
-	mfc0 		v1,CP0_ENTRYHI
-	andi		v1, 0xff
-	LONG_S		v1, PT_HOST_ASID(k1)
-
-    /* Save DDATA_LO, will be used to store pointer to vcpu */
-    mfc0        v1, CP0_DDATA_LO
-    LONG_S      v1, PT_HOST_USERLOCAL(k1)
-
-    /* DDATA_LO has pointer to vcpu */
-    mtc0        a1,CP0_DDATA_LO
-
-    /* Offset into vcpu->arch */
-	addiu		k1, a1, VCPU_HOST_ARCH
-
-    /* Save the host stack to VCPU, used for exception processing when we exit from the Guest */
-    LONG_S      sp, VCPU_HOST_STACK(k1)
-
-    /* Save the kernel gp as well */
-    LONG_S      gp, VCPU_HOST_GP(k1)
-
-	/* Setup status register for running the guest in UM, interrupts are disabled */
-	li			k0,(ST0_EXL | KSU_USER| ST0_BEV)
-	mtc0		k0,CP0_STATUS
-    ehb
-
-    /* load up the new EBASE */
-    LONG_L      k0, VCPU_GUEST_EBASE(k1)
-    mtc0        k0,CP0_EBASE
-
-    /* Now that the new EBASE has been loaded, unset BEV, set interrupt mask as it was
-     * but make sure that timer interrupts are enabled
-     */
-    li          k0,(ST0_EXL | KSU_USER | ST0_IE)
-    andi        v0, v0, ST0_IM
-    or          k0, k0, v0
-    mtc0        k0,CP0_STATUS
-    ehb
-
+	mfc0	v0, CP0_STATUS
+	LONG_S	v0, PT_STATUS(k1)
+
+	/* Save DDATA_LO, will be used to store pointer to vcpu */
+	mfc0	v1, CP0_DDATA_LO
+	LONG_S	v1, PT_HOST_USERLOCAL(k1)
+
+	/* DDATA_LO has pointer to vcpu */
+	mtc0	a1, CP0_DDATA_LO
+
+	/* Offset into vcpu->arch */
+	addiu	k1, a1, VCPU_HOST_ARCH
+
+	/* Save the host stack to VCPU, used for exception processing when we
+	 * exit from the Guest */
+	LONG_S	sp, VCPU_HOST_STACK(k1)
+
+	/* Save the kernel gp as well */
+	LONG_S	gp, VCPU_HOST_GP(k1)
+
+	/* Setup status register for running the guest in UM, interrupts are
+	 * disabled */
+	li	k0,(ST0_EXL | KSU_USER| ST0_BEV)
+	mtc0	k0, CP0_STATUS
+	ehb
+
+	/* load up the new EBASE */
+	LONG_L	k0, VCPU_GUEST_EBASE(k1)
+	mtc0	k0, CP0_EBASE
+
+	/* Now that the new EBASE has been loaded, unset BEV, set interrupt
+	 * mask as it was but make sure that timer interrupts are enabled
+	 */
+	li	k0,(ST0_EXL | KSU_USER | ST0_IE)
+	andi	v0, v0, ST0_IM
+	or	k0, k0, v0
+	mtc0	k0, CP0_STATUS
+	ehb
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* Set GM bit to setup eret to VZ guest context */
+	li	v1, 1
+	mfc0	k0, CP0_GUESTCTL0
+	ins	k0, v1, GUESTCTL0_GM_SHIFT, 1
+	mtc0	k0, CP0_GUESTCTL0
+
+	/* check GuestCtl0.G1 */
+	ext	t0, k0, GUESTCTL0_G1_SHIFT, 1
+	beq	t0, zero, 1f		/* no GuestCtl1 register */
+	nop
+
+	/* see SetGuestRIDtoGuestID. Handles both GuestCtl0.DRG mode enabled or
+	 * disabled */
+	mfc0	t0, CP0_GUESTCTL1	/* Get current GuestID */
+	ext	t1, t0, GUESTCTL1_ID_SHIFT, GUESTCTL1_ID_WIDTH
+	ins	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1	/* Set GuestCtl1.RID = GuestCtl1.ID */
+	ehb
+1:
+#endif
 
 	/* Set Guest EPC */
-	LONG_L		t0, VCPU_PC(k1)
-	mtc0		t0, CP0_EPC
+	LONG_L	t0, VCPU_PC(k1)
+	mtc0	t0, CP0_EPC
 
 FEXPORT(__kvm_mips_load_asid)
-    /* Set the ASID for the Guest Kernel */
-    sll         t0, t0, 1                       /* with kseg0 @ 0x40000000, kernel */
-                                                /* addresses shift to 0x80000000 */
-    bltz        t0, 1f                          /* If kernel */
-	addiu       t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-    addiu       t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	/* Set the ASID for the Guest Kernel */
+#ifdef CONFIG_KVM_MIPS_VZ
+	addiu	t1, k1, VCPU_GUEST_KERNEL_ASID
+#else
+	sll	t0, t0, 1			/* with kseg0 @ 0x40000000, kernel */
+						/* addresses shift to 0x80000000 */
+	bltz	t0, 1f				/* If kernel */
+	addiu	t1, k1, VCPU_GUEST_KERNEL_ASID	/* (BD)  */
+	addiu	t1, k1, VCPU_GUEST_USER_ASID	/* else user */
 1:
-    /* t1: contains the base of the ASID array, need to get the cpu id  */
-    LONG_L      t2, TI_CPU($28)             /* smp_processor_id */
-    sll         t2, t2, 2                   /* x4 */
-    addu        t3, t1, t2
-    LONG_L      k0, (t3)
-    andi        k0, k0, 0xff
-	mtc0		k0,CP0_ENTRYHI
-    ehb
-
-    /* Disable RDHWR access */
-    mtc0    zero,  CP0_HWRENA
-
-    /* Now load up the Guest Context from VCPU */
-    LONG_L     	$1, VCPU_R1(k1)
-    LONG_L     	$2, VCPU_R2(k1)
-    LONG_L     	$3, VCPU_R3(k1)
-
-    LONG_L     	$4, VCPU_R4(k1)
-    LONG_L     	$5, VCPU_R5(k1)
-    LONG_L     	$6, VCPU_R6(k1)
-    LONG_L     	$7, VCPU_R7(k1)
-
-    LONG_L     	$8,  VCPU_R8(k1)
-    LONG_L     	$9,  VCPU_R9(k1)
-    LONG_L     	$10, VCPU_R10(k1)
-    LONG_L     	$11, VCPU_R11(k1)
-    LONG_L     	$12, VCPU_R12(k1)
-    LONG_L     	$13, VCPU_R13(k1)
-    LONG_L     	$14, VCPU_R14(k1)
-    LONG_L     	$15, VCPU_R15(k1)
-    LONG_L     	$16, VCPU_R16(k1)
-    LONG_L     	$17, VCPU_R17(k1)
-    LONG_L     	$18, VCPU_R18(k1)
-    LONG_L     	$19, VCPU_R19(k1)
-    LONG_L     	$20, VCPU_R20(k1)
-    LONG_L     	$21, VCPU_R21(k1)
-    LONG_L     	$22, VCPU_R22(k1)
-    LONG_L     	$23, VCPU_R23(k1)
-    LONG_L     	$24, VCPU_R24(k1)
-    LONG_L     	$25, VCPU_R25(k1)
-
-    /* k0/k1 loaded up later */
-
-    LONG_L     	$28, VCPU_R28(k1)
-    LONG_L     	$29, VCPU_R29(k1)
-    LONG_L     	$30, VCPU_R30(k1)
-    LONG_L     	$31, VCPU_R31(k1)
-
-    /* Restore hi/lo */
-	LONG_L		k0, VCPU_LO(k1)
-	mtlo		k0
-
-	LONG_L		k0, VCPU_HI(k1)
-	mthi   		k0
+#endif
+	/* t1: contains the base of the ASID array, need to get the cpu id  */
+	LONG_L	t2, TI_CPU($28)			/* smp_processor_id */
+	sll	t2, t2, 2			/* x4 */
+	addu	t3, t1, t2
+	LONG_L	k0, (t3)
+	andi	k0, k0, 0xff
+	mtc0	k0, CP0_ENTRYHI
+	ehb
+
+	/* Disable RDHWR access */
+	mtc0	zero, CP0_HWRENA
+
+	/* Now load up the Guest Context from VCPU */
+	LONG_L	$1, VCPU_R1(k1)
+	LONG_L	$2, VCPU_R2(k1)
+	LONG_L	$3, VCPU_R3(k1)
+	LONG_L	$4, VCPU_R4(k1)
+	LONG_L	$5, VCPU_R5(k1)
+	LONG_L	$6, VCPU_R6(k1)
+	LONG_L	$7, VCPU_R7(k1)
+	LONG_L	$8, VCPU_R8(k1)
+	LONG_L	$9, VCPU_R9(k1)
+	LONG_L	$10, VCPU_R10(k1)
+	LONG_L	$11, VCPU_R11(k1)
+	LONG_L	$12, VCPU_R12(k1)
+	LONG_L	$13, VCPU_R13(k1)
+	LONG_L	$14, VCPU_R14(k1)
+	LONG_L	$15, VCPU_R15(k1)
+	LONG_L	$16, VCPU_R16(k1)
+	LONG_L	$17, VCPU_R17(k1)
+	LONG_L	$18, VCPU_R18(k1)
+	LONG_L	$19, VCPU_R19(k1)
+	LONG_L	$20, VCPU_R20(k1)
+	LONG_L	$21, VCPU_R21(k1)
+	LONG_L	$22, VCPU_R22(k1)
+	LONG_L	$23, VCPU_R23(k1)
+	LONG_L	$24, VCPU_R24(k1)
+	LONG_L	$25, VCPU_R25(k1)
+
+	/* k0/k1 loaded later */
+	LONG_L	$28, VCPU_R28(k1)
+	LONG_L	$29, VCPU_R29(k1)
+	LONG_L	$30, VCPU_R30(k1)
+	LONG_L	$31, VCPU_R31(k1)
+
+	/* Restore hi/lo */
+	LONG_L	k0, VCPU_LO(k1)
+	mtlo	k0
+
+	LONG_L	k0, VCPU_HI(k1)
+	mthi	k0
 
 FEXPORT(__kvm_mips_load_k0k1)
 	/* Restore the guest's k0/k1 registers */
-    LONG_L     	k0, VCPU_R26(k1)
-    LONG_L     	k1, VCPU_R27(k1)
+	LONG_L	k0, VCPU_R26(k1)
+	LONG_L	k1, VCPU_R27(k1)
 
-    /* Jump to guest */
+	/* Jump to guest */
 	eret
 	.set	pop
 
@@ -230,19 +247,19 @@ VECTOR(MIPSX(exception), unknown)
 /*
  * Find out what mode we came from and jump to the proper handler.
  */
-    .set    push
+	.set	push
 	.set	noat
-    .set    noreorder
-    mtc0    k0, CP0_ERROREPC    #01: Save guest k0
-    ehb                         #02:
-
-    mfc0    k0, CP0_EBASE       #02: Get EBASE
-    srl     k0, k0, 10          #03: Get rid of CPUNum
-    sll     k0, k0, 10          #04
-    LONG_S  k1, 0x3000(k0)      #05: Save k1 @ offset 0x3000
-    addiu   k0, k0, 0x2000      #06: Exception handler is installed @ offset 0x2000
-	j	k0				        #07: jump to the function
-	nop				        	#08: branch delay slot
+	.set	noreorder
+	mtc0	k0, CP0_ERROREPC	#01: Save guest k0
+	ehb				#02:
+
+	mfc0	k0, CP0_EBASE		#02: Get EBASE
+	srl	k0, k0, 10		#03: Get rid of CPUNum
+	sll	k0, k0, 10		#04
+	LONG_S	k1, 0x3000(k0)		#05: Save k1 @ offset 0x3000
+	addiu	k0, k0, 0x2000		#06: Exc. handler is installed @ offset 0x2000
+	j	k0			#07: jump to the function
+	nop				#08: branch delay slot
 	.set	push
 VECTOR_END(MIPSX(exceptionEnd))
 .end MIPSX(exception)
@@ -250,332 +267,373 @@ VECTOR_END(MIPSX(exceptionEnd))
 /*
  * Generic Guest exception handler. We end up here when the guest
  * does something that causes a trap to kernel mode.
- *
  */
 NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
-    .set    push
-    .set    noat
-    .set    noreorder
-
-    /* Get the VCPU pointer from DDTATA_LO */
-    mfc0        k1, CP0_DDATA_LO
-	addiu		k1, k1, VCPU_HOST_ARCH
-
-    /* Start saving Guest context to VCPU */
-    LONG_S  $0, VCPU_R0(k1)
-    LONG_S  $1, VCPU_R1(k1)
-    LONG_S  $2, VCPU_R2(k1)
-    LONG_S  $3, VCPU_R3(k1)
-    LONG_S  $4, VCPU_R4(k1)
-    LONG_S  $5, VCPU_R5(k1)
-    LONG_S  $6, VCPU_R6(k1)
-    LONG_S  $7, VCPU_R7(k1)
-    LONG_S  $8, VCPU_R8(k1)
-    LONG_S  $9, VCPU_R9(k1)
-    LONG_S  $10, VCPU_R10(k1)
-    LONG_S  $11, VCPU_R11(k1)
-    LONG_S  $12, VCPU_R12(k1)
-    LONG_S  $13, VCPU_R13(k1)
-    LONG_S  $14, VCPU_R14(k1)
-    LONG_S  $15, VCPU_R15(k1)
-    LONG_S  $16, VCPU_R16(k1)
-    LONG_S  $17,VCPU_R17(k1)
-    LONG_S  $18, VCPU_R18(k1)
-    LONG_S  $19, VCPU_R19(k1)
-    LONG_S  $20, VCPU_R20(k1)
-    LONG_S  $21, VCPU_R21(k1)
-    LONG_S  $22, VCPU_R22(k1)
-    LONG_S  $23, VCPU_R23(k1)
-    LONG_S  $24, VCPU_R24(k1)
-    LONG_S  $25, VCPU_R25(k1)
-
-    /* Guest k0/k1 saved later */
-
-    LONG_S  $28, VCPU_R28(k1)
-    LONG_S  $29, VCPU_R29(k1)
-    LONG_S  $30, VCPU_R30(k1)
-    LONG_S  $31, VCPU_R31(k1)
-
-    /* We need to save hi/lo and restore them on
-     * the way out
-     */
-    mfhi    t0
-    LONG_S  t0, VCPU_HI(k1)
-
-    mflo    t0
-    LONG_S  t0, VCPU_LO(k1)
-
-    /* Finally save guest k0/k1 to VCPU */
-    mfc0    t0, CP0_ERROREPC
-    LONG_S  t0, VCPU_R26(k1)
-
-    /* Get GUEST k1 and save it in VCPU */
-    la      t1, ~0x2ff
-    mfc0    t0, CP0_EBASE
-    and     t0, t0, t1
-    LONG_L  t0, 0x3000(t0)
-    LONG_S  t0, VCPU_R27(k1)
-
-    /* Now that context has been saved, we can use other registers */
-
-    /* Restore vcpu */
-    mfc0        a1, CP0_DDATA_LO
-    move        s1, a1
-
-   /* Restore run (vcpu->run) */
-    LONG_L      a0, VCPU_RUN(a1)
-    /* Save pointer to run in s0, will be saved by the compiler */
-    move        s0, a0
-
-
-    /* Save Host level EPC, BadVaddr and Cause to VCPU, useful to process the exception */
-    mfc0    k0,CP0_EPC
-    LONG_S  k0, VCPU_PC(k1)
-
-    mfc0    k0, CP0_BADVADDR
-    LONG_S  k0, VCPU_HOST_CP0_BADVADDR(k1)
-
-    mfc0    k0, CP0_CAUSE
-    LONG_S  k0, VCPU_HOST_CP0_CAUSE(k1)
-
-    mfc0    k0, CP0_ENTRYHI
-    LONG_S  k0, VCPU_HOST_ENTRYHI(k1)
-
-    /* Now restore the host state just enough to run the handlers */
-
-    /* Swtich EBASE to the one used by Linux */
-    /* load up the host EBASE */
-    mfc0        v0, CP0_STATUS
-
-    .set at
-	or          k0, v0, ST0_BEV
-    .set noat
-
-    mtc0        k0, CP0_STATUS
-    ehb
-
-    LONG_L      k0, VCPU_HOST_EBASE(k1)
-    mtc0        k0,CP0_EBASE
-
-
-    /* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
-    .set at
-	and         v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
-    or          v0, v0, ST0_CU0
-    .set noat
-    mtc0        v0, CP0_STATUS
-    ehb
-
-    /* Load up host GP */
-    LONG_L  gp, VCPU_HOST_GP(k1)
-
-    /* Need a stack before we can jump to "C" */
-    LONG_L  sp, VCPU_HOST_STACK(k1)
-
-    /* Saved host state */
-    addiu   sp,sp, -PT_SIZE
+	.set	push
+	.set	noat
+	.set	noreorder
+
+	/* Get the VCPU pointer from DDATA_LO */
+	mfc0	k1, CP0_DDATA_LO
+	addiu	k1, k1, VCPU_HOST_ARCH
+
+	/* Start saving Guest context to VCPU */
+	LONG_S	$0, VCPU_R0(k1)
+	LONG_S	$1, VCPU_R1(k1)
+	LONG_S	$2, VCPU_R2(k1)
+	LONG_S	$3, VCPU_R3(k1)
+	LONG_S	$4, VCPU_R4(k1)
+	LONG_S	$5, VCPU_R5(k1)
+	LONG_S	$6, VCPU_R6(k1)
+	LONG_S	$7, VCPU_R7(k1)
+	LONG_S	$8, VCPU_R8(k1)
+	LONG_S	$9, VCPU_R9(k1)
+	LONG_S	$10, VCPU_R10(k1)
+	LONG_S	$11, VCPU_R11(k1)
+	LONG_S	$12, VCPU_R12(k1)
+	LONG_S	$13, VCPU_R13(k1)
+	LONG_S	$14, VCPU_R14(k1)
+	LONG_S	$15, VCPU_R15(k1)
+	LONG_S	$16, VCPU_R16(k1)
+	LONG_S	$17, VCPU_R17(k1)
+	LONG_S	$18, VCPU_R18(k1)
+	LONG_S	$19, VCPU_R19(k1)
+	LONG_S	$20, VCPU_R20(k1)
+	LONG_S	$21, VCPU_R21(k1)
+	LONG_S	$22, VCPU_R22(k1)
+	LONG_S	$23, VCPU_R23(k1)
+	LONG_S	$24, VCPU_R24(k1)
+	LONG_S	$25, VCPU_R25(k1)
+
+	/* Guest k0/k1 saved later */
+
+	LONG_S	$28, VCPU_R28(k1)
+	LONG_S	$29, VCPU_R29(k1)
+	LONG_S	$30, VCPU_R30(k1)
+	LONG_S	$31, VCPU_R31(k1)
+
+	/* We need to save hi/lo and restore them on
+	 * the way out
+	 */
+	mfhi	t0
+	LONG_S	t0, VCPU_HI(k1)
+
+	mflo	t0
+	LONG_S	t0, VCPU_LO(k1)
+
+	/* Finally save guest k0/k1 to VCPU */
+	mfc0	t0, CP0_ERROREPC
+	LONG_S	t0, VCPU_R26(k1)
+
+	/* Get GUEST k1 and save it in VCPU */
+	la	t1, ~0x2ff
+	mfc0	t0, CP0_EBASE
+	and	t0, t0, t1
+	LONG_L	t0, 0x3000(t0)
+	LONG_S	t0, VCPU_R27(k1)
+
+	/* Now that context has been saved, we can use other registers */
+
+	/* Restore vcpu */
+	mfc0	a1, CP0_DDATA_LO
+	move	s1, a1
+
+	/* Restore run (vcpu->run) */
+	LONG_L	a0, VCPU_RUN(a1)
+	/* Save pointer to run in s0, will be saved by the compiler */
+	move	s0, a0
+
+	/* Save Host level EPC, BadVaddr and Cause to VCPU, useful to process
+	 * the exception */
+	mfc0	k0, CP0_EPC
+	LONG_S	k0, VCPU_PC(k1)
+
+	mfc0	k0, CP0_BADVADDR
+	LONG_S	k0, VCPU_HOST_CP0_BADVADDR(k1)
+
+	mfc0	k0, CP0_CAUSE
+	LONG_S	k0, VCPU_HOST_CP0_CAUSE(k1)
+
+	mfc0	k0, CP0_ENTRYHI
+	LONG_S	k0, VCPU_HOST_ENTRYHI(k1)
+
+	/* Now restore the host state just enough to run the handlers */
+
+	/* Switch EBASE to the one used by Linux */
+	/* load up the host EBASE */
+	mfc0	v0, CP0_STATUS
+
+	.set	at
+	or	k0, v0, ST0_BEV
+	.set	noat
+
+	mtc0	k0, CP0_STATUS
+	ehb
+
+	LONG_L	k0, VCPU_HOST_EBASE(k1)
+	mtc0	k0, CP0_EBASE
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* Clear GM bit to avoid switching to VZ guest context when EXL is cleared */
+	mfc0	k0, CP0_GUESTCTL0
+	ins	k0, zero, GUESTCTL0_GM_SHIFT, 1
+	mtc0	k0, CP0_GUESTCTL0
+
+	/* check GuestCtl0.G1 */
+	ext	t0, k0, GUESTCTL0_G1_SHIFT, 1
+	beq	t0, zero, 1f		/* no GuestCtl1 register */
+	nop
+
+	/* see ClearGuestRID. Handles both GuestCtl0.DRG mode enabled or
+	 * disabled */
+	mfc0	t0, CP0_GUESTCTL1
+	addiu	t1, zero, GUESTCTL1_VZ_ROOT_GUESTID
+	ins	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1 /* Set GuestCtl1.RID = GUESTCTL1_VZ_ROOT_GUESTID */
+	ehb
+1:
+#endif
+
+	/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
+	.set	at
+	and	v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
+	or	v0, v0, ST0_CU0
+	.set	noat
+	mtc0	v0, CP0_STATUS
+	ehb
+
+	/* Load up host GP */
+	LONG_L	gp, VCPU_HOST_GP(k1)
+
+	/* Need a stack before we can jump to "C" */
+	LONG_L	sp, VCPU_HOST_STACK(k1)
+
+	/* Saved host state */
+	addiu	sp, sp, -PT_SIZE
 
-    /* XXXKYMA do we need to load the host ASID, maybe not because the
-     * kernel entries are marked GLOBAL, need to verify
-     */
+	/* XXXKYMA do we need to load the host ASID, maybe not because the
+	 * kernel entries are marked GLOBAL, need to verify
+	 */
 
-    /* Restore host DDATA_LO */
-    LONG_L      k0, PT_HOST_USERLOCAL(sp)
-    mtc0        k0, CP0_DDATA_LO
+	/* Restore host DDATA_LO */
+	LONG_L	k0, PT_HOST_USERLOCAL(sp)
+	mtc0	k0, CP0_DDATA_LO
 
-    /* Restore RDHWR access */
-    la      k0, 0x2000000F
-    mtc0    k0,  CP0_HWRENA
+	/* Restore RDHWR access */
+	la	k0, 0x2000000F
+	mtc0	k0, CP0_HWRENA
 
-    /* Jump to handler */
+	/* Jump to handler */
 FEXPORT(__kvm_mips_jump_to_handler)
-    /* XXXKYMA: not sure if this is safe, how large is the stack?? */
-    /* Now jump to the kvm_mips_handle_exit() to see if we can deal with this in the kernel */
-    la          t9,kvm_mips_handle_exit
-    jalr.hb     t9
-    addiu       sp,sp, -CALLFRAME_SIZ           /* BD Slot */
-
-    /* Return from handler Make sure interrupts are disabled */
-    di
-    ehb
-
-    /* XXXKYMA: k0/k1 could have been blown away if we processed an exception
-     * while we were handling the exception from the guest, reload k1
-     */
-    move        k1, s1
-	addiu		k1, k1, VCPU_HOST_ARCH
-
-    /* Check return value, should tell us if we are returning to the host (handle I/O etc)
-     * or resuming the guest
-     */
-    andi        t0, v0, RESUME_HOST
-    bnez        t0, __kvm_mips_return_to_host
-    nop
+	/* XXXKYMA: not sure if this is safe, how large is the stack?? */
+	/* Now jump to the kvm_mips_handle_exit() to see if we can deal with
+	 * this in the kernel */
+	la	t9, kvm_mips_handle_exit
+	jalr.hb	t9
+	addiu	sp, sp, -CALLFRAME_SIZ	    /* BD slot */
+
+	/* Return from handler; make sure interrupts are disabled */
+	di
+	ehb
+
+	/* XXXKYMA: k0/k1 could have been blown away if we processed an exception
+	 * while we were handling the exception from the guest, reload k1
+	 */
+	move	k1, s1
+	addiu	k1, k1, VCPU_HOST_ARCH
+
+	/* Check return value, should tell us if we are returning to the host
+	 * (handle I/O etc) or resuming the guest
+	 */
+	andi	t0, v0, RESUME_HOST
+	bnez	t0, __kvm_mips_return_to_host
+	nop
 
 __kvm_mips_return_to_guest:
-    /* Put the saved pointer to vcpu (s1) back into the DDATA_LO Register */
-    mtc0        s1, CP0_DDATA_LO
-
-    /* Load up the Guest EBASE to minimize the window where BEV is set */
-    LONG_L      t0, VCPU_GUEST_EBASE(k1)
-
-    /* Switch EBASE back to the one used by KVM */
-    mfc0        v1, CP0_STATUS
-    .set at
-	or          k0, v1, ST0_BEV
-    .set noat
-    mtc0        k0, CP0_STATUS
-    ehb
-    mtc0        t0,CP0_EBASE
-
-    /* Setup status register for running guest in UM */
-    .set at
-    or     v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
-    and     v1, v1, ~ST0_CU0
-    .set noat
-    mtc0    v1, CP0_STATUS
-    ehb
+	/* Put the saved pointer to vcpu (s1) back into the DDATA_LO Register */
+	mtc0	s1, CP0_DDATA_LO
 
+	/* Load up the Guest EBASE to minimize the window where BEV is set */
+	LONG_L	t0, VCPU_GUEST_EBASE(k1)
+
+	/* Switch EBASE back to the one used by KVM */
+	mfc0	v1, CP0_STATUS
+	.set	at
+	or	k0, v1, ST0_BEV
+	.set	noat
+	mtc0	k0, CP0_STATUS
+	ehb
+	mtc0	t0, CP0_EBASE
+
+	/* Setup status register for running guest in UM */
+	.set	at
+	or	v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
+	and	v1, v1, ~ST0_CU0
+	.set	noat
+	mtc0	v1, CP0_STATUS
+	ehb
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* Set GM bit to setup eret to VZ guest context */
+	li	v1, 1
+	mfc0	k0, CP0_GUESTCTL0
+	ins	k0, v1, GUESTCTL0_GM_SHIFT, 1
+	mtc0	k0, CP0_GUESTCTL0
+
+	/* check GuestCtl0.G1 */
+	ext	t0, k0, GUESTCTL0_G1_SHIFT, 1
+	beq	t0, zero, 1f		    /* no GuestCtl1 register */
+	nop
+
+	/* see SetGuestRIDtoGuestID. Handles both GuestCtl0.DRG mode enabled or
+	 * disabled */
+	mfc0	t0, CP0_GUESTCTL1	/* Get current GuestID */
+	ext	t1, t0, GUESTCTL1_ID_SHIFT, GUESTCTL1_ID_WIDTH
+	ins	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
+	mtc0	t0, CP0_GUESTCTL1	/* Set GuestCtl1.RID = GuestCtl1.ID */
+	ehb
+1:
+#endif
 
 	/* Set Guest EPC */
-	LONG_L		t0, VCPU_PC(k1)
-	mtc0		t0, CP0_EPC
-
-    /* Set the ASID for the Guest Kernel */
-    sll         t0, t0, 1                       /* with kseg0 @ 0x40000000, kernel */
-                                                /* addresses shift to 0x80000000 */
-    bltz        t0, 1f                          /* If kernel */
-	addiu       t1, k1, VCPU_GUEST_KERNEL_ASID  /* (BD)  */
-    addiu       t1, k1, VCPU_GUEST_USER_ASID    /* else user */
+	LONG_L	t0, VCPU_PC(k1)
+	mtc0	t0, CP0_EPC
+
+	/* Set the ASID for the Guest Kernel */
+#ifdef CONFIG_KVM_MIPS_VZ
+	addiu	t1, k1, VCPU_GUEST_KERNEL_ASID
+#else
+	sll	t0, t0, 1			/* with kseg0 @ 0x40000000, kernel */
+						/* addresses shift to 0x80000000 */
+	bltz	t0, 1f				/* If kernel */
+	addiu	t1, k1, VCPU_GUEST_KERNEL_ASID	/* (BD)  */
+	addiu	t1, k1, VCPU_GUEST_USER_ASID	/* else user */
 1:
-    /* t1: contains the base of the ASID array, need to get the cpu id  */
-    LONG_L      t2, TI_CPU($28)             /* smp_processor_id */
-    sll         t2, t2, 2                   /* x4 */
-    addu        t3, t1, t2
-    LONG_L      k0, (t3)
-    andi        k0, k0, 0xff
-	mtc0		k0,CP0_ENTRYHI
-    ehb
-
-    /* Disable RDHWR access */
-    mtc0    zero,  CP0_HWRENA
-
-    /* load the guest context from VCPU and return */
-    LONG_L  $0, VCPU_R0(k1)
-    LONG_L  $1, VCPU_R1(k1)
-    LONG_L  $2, VCPU_R2(k1)
-    LONG_L  $3, VCPU_R3(k1)
-    LONG_L  $4, VCPU_R4(k1)
-    LONG_L  $5, VCPU_R5(k1)
-    LONG_L  $6, VCPU_R6(k1)
-    LONG_L  $7, VCPU_R7(k1)
-    LONG_L  $8, VCPU_R8(k1)
-    LONG_L  $9, VCPU_R9(k1)
-    LONG_L  $10, VCPU_R10(k1)
-    LONG_L  $11, VCPU_R11(k1)
-    LONG_L  $12, VCPU_R12(k1)
-    LONG_L  $13, VCPU_R13(k1)
-    LONG_L  $14, VCPU_R14(k1)
-    LONG_L  $15, VCPU_R15(k1)
-    LONG_L  $16, VCPU_R16(k1)
-    LONG_L  $17, VCPU_R17(k1)
-    LONG_L  $18, VCPU_R18(k1)
-    LONG_L  $19, VCPU_R19(k1)
-    LONG_L  $20, VCPU_R20(k1)
-    LONG_L  $21, VCPU_R21(k1)
-    LONG_L  $22, VCPU_R22(k1)
-    LONG_L  $23, VCPU_R23(k1)
-    LONG_L  $24, VCPU_R24(k1)
-    LONG_L  $25, VCPU_R25(k1)
-
-    /* $/k1 loaded later */
-    LONG_L  $28, VCPU_R28(k1)
-    LONG_L  $29, VCPU_R29(k1)
-    LONG_L  $30, VCPU_R30(k1)
-    LONG_L  $31, VCPU_R31(k1)
+#endif
+	/* t1: contains the base of the ASID array, need to get the cpu id  */
+	LONG_L	t2, TI_CPU($28)			/* smp_processor_id */
+	sll	t2, t2, 2			/* x4 */
+	addu	t3, t1, t2
+	LONG_L	k0, (t3)
+	andi	k0, k0, 0xff
+	mtc0	k0, CP0_ENTRYHI
+	ehb
+
+	/* Disable RDHWR access */
+	mtc0	zero, CP0_HWRENA
+
+	/* load the guest context from VCPU and return */
+	LONG_L	$0, VCPU_R0(k1)
+	LONG_L	$1, VCPU_R1(k1)
+	LONG_L	$2, VCPU_R2(k1)
+	LONG_L	$3, VCPU_R3(k1)
+	LONG_L	$4, VCPU_R4(k1)
+	LONG_L	$5, VCPU_R5(k1)
+	LONG_L	$6, VCPU_R6(k1)
+	LONG_L	$7, VCPU_R7(k1)
+	LONG_L	$8, VCPU_R8(k1)
+	LONG_L	$9, VCPU_R9(k1)
+	LONG_L	$10, VCPU_R10(k1)
+	LONG_L	$11, VCPU_R11(k1)
+	LONG_L	$12, VCPU_R12(k1)
+	LONG_L	$13, VCPU_R13(k1)
+	LONG_L	$14, VCPU_R14(k1)
+	LONG_L	$15, VCPU_R15(k1)
+	LONG_L	$16, VCPU_R16(k1)
+	LONG_L	$17, VCPU_R17(k1)
+	LONG_L	$18, VCPU_R18(k1)
+	LONG_L	$19, VCPU_R19(k1)
+	LONG_L	$20, VCPU_R20(k1)
+	LONG_L	$21, VCPU_R21(k1)
+	LONG_L	$22, VCPU_R22(k1)
+	LONG_L	$23, VCPU_R23(k1)
+	LONG_L	$24, VCPU_R24(k1)
+	LONG_L	$25, VCPU_R25(k1)
+
+	/* k0/k1 loaded later */
+	LONG_L	$28, VCPU_R28(k1)
+	LONG_L	$29, VCPU_R29(k1)
+	LONG_L	$30, VCPU_R30(k1)
+	LONG_L	$31, VCPU_R31(k1)
 
 FEXPORT(__kvm_mips_skip_guest_restore)
-    LONG_L  k0, VCPU_HI(k1)
-    mthi    k0
+	LONG_L	k0, VCPU_HI(k1)
+	mthi	k0
 
-    LONG_L  k0, VCPU_LO(k1)
-    mtlo    k0
+	LONG_L	k0, VCPU_LO(k1)
+	mtlo	k0
 
-    LONG_L  k0, VCPU_R26(k1)
-    LONG_L  k1, VCPU_R27(k1)
+	LONG_L	k0, VCPU_R26(k1)
+	LONG_L	k1, VCPU_R27(k1)
 
-    eret
+	eret
 
 __kvm_mips_return_to_host:
-    /* EBASE is already pointing to Linux */
-    LONG_L  k1, VCPU_HOST_STACK(k1)
-	addiu  	k1,k1, -PT_SIZE
-
-    /* Restore host DDATA_LO */
-    LONG_L      k0, PT_HOST_USERLOCAL(k1)
-    mtc0        k0, CP0_DDATA_LO
-
-    /* Restore host ASID */
-    LONG_L      k0, PT_HOST_ASID(sp)
-    andi        k0, 0xff
-    mtc0        k0,CP0_ENTRYHI
-    ehb
-
-    /* Load context saved on the host stack */
-    LONG_L  $0, PT_R0(k1)
-    LONG_L  $1, PT_R1(k1)
-
-    /* r2/v0 is the return code, shift it down by 2 (arithmetic) to recover the err code  */
-    sra     k0, v0, 2
-    move    $2, k0
-
-    LONG_L  $3, PT_R3(k1)
-    LONG_L  $4, PT_R4(k1)
-    LONG_L  $5, PT_R5(k1)
-    LONG_L  $6, PT_R6(k1)
-    LONG_L  $7, PT_R7(k1)
-    LONG_L  $8, PT_R8(k1)
-    LONG_L  $9, PT_R9(k1)
-    LONG_L  $10, PT_R10(k1)
-    LONG_L  $11, PT_R11(k1)
-    LONG_L  $12, PT_R12(k1)
-    LONG_L  $13, PT_R13(k1)
-    LONG_L  $14, PT_R14(k1)
-    LONG_L  $15, PT_R15(k1)
-    LONG_L  $16, PT_R16(k1)
-    LONG_L  $17, PT_R17(k1)
-    LONG_L  $18, PT_R18(k1)
-    LONG_L  $19, PT_R19(k1)
-    LONG_L  $20, PT_R20(k1)
-    LONG_L  $21, PT_R21(k1)
-    LONG_L  $22, PT_R22(k1)
-    LONG_L  $23, PT_R23(k1)
-    LONG_L  $24, PT_R24(k1)
-    LONG_L  $25, PT_R25(k1)
-
-    /* Host k0/k1 were not saved */
-
-    LONG_L  $28, PT_R28(k1)
-    LONG_L  $29, PT_R29(k1)
-    LONG_L  $30, PT_R30(k1)
-
-    LONG_L  k0, PT_HI(k1)
-    mthi    k0
-
-    LONG_L  k0, PT_LO(k1)
-    mtlo    k0
-
-    /* Restore RDHWR access */
-    la      k0, 0x2000000F
-    mtc0    k0,  CP0_HWRENA
-
-
-    /* Restore RA, which is the address we will return to */
-    LONG_L  ra, PT_R31(k1)
-    j       ra
-    nop
-
-    .set    pop
+	/* EBASE is already pointing to Linux */
+	LONG_L	k1, VCPU_HOST_STACK(k1)
+	addiu	k1, k1, -PT_SIZE
+
+	/* Restore host DDATA_LO */
+	LONG_L	k0, PT_HOST_USERLOCAL(k1)
+	mtc0	k0, CP0_DDATA_LO
+
+	/* Load context saved on the host stack */
+	LONG_L	$0, PT_R0(k1)
+	LONG_L	$1, PT_R1(k1)
+
+	/* r2/v0 is the return code, shift it down by 2 (arithmetic) to recover
+	 * the err code  */
+	sra	k0, v0, 2
+	move	$2, k0
+
+	LONG_L	$3, PT_R3(k1)
+	LONG_L	$4, PT_R4(k1)
+	LONG_L	$5, PT_R5(k1)
+	LONG_L	$6, PT_R6(k1)
+	LONG_L	$7, PT_R7(k1)
+	LONG_L	$8, PT_R8(k1)
+	LONG_L	$9, PT_R9(k1)
+	LONG_L	$10, PT_R10(k1)
+	LONG_L	$11, PT_R11(k1)
+	LONG_L	$12, PT_R12(k1)
+	LONG_L	$13, PT_R13(k1)
+	LONG_L	$14, PT_R14(k1)
+	LONG_L	$15, PT_R15(k1)
+	LONG_L	$16, PT_R16(k1)
+	LONG_L	$17, PT_R17(k1)
+	LONG_L	$18, PT_R18(k1)
+	LONG_L	$19, PT_R19(k1)
+	LONG_L	$20, PT_R20(k1)
+	LONG_L	$21, PT_R21(k1)
+	LONG_L	$22, PT_R22(k1)
+	LONG_L	$23, PT_R23(k1)
+	LONG_L	$24, PT_R24(k1)
+	LONG_L	$25, PT_R25(k1)
+
+	/* Host k0/k1 were not saved */
+
+	LONG_L	$28, PT_R28(k1)
+	LONG_L	$29, PT_R29(k1)
+	LONG_L	$30, PT_R30(k1)
+
+	LONG_L	k0, PT_HI(k1)
+	mthi	k0
+
+	LONG_L	k0, PT_LO(k1)
+	mtlo	k0
+
+	/* Restore RDHWR access */
+	la	k0, 0x2000000F
+	mtc0	k0, CP0_HWRENA
+
+	/* Restore RA, which is the address we will return to */
+	LONG_L	ra, PT_R31(k1)
+	j	ra
+	nop
+
+	.set	pop
 VECTOR_END(MIPSX(GuestExceptionEnd))
 .end MIPSX(GuestException)
 
@@ -625,26 +683,26 @@ MIPSX(exceptions):
  * a1 = Size, in bytes, of new instruction stream
  */
 
-#define HW_SYNCI_Step       $1
+#define HW_SYNCI_Step	    $1
 LEAF(MIPSX(SyncICache))
-    .set    push
+	.set	push
 	.set	mips32r2
-    beq     a1, zero, 20f
-    nop
-    addu    a1, a0, a1
-    rdhwr   v0, HW_SYNCI_Step
-    beq     v0, zero, 20f
-    nop
+	beq	a1, zero, 20f
+	nop
+	addu	a1, a0, a1
+	rdhwr	v0, HW_SYNCI_Step
+	beq	v0, zero, 20f
+	nop
 
 10:
-    synci   0(a0)
-    addu    a0, a0, v0
-    sltu    v1, a0, a1
-    bne     v1, zero, 10b
-    nop
-    sync
+	synci	0(a0)
+	addu	a0, a0, v0
+	sltu	v1, a0, a1
+	bne	v1, zero, 10b
+	nop
+	sync
 20:
-    jr.hb   ra
-    nop
-    .set pop
+	jr.hb	ra
+	nop
+	.set	pop
 END(MIPSX(SyncICache))
-- 
1.7.11.3


* [PATCH 09/18] KVM/MIPS32-VZ: Add support for CONFIG_KVM_MIPS_VZ option
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Add config option for KVM/MIPS with VZ support.
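
For reference, a VZ-capable host kernel would then carry something like
the following (illustrative .config fragment only):

	CONFIG_VIRTUALIZATION=y
	CONFIG_KVM=y
	CONFIG_KVM_MIPS_VZ=y
	# CONFIG_KVM_MIPS_DYN_TRANS is not set (unavailable with VZ)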

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/Kconfig  | 14 +++++++++++++-
 arch/mips/kvm/Makefile | 14 +++++++++-----
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/arch/mips/kvm/Kconfig b/arch/mips/kvm/Kconfig
index 2c15590..963657f 100644
--- a/arch/mips/kvm/Kconfig
+++ b/arch/mips/kvm/Kconfig
@@ -25,9 +25,21 @@ config KVM
 	  Support for hosting Guest kernels.
 	  Currently supported on MIPS32 processors.
 
+config KVM_MIPS_VZ
+	bool "KVM support using the MIPS Virtualization ASE"
+	depends on KVM
+	---help---
+	  Support running unmodified guest kernels in virtual machines using
+	  the MIPS Virtualization ASE.  If this option is not selected, KVM
+	  defaults to trap-and-emulate virtualization, which is less
+	  efficient than using the VZ ASE.
+
+	  If unsure, say N.
+
 config KVM_MIPS_DYN_TRANS
 	bool "KVM/MIPS: Dynamic binary translation to reduce traps"
-	depends on KVM
+	depends on KVM && !KVM_MIPS_VZ
+	default y
 	---help---
 	  When running in Trap & Emulate mode patch privileged
 	  instructions to reduce the number of traps.
diff --git a/arch/mips/kvm/Makefile b/arch/mips/kvm/Makefile
index 78d87bb..cc64bb4 100644
--- a/arch/mips/kvm/Makefile
+++ b/arch/mips/kvm/Makefile
@@ -5,9 +5,13 @@ common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
 
 EXTRA_CFLAGS += -Ivirt/kvm -Iarch/mips/kvm
 
-kvm-objs := $(common-objs) kvm_mips.o kvm_mips_emul.o kvm_locore.o \
-	    kvm_mips_int.o kvm_mips_stats.o kvm_mips_commpage.o \
-	    kvm_mips_dyntrans.o kvm_trap_emul.o
+kvm-objs := $(common-objs) kvm_mips.o kvm_mips_emul.o kvm_locore.o kvm_mips_int.o \
+            kvm_mips_stats.o kvm_mips_commpage.o kvm_mips_dyntrans.o
 
-obj-$(CONFIG_KVM)	+= kvm.o
-obj-y			+= kvm_cb.o kvm_tlb.o
+ifdef CONFIG_KVM_MIPS_VZ
+kvm-objs                  += kvm_vz.o kvm_vz_locore.o
+else
+kvm-objs                  += kvm_trap_emul.o
+endif
+obj-$(CONFIG_KVM)         += kvm.o
+obj-y                     += kvm_tlb.o kvm_cb.o
-- 
1.7.11.3

* [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (8 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 09/18] KVM/MIPS32-VZ: Add support for CONFIG_KVM_MIPS_VZ option Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-28 16:34   ` Paolo Bonzini
  2013-05-19  5:47 ` [PATCH 11/18] KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root context Sanjay Lal
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Add a capability (KVM_CAP_MIPS_VZ_ASE) so that clients (QEMU etc.)
  can use the KVM_CHECK_EXTENSION ioctl to check whether the hardware
  supports the MIPS VZ-ASE.
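
A minimal userspace sketch of how a client would probe the capability;
it assumes only the standard KVM_CHECK_EXTENSION ioctl on /dev/kvm plus
the KVM_CAP_MIPS_VZ_ASE define added below:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);

		if (kvm < 0)
			return 1;
		/* returns cpu_has_vz on VZ kernels, 0 otherwise */
		if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MIPS_VZ_ASE) > 0)
			printf("VZ-ASE supported\n");
		else
			printf("VZ-ASE not supported\n");
		return 0;
	}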

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 include/uapi/linux/kvm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a5c86fc..5889e976 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -666,6 +666,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_IRQ_MPIC 90
 #define KVM_CAP_PPC_RTAS 91
 #define KVM_CAP_IRQ_XICS 92
+#define KVM_CAP_MIPS_VZ_ASE 93
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.7.11.3

* [PATCH 11/18] KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root context
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (9 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 12/18] KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons Sanjay Lal
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- In VZ mode, guest physical addresses are translated by the Root TLB,
  so Root TLB misses on guest PAs must be filled in by the hypervisor.
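
A worked example of the even/odd pairing done by the new
kvm_mips_handle_vz_root_tlb_fault() (assuming 4K pages, PAGE_SHIFT = 12):
a root TLB miss on guest PA 0x00005000 gives gfn = 0x5, an odd page, so

	vaddr = badvaddr & (PAGE_MASK << 1) = 0x00004000

and a single TLB entry maps the double page: EntryHi holds VPN2 0x4000
plus the guest kernel ASID, EntryLo0 points at the host pfn for gfn 0x4
and EntryLo1 at the pfn for gfn 0x5, both marked valid, dirty and
cacheable (the (0x3 << 3) | (1 << 2) | (0x1 << 1) bits below).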

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_tlb.c | 444 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 359 insertions(+), 85 deletions(-)

diff --git a/arch/mips/kvm/kvm_tlb.c b/arch/mips/kvm/kvm_tlb.c
index 89511a9..5b1a221 100644
--- a/arch/mips/kvm/kvm_tlb.c
+++ b/arch/mips/kvm/kvm_tlb.c
@@ -17,12 +17,16 @@
 #include <linux/delay.h>
 #include <linux/module.h>
 #include <linux/kvm_host.h>
+#include <linux/srcu.h>
 
 #include <asm/cpu.h>
 #include <asm/bootinfo.h>
 #include <asm/mmu_context.h>
 #include <asm/pgtable.h>
 #include <asm/cacheflush.h>
+#ifdef CONFIG_KVM_MIPS_VZ
+#include <asm/mipsvzregs.h>
+#endif
 
 #undef CONFIG_MIPS_MT
 #include <asm/r4kcache.h>
@@ -33,8 +37,12 @@
 
 #define PRIx64 "llx"
 
+#ifdef CONFIG_KVM_MIPS_VZ
 /* Use VZ EntryHi.EHINV to invalidate TLB entries */
+#define UNIQUE_ENTRYHI(idx) (ENTRYHI_EHINV | (CKSEG0 + ((idx) << (PAGE_SHIFT + 1))))
+#else
 #define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
+#endif
 
 atomic_t kvm_mips_instance;
 EXPORT_SYMBOL(kvm_mips_instance);
@@ -51,13 +59,13 @@ EXPORT_SYMBOL(kvm_mips_is_error_pfn);
 
 uint32_t kvm_mips_get_kernel_asid(struct kvm_vcpu *vcpu)
 {
-	return ASID_MASK(vcpu->arch.guest_kernel_asid[smp_processor_id()]);
+	return vcpu->arch.guest_kernel_asid[smp_processor_id()] & ASID_MASK;
 }
 
 
 uint32_t kvm_mips_get_user_asid(struct kvm_vcpu *vcpu)
 {
-	return ASID_MASK(vcpu->arch.guest_user_asid[smp_processor_id()]);
+	return vcpu->arch.guest_user_asid[smp_processor_id()] & ASID_MASK;
 }
 
 inline uint32_t kvm_mips_get_commpage_asid (struct kvm_vcpu *vcpu)
@@ -72,11 +80,11 @@ inline uint32_t kvm_mips_get_commpage_asid (struct kvm_vcpu *vcpu)
 
 void kvm_mips_dump_host_tlbs(void)
 {
-	unsigned long old_entryhi;
-	unsigned long old_pagemask;
 	struct kvm_mips_tlb tlb;
-	unsigned long flags;
 	int i;
+	ulong flags;
+	unsigned long old_entryhi;
+	unsigned long old_pagemask;
 
 	local_irq_save(flags);
 
@@ -84,7 +92,7 @@ void kvm_mips_dump_host_tlbs(void)
 	old_pagemask = read_c0_pagemask();
 
 	printk("HOST TLBs:\n");
-	printk("ASID: %#lx\n", ASID_MASK(read_c0_entryhi()));
+	printk("ASID: %#lx\n", read_c0_entryhi() & ASID_MASK);
 
 	for (i = 0; i < current_cpu_data.tlbsize; i++) {
 		write_c0_index(i);
@@ -97,10 +105,23 @@ void kvm_mips_dump_host_tlbs(void)
 		tlb.tlb_lo0 = read_c0_entrylo0();
 		tlb.tlb_lo1 = read_c0_entrylo1();
 		tlb.tlb_mask = read_c0_pagemask();
+#ifdef CONFIG_KVM_MIPS_VZ
+		tlb.guestctl1 = 0;
+		if (cpu_has_vzguestid) {
+			tlb.guestctl1 = read_c0_guestctl1();
+			/* clear GuestRID after tlb_read in case it was changed */
+			mips32_ClearGuestRID();
+		}
+#endif
 
 		printk("TLB%c%3d Hi 0x%08lx ",
 		       (tlb.tlb_lo0 | tlb.tlb_lo1) & MIPS3_PG_V ? ' ' : '*',
 		       i, tlb.tlb_hi);
+#ifdef CONFIG_KVM_MIPS_VZ
+		if (cpu_has_vzguestid) {
+			printk("GuestCtl1 0x%08x ", tlb.guestctl1);
+		}
+#endif
 		printk("Lo0=0x%09" PRIx64 " %c%c attr %lx ",
 		       (uint64_t) mips3_tlbpfn_to_paddr(tlb.tlb_lo0),
 		       (tlb.tlb_lo0 & MIPS3_PG_D) ? 'D' : ' ',
@@ -120,9 +141,9 @@ void kvm_mips_dump_host_tlbs(void)
 
 void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_mips_tlb tlb;
 	int i;
+	struct kvm_mips_tlb tlb;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	printk("Guest TLBs:\n");
 	printk("Guest EntryHi: %#lx\n", kvm_read_c0_guest_entryhi(cop0));
@@ -156,6 +177,11 @@ void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu)
 		printk("TLB%c%3d Hi 0x%08lx ",
 		       (tlb.tlb_lo0 | tlb.tlb_lo1) & MIPS3_PG_V ? ' ' : '*',
 		       i, tlb.tlb_hi);
+#ifdef CONFIG_KVM_MIPS_VZ
+		if (cpu_has_vzguestid) {
+			printk("GuestCtl1 0x%08x ", tlb.guestctl1);
+		}
+#endif
 		printk("Lo0=0x%09" PRIx64 " %c%c attr %lx ",
 		       (uint64_t) mips3_tlbpfn_to_paddr(tlb.tlb_lo0),
 		       (tlb.tlb_lo0 & MIPS3_PG_D) ? 'D' : ' ',
@@ -169,26 +195,31 @@ void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu)
 	}
 }
 
-static void kvm_mips_map_page(struct kvm *kvm, gfn_t gfn)
+static int kvm_mips_map_page(struct kvm *kvm, gfn_t gfn)
 {
+	int srcu_idx, err = 0;
 	pfn_t pfn;
 
 	if (kvm->arch.guest_pmap[gfn] != KVM_INVALID_PAGE)
-		return;
+		return 0;
 
+	srcu_idx = srcu_read_lock(&kvm->srcu);
 	pfn = kvm_mips_gfn_to_pfn(kvm, gfn);
 
 	if (kvm_mips_is_error_pfn(pfn)) {
-		panic("Couldn't get pfn for gfn %#" PRIx64 "!\n", gfn);
+		kvm_err("Couldn't get pfn for gfn %#" PRIx64 "!\n", gfn);
+		err = -EFAULT;
+		goto out;
 	}
 
 	kvm->arch.guest_pmap[gfn] = pfn;
-	return;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return err;
 }
 
 /* Translate guest KSEG0 addresses to Host PA */
-unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
-	unsigned long gva)
+ulong kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu, ulong gva)
 {
 	gfn_t gfn;
 	uint32_t offset = gva & ~PAGE_MASK;
@@ -207,22 +238,32 @@ unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
 			gva);
 		return KVM_INVALID_PAGE;
 	}
-	kvm_mips_map_page(vcpu->kvm, gfn);
+
+	if (kvm_mips_map_page(vcpu->kvm, gfn) < 0)
+		return KVM_INVALID_ADDR;
+
 	return (kvm->arch.guest_pmap[gfn] << PAGE_SHIFT) + offset;
 }
 
 /* XXXKYMA: Must be called with interrupts disabled */
 /* set flush_dcache_mask == 0 if no dcache flush required */
 int
-kvm_mips_host_tlb_write(struct kvm_vcpu *vcpu, unsigned long entryhi,
-	unsigned long entrylo0, unsigned long entrylo1, int flush_dcache_mask)
+kvm_mips_host_tlb_write(struct kvm_vcpu *vcpu, ulong entryhi,
+			ulong entrylo0, ulong entrylo1, int flush_dcache_mask)
 {
-	unsigned long flags;
-	unsigned long old_entryhi;
+	ulong flags;
+	ulong old_entryhi;
 	volatile int idx;
+	int debug __maybe_unused = 0;
 
 	local_irq_save(flags);
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		/* Set Guest ID for root probe and write of guest TLB entry */
+		mips32_SetGuestRIDtoGuestID();
+	}
+#endif
 
 	old_entryhi = read_c0_entryhi();
 	write_c0_entryhi(entryhi);
@@ -256,6 +297,12 @@ kvm_mips_host_tlb_write(struct kvm_vcpu *vcpu, unsigned long entryhi,
 			  "entrylo0(R): 0x%08lx, entrylo1(R): 0x%08lx\n",
 			  vcpu->arch.pc, idx, read_c0_entryhi(),
 			  read_c0_entrylo0(), read_c0_entrylo1());
+#ifdef CONFIG_KVM_MIPS_VZ
+		if (cpu_has_vzguestid) {
+			kvm_debug("@ %#lx idx: %2d guestCtl1(R): 0x%08x\n",
+				  vcpu->arch.pc, idx, read_c0_guestctl1());
+		}
+#endif
 	}
 #endif
 
@@ -275,24 +322,77 @@ kvm_mips_host_tlb_write(struct kvm_vcpu *vcpu, unsigned long entryhi,
 	/* Restore old ASID */
 	write_c0_entryhi(old_entryhi);
 	mtc0_tlbw_hazard();
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		mips32_ClearGuestRID();
+	}
+#endif
 	tlbw_use_hazard();
 	local_irq_restore(flags);
 	return 0;
 }
 
+#ifdef CONFIG_KVM_MIPS_VZ
+/* XXXKYMA: Must be called with interrupts disabled */
+int kvm_mips_handle_vz_root_tlb_fault(ulong badvaddr, struct kvm_vcpu *vcpu)
+{
+	gfn_t gfn;
+	pfn_t pfn0, pfn1;
+	ulong vaddr = 0;
+	ulong entryhi = 0, entrylo0 = 0, entrylo1 = 0;
+	int even;
+	struct kvm *kvm = vcpu->kvm;
+	const int flush_dcache_mask = 0;
+
+	gfn = (KVM_GUEST_CPHYSADDR(badvaddr) >> PAGE_SHIFT);
+	if (gfn >= kvm->arch.guest_pmap_npages) {
+		kvm_err("%s: Invalid gfn: %#llx, BadVaddr: %#lx\n", __func__,
+			gfn, badvaddr);
+		kvm_mips_dump_host_tlbs();
+		return -1;
+	}
+	even = !(gfn & 0x1);
+	vaddr = badvaddr & (PAGE_MASK << 1);
+
+	if (kvm_mips_map_page(vcpu->kvm, gfn) < 0)
+		return -1;
+
+	if (kvm_mips_map_page(vcpu->kvm, gfn ^ 0x1) < 0)
+		return -1;
+
+	if (even) {
+		pfn0 = kvm->arch.guest_pmap[gfn];
+		pfn1 = kvm->arch.guest_pmap[gfn ^ 0x1];
+	} else {
+		pfn0 = kvm->arch.guest_pmap[gfn ^ 0x1];
+		pfn1 = kvm->arch.guest_pmap[gfn];
+	}
+
+	entryhi = (vaddr | kvm_mips_get_kernel_asid(vcpu));
+	entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) | (0x3 << 3) | (1 << 2) |
+			(0x1 << 1);
+	entrylo1 = mips3_paddr_to_tlbpfn(pfn1 << PAGE_SHIFT) | (0x3 << 3) | (1 << 2) |
+			(0x1 << 1);
+
+	return kvm_mips_host_tlb_write(vcpu, entryhi, entrylo0, entrylo1,
+				       flush_dcache_mask);
+}
+#endif
 
 /* XXXKYMA: Must be called with interrupts disabled */
-int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
-	struct kvm_vcpu *vcpu)
+int kvm_mips_handle_kseg0_tlb_fault(ulong badvaddr, struct kvm_vcpu *vcpu)
 {
 	gfn_t gfn;
 	pfn_t pfn0, pfn1;
-	unsigned long vaddr = 0;
-	unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
+	ulong vaddr = 0;
+	ulong entryhi = 0, entrylo0 = 0, entrylo1 = 0;
 	int even;
 	struct kvm *kvm = vcpu->kvm;
 	const int flush_dcache_mask = 0;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	if (KVM_GUEST_KSEGX(badvaddr) != KVM_GUEST_KSEG0) {
 		kvm_err("%s: Invalid BadVaddr: %#lx\n", __func__, badvaddr);
@@ -310,8 +410,11 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
 	even = !(gfn & 0x1);
 	vaddr = badvaddr & (PAGE_MASK << 1);
 
-	kvm_mips_map_page(vcpu->kvm, gfn);
-	kvm_mips_map_page(vcpu->kvm, gfn ^ 0x1);
+	if (kvm_mips_map_page(vcpu->kvm, gfn) < 0)
+		return -1;
+
+	if (kvm_mips_map_page(vcpu->kvm, gfn ^ 0x1) < 0)
+		return -1;
 
 	if (even) {
 		pfn0 = kvm->arch.guest_pmap[gfn];
@@ -331,13 +434,16 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
 				       flush_dcache_mask);
 }
 
-int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
-	struct kvm_vcpu *vcpu)
+int kvm_mips_handle_commpage_tlb_fault(ulong badvaddr, struct kvm_vcpu *vcpu)
 {
 	pfn_t pfn0, pfn1;
-	unsigned long flags, old_entryhi = 0, vaddr = 0;
-	unsigned long entrylo0 = 0, entrylo1 = 0;
+	ulong flags, old_entryhi = 0, vaddr = 0;
+	ulong entrylo0 = 0, entrylo1 = 0;
+	int debug __maybe_unused = 0;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	pfn0 = CPHYSADDR(vcpu->arch.kseg0_commpage) >> PAGE_SHIFT;
 	pfn1 = 0;
@@ -378,19 +484,26 @@ int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
 
 int
 kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
-	struct kvm_mips_tlb *tlb, unsigned long *hpa0, unsigned long *hpa1)
+				     struct kvm_mips_tlb *tlb, ulong *hpa0,
+				     ulong *hpa1)
 {
-	unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
-	struct kvm *kvm = vcpu->kvm;
 	pfn_t pfn0, pfn1;
+	ulong entryhi = 0, entrylo0 = 0, entrylo1 = 0;
+	struct kvm *kvm = vcpu->kvm;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	if ((tlb->tlb_hi & VPN2_MASK) == 0) {
 		pfn0 = 0;
 		pfn1 = 0;
 	} else {
-		kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo0) >> PAGE_SHIFT);
-		kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT);
+		if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo0) >> PAGE_SHIFT) < 0)
+			return -1;
+
+		if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT) < 0)
+			return -1;
 
 		pfn0 = kvm->arch.guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo0) >> PAGE_SHIFT];
 		pfn1 = kvm->arch.guest_pmap[mips3_tlbpfn_to_paddr(tlb->tlb_lo1) >> PAGE_SHIFT];
@@ -419,16 +532,19 @@ kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 				       tlb->tlb_mask);
 }
 
-int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long entryhi)
+int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, ulong entryhi)
 {
 	int i;
 	int index = -1;
 	struct kvm_mips_tlb *tlb = vcpu->arch.guest_tlb;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	for (i = 0; i < KVM_MIPS_GUEST_TLB_SIZE; i++) {
 		if (((TLB_VPN2(tlb[i]) & ~tlb[i].tlb_mask) == ((entryhi & VPN2_MASK) & ~tlb[i].tlb_mask)) &&
-			(TLB_IS_GLOBAL(tlb[i]) || (TLB_ASID(tlb[i]) == ASID_MASK(entryhi)))) {
+			(TLB_IS_GLOBAL(tlb[i]) || (TLB_ASID(tlb[i]) == (entryhi & ASID_MASK)))) {
 			index = i;
 			break;
 		}
@@ -442,11 +558,17 @@ int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long entryhi)
 	return index;
 }
 
-int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr)
+int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, ulong vaddr)
 {
-	unsigned long old_entryhi, flags;
+	ulong old_entryhi, flags;
 	volatile int idx;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* Not used in VZ emulation mode: a call to tlb_probe could
+	 * overwrite the GuestID field.
+	 */
+	BUG_ON(cpu_has_vz);
+#endif
 
 	local_irq_save(flags);
 
@@ -478,13 +600,19 @@ int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr)
 	return idx;
 }
 
-int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va)
+int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, ulong va)
 {
 	int idx;
-	unsigned long flags, old_entryhi;
+	ulong flags, old_entryhi;
 
 	local_irq_save(flags);
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		/* Set Guest ID for root probe and write of guest TLB entry */
+		mips32_SetGuestRIDtoGuestID();
+	}
+#endif
 
 	old_entryhi = read_c0_entryhi();
 
@@ -514,6 +642,11 @@ int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va)
 
 	write_c0_entryhi(old_entryhi);
 	mtc0_tlbw_hazard();
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		mips32_ClearGuestRID();
+	}
+#endif
 	tlbw_use_hazard();
 
 	local_irq_restore(flags);
@@ -531,13 +664,19 @@ int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va)
 /* XXXKYMA: Fix Guest USER/KERNEL no longer share the same ASID*/
 int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index)
 {
-	unsigned long flags, old_entryhi;
+	ulong flags, old_entryhi;
 
 	if (index >= current_cpu_data.tlbsize)
 		BUG();
 
 	local_irq_save(flags);
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		/* Set Guest ID for root probe and write of guest TLB entry */
+		mips32_SetGuestRIDtoGuestID();
+	}
+#endif
 
 	old_entryhi = read_c0_entryhi();
 
@@ -559,6 +698,11 @@ int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index)
 
 	write_c0_entryhi(old_entryhi);
 	mtc0_tlbw_hazard();
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		mips32_ClearGuestRID();
+	}
+#endif
 	tlbw_use_hazard();
 
 	local_irq_restore(flags);
@@ -574,6 +718,11 @@ void kvm_mips_flush_host_tlb(int skip_kseg0)
 	int entry = 0;
 	int maxentry = current_cpu_data.tlbsize;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* kseg0 should always be flushed in VZ emulation mode */
+	/* If this changes then clear GuestRID after tlb_read */
+	BUG_ON(cpu_has_vz && skip_kseg0);
+#endif
 
 	local_irq_save(flags);
 
@@ -626,7 +775,7 @@ kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
 {
 	unsigned long asid = asid_cache(cpu);
 
-	if (!(ASID_MASK(ASID_INC(asid)))) {
+	if (!((asid += ASID_INC) & ASID_MASK)) {
 		if (cpu_has_vtag_icache) {
 			flush_icache_all();
 		}
@@ -663,11 +812,22 @@ void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu)
 		vcpu->arch.shadow_tlb[cpu][entry].tlb_lo0 = read_c0_entrylo0();
 		vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1 = read_c0_entrylo1();
 		vcpu->arch.shadow_tlb[cpu][entry].tlb_mask = read_c0_pagemask();
+#ifdef CONFIG_KVM_MIPS_VZ
+		vcpu->arch.shadow_tlb[cpu][entry].guestctl1 = 0;
+		if (cpu_has_vzguestid) {
+			vcpu->arch.shadow_tlb[cpu][entry].guestctl1 = read_c0_guestctl1();
+		}
+#endif
 	}
 
 	write_c0_entryhi(old_entryhi);
 	write_c0_pagemask(old_pagemask);
 	mtc0_tlbw_hazard();
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		mips32_ClearGuestRID();
+	}
+#endif
 
 	local_irq_restore(flags);
 
@@ -693,6 +853,14 @@ void kvm_shadow_tlb_load(struct kvm_vcpu *vcpu)
 		write_c0_index(entry);
 		mtc0_tlbw_hazard();
 
+#ifdef CONFIG_KVM_MIPS_VZ
+		if (cpu_has_vzguestid) {
+			/* Set GuestID for root write of guest TLB entry */
+			mips32_SetGuestRID((vcpu->arch.shadow_tlb[cpu][entry].
+					    guestctl1 & GUESTCTL1_RID) >>
+					   GUESTCTL1_RID_SHIFT);
+		}
+#endif
 		tlb_write_indexed();
 		tlbw_use_hazard();
 	}
@@ -700,9 +868,57 @@ void kvm_shadow_tlb_load(struct kvm_vcpu *vcpu)
 	tlbw_use_hazard();
 	write_c0_entryhi(old_ctx);
 	mtc0_tlbw_hazard();
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		mips32_ClearGuestRID();
+	}
+#endif
 	local_irq_restore(flags);
 }
 
+#ifdef CONFIG_KVM_MIPS_VZ
+void kvm_vz_local_flush_guest_tlb_all(void)
+{
+	unsigned long flags;
+	unsigned long old_ctx;
+	int entry = 0;
+	struct mips_coproc *cop0 = NULL;	/* ignored by the VZ guest-CP0 accessors */
+
+	if (cpu_has_tlbinv) {
+
+		local_irq_save(flags);
+
+		/* Blast 'em all away. */
+		kvm_write_c0_guest_index(cop0, 0);
+		tlbw_use_hazard();
+		tlb_guest_invalidate_flush();
+
+		local_irq_restore(flags);
+
+		return;
+	}
+
+	local_irq_save(flags);
+	/* Save old context and create impossible VPN2 value */
+	old_ctx = kvm_read_c0_guest_entryhi(cop0);
+	kvm_write_c0_guest_entrylo0(cop0, 0);
+	kvm_write_c0_guest_entrylo1(cop0, 0);
+
+	/* Blast 'em all away. */
+	while (entry < current_cpu_data.vz.tlbsize) {
+		/* Make sure all entries differ. */
+		kvm_write_c0_guest_entryhi(cop0, UNIQUE_ENTRYHI(entry));
+		kvm_write_c0_guest_index(cop0, entry);
+		mtc0_tlbw_hazard();
+		tlb_write_guest_indexed();
+		entry++;
+	}
+	tlbw_use_hazard();
+	kvm_write_c0_guest_entryhi(cop0, old_ctx);
+	mtc0_tlbw_hazard();
+	local_irq_restore(flags);
+}
+#endif
 
 void kvm_local_flush_tlb_all(void)
 {
@@ -710,6 +926,21 @@ void kvm_local_flush_tlb_all(void)
 	unsigned long old_ctx;
 	int entry = 0;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_tlbinv) {
+
+		local_irq_save(flags);
+
+		/* Blast 'em all away. */
+		write_c0_index(0);
+		tlbw_use_hazard();
+		tlb_invalidate_flush();
+
+		local_irq_restore(flags);
+
+		return;
+	}
+#endif
 	local_irq_save(flags);
 	/* Save old context and create impossible VPN2 value */
 	old_ctx = read_c0_entryhi();
@@ -729,6 +960,11 @@ void kvm_local_flush_tlb_all(void)
 	write_c0_entryhi(old_ctx);
 	mtc0_tlbw_hazard();
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (atomic_read(&kvm_mips_instance) != 0) {
+		kvm_vz_local_flush_guest_tlb_all();
+	}
+#endif
 	local_irq_restore(flags);
 }
 
@@ -744,6 +980,9 @@ void kvm_mips_init_shadow_tlb(struct kvm_vcpu *vcpu)
 			vcpu->arch.shadow_tlb[cpu][entry].tlb_lo1 = 0x0;
 			vcpu->arch.shadow_tlb[cpu][entry].tlb_mask =
 			    read_c0_pagemask();
+#ifdef CONFIG_KVM_MIPS_VZ
+			vcpu->arch.shadow_tlb[cpu][entry].guestctl1 = 0x0;
+#endif
 #ifdef DEBUG
 			kvm_debug
 			    ("shadow_tlb[%d][%d]: tlb_hi: %#lx, lo0: %#lx, lo1: %#lx\n",
@@ -759,8 +998,11 @@ void kvm_mips_init_shadow_tlb(struct kvm_vcpu *vcpu)
 /* Restore ASID once we are scheduled back after preemption */
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	unsigned long flags;
+	ulong flags;
 	int newasid = 0;
+#ifdef CONFIG_KVM_MIPS_VZ
+	int restore_regs = 0;
+#endif
 
 #ifdef DEBUG
 	kvm_debug("%s: vcpu %p, cpu: %d\n", __func__, vcpu, cpu);
@@ -770,6 +1012,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	local_irq_save(flags);
 
+	if (vcpu->arch.last_sched_cpu != cpu)
+		kvm_info("[%d->%d]KVM VCPU[%d] switch\n",
+			 vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+
 	if (((vcpu->arch.
 	      guest_kernel_asid[cpu] ^ asid_cache(cpu)) & ASID_VERSION_MASK)) {
 		kvm_get_new_mmu_context(&vcpu->arch.guest_kernel_mm, cpu, vcpu);
@@ -780,6 +1026,21 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		    vcpu->arch.guest_user_mm.context.asid[cpu];
 		newasid++;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+		/* Set the GuestID for the guest VM.  A vcpu has a different
+		 * vzguestid on each host cpu in an smp system.
+		 */
+		if (cpu_has_vzguestid) {
+			vcpu->arch.vzguestid[cpu] =
+					vcpu->arch.guest_kernel_asid[cpu];
+			if (KVM_VZROOTID == (vcpu->arch.vzguestid[cpu] &
+						 KVM_VZGUESTID_MASK)) {
+				vcpu->arch.vzguestid[cpu] =
+					vcpu->arch.guest_user_asid[cpu];
+			}
+			restore_regs = 1;
+		}
+#endif
 		kvm_info("[%d]: cpu_context: %#lx\n", cpu,
 			 cpu_context(cpu, current->mm));
 		kvm_info("[%d]: Allocated new ASID for Guest Kernel: %#x\n",
@@ -788,11 +1049,28 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 			 vcpu->arch.guest_user_asid[cpu]);
 	}
 
-	if (vcpu->arch.last_sched_cpu != cpu) {
-		kvm_info("[%d->%d]KVM VCPU[%d] switch\n",
-			 vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vzguestid) {
+		/* restore cp0 registers if another guest has been running */
+		if ((read_c0_guestctl1() ^ vcpu->arch.vzguestid[cpu]) &
+				KVM_VZGUESTID_MASK) {
+			change_c0_guestctl1(KVM_VZGUESTID_MASK,
+					vcpu->arch.vzguestid[cpu]);
+			restore_regs = 1;
+		}
+	} else {
+		restore_regs = 1;
+		kvm_vz_local_flush_guest_tlb_all();
 	}
 
+	if (vcpu->arch.last_sched_cpu != cpu)
+		restore_regs = 1;
+
+	if (restore_regs)
+		kvm_mips_callbacks->vcpu_ioctl_set_regs(vcpu,
+							&vcpu->arch.guest_regs);
+#endif
+
 	/* Only reload shadow host TLB if new ASIDs haven't been allocated */
 #if 0
 	if ((atomic_read(&kvm_mips_instance) > 1) && !newasid) {
@@ -801,28 +1079,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 #endif
 
-	if (!newasid) {
-		/* If we preempted while the guest was executing, then reload the pre-empted ASID */
-		if (current->flags & PF_VCPU) {
-			write_c0_entryhi(ASID_MASK(vcpu->arch.preempt_entryhi));
-			ehb();
-		}
-	} else {
-		/* New ASIDs were allocated for the VM */
-
-		/* Were we in guest context? If so then the pre-empted ASID is no longer
-		 * valid, we need to set it to what it should be based on the mode of
-		 * the Guest (Kernel/User)
-		 */
-		if (current->flags & PF_VCPU) {
-			if (KVM_GUEST_KERNEL_MODE(vcpu))
-				write_c0_entryhi(ASID_MASK(vcpu->arch.
-						 guest_kernel_asid[cpu]));
-			else
-				write_c0_entryhi(ASID_MASK(vcpu->arch.
-						 guest_user_asid[cpu]));
-			ehb();
-		}
+	/* If we preempted while the guest was executing, then reload the ASID
+	 * based on the mode of the Guest (Kernel/User)
+	 */
+	if (current->flags & PF_VCPU) {
+		if (KVM_GUEST_KERNEL_MODE(vcpu))
+			write_c0_entryhi(vcpu->arch.guest_kernel_asid[cpu] &
+					 ASID_MASK);
+		else
+			write_c0_entryhi(vcpu->arch.guest_user_asid[cpu] &
+					 ASID_MASK);
+		ehb();
 	}
 
 	local_irq_restore(flags);
@@ -832,21 +1099,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 /* ASID can change if another task is scheduled during preemption */
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	unsigned long flags;
+	ulong flags;
 	uint32_t cpu;
 
 	local_irq_save(flags);
 
 	cpu = smp_processor_id();
-
-
-	vcpu->arch.preempt_entryhi = read_c0_entryhi();
 	vcpu->arch.last_sched_cpu = cpu;
 
-#if 0
-	if ((atomic_read(&kvm_mips_instance) > 1)) {
-		kvm_shadow_tlb_put(vcpu);
-	}
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* save guest cp0 registers */
+	kvm_mips_callbacks->vcpu_ioctl_get_regs(vcpu, &vcpu->arch.guest_regs);
 #endif
 
 	if (((cpu_context(cpu, current->mm) ^ asid_cache(cpu)) &
@@ -863,23 +1126,28 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	unsigned long paddr, flags;
 	uint32_t inst;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	int index;
+	ulong paddr, flags;
 
-	if (KVM_GUEST_KSEGX((unsigned long) opc) < KVM_GUEST_KSEG0 ||
-	    KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
+	if (KVM_GUEST_KSEGX((ulong) opc) < KVM_GUEST_KSEG0 ||
+	    KVM_GUEST_KSEGX((ulong) opc) == KVM_GUEST_KSEG23) {
+#ifdef CONFIG_KVM_MIPS_VZ
+		/* TODO VZ verify if both kvm_get_inst paths are used */
+		BUG_ON(cpu_has_vz);
+#endif
 		local_irq_save(flags);
-		index = kvm_mips_host_tlb_lookup(vcpu, (unsigned long) opc);
+		index = kvm_mips_host_tlb_lookup(vcpu, (ulong) opc);
 		if (index >= 0) {
 			inst = *(opc);
 		} else {
 			index =
 			    kvm_mips_guest_tlb_lookup(vcpu,
-						      ((unsigned long) opc & VPN2_MASK)
+						      ((ulong) opc & VPN2_MASK)
 						      |
-						      ASID_MASK(kvm_read_c0_guest_entryhi(cop0)));
+						      (kvm_read_c0_guest_entryhi
+						       (cop0) & ASID_MASK));
 			if (index < 0) {
 				kvm_err
 				    ("%s: get_user_failed for %p, vcpu: %p, ASID: %#lx\n",
@@ -897,8 +1165,11 @@ uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 		local_irq_restore(flags);
 	} else if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
 		paddr =
-		    kvm_mips_translate_guest_kseg0_to_hpa(vcpu,
-							 (unsigned long) opc);
+		    kvm_mips_translate_guest_kseg0_to_hpa(vcpu, (ulong) opc);
+
+		if (paddr == KVM_INVALID_ADDR)
+			return KVM_INVALID_INST;
+
 		inst = *(uint32_t *) CKSEG0ADDR(paddr);
 	} else {
 		kvm_err("%s: illegal address: %p\n", __func__, opc);
@@ -926,3 +1197,6 @@ EXPORT_SYMBOL(kvm_mips_dump_guest_tlbs);
 EXPORT_SYMBOL(kvm_get_inst);
 EXPORT_SYMBOL(kvm_arch_vcpu_load);
 EXPORT_SYMBOL(kvm_arch_vcpu_put);
+#ifdef CONFIG_KVM_MIPS_VZ
+EXPORT_SYMBOL(kvm_mips_handle_vz_root_tlb_fault);
+#endif
-- 
1.7.11.3

* [PATCH 12/18] KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons.
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (10 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 11/18] KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root context Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 13/18] KVM/MIPS32-VZ: Top level handler for Guest faults Sanjay Lal
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Add the VZ-related exit reason strings used in the trace logs; the
  entries must stay in the same order as enum kvm_mips_exit_types in
  kvm_host.h.

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_mips_stats.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/mips/kvm/kvm_mips_stats.c b/arch/mips/kvm/kvm_mips_stats.c
index 075904b..c0d0c0f 100644
--- a/arch/mips/kvm/kvm_mips_stats.c
+++ b/arch/mips/kvm/kvm_mips_stats.c
@@ -3,7 +3,7 @@
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
-* KVM/MIPS: COP0 access histogram
+* KVM/MIPS: VM Exit stats, COP0 access histogram
 *
 * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
 * Authors: Sanjay Lal <sanjayl@kymasys.com>
@@ -26,6 +26,21 @@ char *kvm_mips_exit_types_str[MAX_KVM_MIPS_EXIT_TYPES] = {
 	"Reserved Inst",
 	"Break Inst",
 	"D-Cache Flushes",
+#ifdef CONFIG_KVM_MIPS_VZ
+	"Hypervisor GPSI",
+	"Hypervisor GPSI [CP0]",
+	"Hypervisor GPSI [CACHE]",
+	"Hypervisor GSFC",
+	"Hypervisor GSFC [STATUS]",
+	"Hypervisor GSFC [CAUSE]",
+	"Hypervisor GSFC [INTCTL]",
+	"Hypervisor HC",
+	"Hypervisor GRR",
+	"Hypervisor GVA",
+	"Hypervisor GHFC",
+	"Hypervisor GPA",
+	"Hypervisor RESV",
+#endif
 };
 
 char *kvm_cop0_str[N_MIPS_COPROC_REGS] = {
-- 
1.7.11.3

* [PATCH 13/18] KVM/MIPS32-VZ: Top level handler for Guest faults
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (11 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 12/18] KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 14/18] KVM/MIPS32-VZ: Guest exception batching support Sanjay Lal
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Add VZ-specific VM exit counters to the KVM debugfs stats.
- Add a top-level handler that routes Guest Exit (T_GUEST_EXIT)
  exceptions to the mode-specific handle_guest_exit callback; a sketch
  of the decode step follows below.
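
For orientation, a sketch of what the VZ-side handle_guest_exit
callback does with a T_GUEST_EXIT exception. This is illustrative only:
the GuestCtl0 accessor and GEXC macro names are assumed to come from
mipsvzregs.h, and the emulate_* helpers are hypothetical names.

	/* Illustrative: decode the exit reason from GuestCtl0.GExcCode
	 * and bump the matching VCPU_STAT counter from this patch.
	 */
	static int kvm_vz_guest_exit_sketch(struct kvm_vcpu *vcpu)
	{
		u32 gexccode = (read_c0_guestctl0() & GUESTCTL0_GEXC) >>
			       GUESTCTL0_GEXC_SHIFT;

		switch (gexccode) {
		case GUESTCTL0_GEXC_GPSI:	/* privileged/sensitive inst */
			++vcpu->stat.hypervisor_gpsi_exits;
			return kvm_vz_emulate_priv_inst(vcpu);
		case GUESTCTL0_GEXC_GSFC:	/* software field change */
			++vcpu->stat.hypervisor_gsfc_exits;
			return kvm_vz_emulate_field_change(vcpu);
		default:
			++vcpu->stat.hypervisor_resv_exits;
			return RESUME_HOST;	/* unhandled: back to userspace */
		}
	}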

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_mips.c | 73 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 53 insertions(+), 20 deletions(-)

diff --git a/arch/mips/kvm/kvm_mips.c b/arch/mips/kvm/kvm_mips.c
index e0dad02..cad9112 100644
--- a/arch/mips/kvm/kvm_mips.c
+++ b/arch/mips/kvm/kvm_mips.c
@@ -18,6 +18,9 @@
 #include <asm/page.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
+#ifdef CONFIG_KVM_MIPS_VZ
+#include <asm/mipsvzregs.h>
+#endif
 
 #include <linux/kvm_host.h>
 
@@ -47,6 +50,21 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "resvd_inst", VCPU_STAT(resvd_inst_exits) },
 	{ "break_inst", VCPU_STAT(break_inst_exits) },
 	{ "flush_dcache", VCPU_STAT(flush_dcache_exits) },
+#ifdef CONFIG_KVM_MIPS_VZ
+	{ "hypervisor_gpsi", VCPU_STAT(hypervisor_gpsi_exits) },
+	{ "hypervisor_gpsi_cp0", VCPU_STAT(hypervisor_gpsi_cp0_exits) },
+	{ "hypervisor_gpsi_cache", VCPU_STAT(hypervisor_gpsi_cache_exits) },
+	{ "hypervisor_gsfc", VCPU_STAT(hypervisor_gsfc_exits) },
+	{ "hypervisor_gsfc_cp0_status", VCPU_STAT(hypervisor_gsfc_cp0_status_exits) },
+	{ "hypervisor_gsfc_cp0_cause", VCPU_STAT(hypervisor_gsfc_cp0_cause_exits) },
+	{ "hypervisor_gsfc_cp0_intctl", VCPU_STAT(hypervisor_gsfc_cp0_intctl_exits) },
+	{ "hypervisor_hc", VCPU_STAT(hypervisor_hc_exits) },
+	{ "hypervisor_grr", VCPU_STAT(hypervisor_grr_exits) },
+	{ "hypervisor_gva", VCPU_STAT(hypervisor_gva_exits) },
+	{ "hypervisor_ghfc", VCPU_STAT(hypervisor_ghfc_exits) },
+	{ "hypervisor_gpa", VCPU_STAT(hypervisor_gpa_exits) },
+	{ "hypervisor_resv", VCPU_STAT(hypervisor_resv_exits) },
+#endif
 	{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
 	{NULL}
 };
@@ -57,6 +75,9 @@ static int kvm_mips_reset_vcpu(struct kvm_vcpu *vcpu)
 	for_each_possible_cpu(i) {
 		vcpu->arch.guest_kernel_asid[i] = 0;
 		vcpu->arch.guest_user_asid[i] = 0;
+#ifdef CONFIG_KVM_MIPS_VZ
+		vcpu->arch.vzguestid[i] = 0;
+#endif
 	}
 	return 0;
 }
@@ -106,7 +127,7 @@ void kvm_arch_check_processor_compat(void *rtn)
 
 static void kvm_mips_init_tlbs(struct kvm *kvm)
 {
-	unsigned long wired;
+	ulong wired;
 
 	/* Add a wired entry to the TLB, it is used to map the commpage to the Guest kernel */
 	wired = read_c0_wired();
@@ -209,19 +230,19 @@ int kvm_arch_create_memslot(struct kvm_memory_slot *slot, unsigned long npages)
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-                                struct kvm_memory_slot *memslot,
-                                struct kvm_userspace_memory_region *mem,
-                                enum kvm_mr_change change)
+				   struct kvm_memory_slot *memslot,
+				   struct kvm_userspace_memory_region *mem,
+				   enum kvm_mr_change change)
 {
 	return 0;
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-                                struct kvm_userspace_memory_region *mem,
-                                const struct kvm_memory_slot *old,
-                                enum kvm_mr_change change)
+				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_memory_slot *old,
+				   enum kvm_mr_change change)
 {
-	unsigned long npages = 0;
+	ulong npages = 0;
 	int i, err = 0;
 
 	kvm_debug("%s: kvm: %p slot: %d, GPA: %llx, size: %llx, QVA: %llx\n",
@@ -236,7 +257,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		if (npages) {
 			kvm->arch.guest_pmap_npages = npages;
 			kvm->arch.guest_pmap =
-			    kzalloc(npages * sizeof(unsigned long), GFP_KERNEL);
+			    kzalloc(npages * sizeof(ulong), GFP_KERNEL);
 
 			if (!kvm->arch.guest_pmap) {
 				kvm_err("Failed to allocate guest PMAP");
@@ -345,7 +366,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	       mips32_GuestExceptionEnd - mips32_GuestException);
 
 	/* Invalidate the icache for these ranges */
-	mips32_SyncICache((unsigned long) gebase, ALIGN(size, PAGE_SIZE));
+	mips32_SyncICache((ulong) gebase, ALIGN(size, PAGE_SIZE));
 
 	/* Allocate comm page for guest kernel, a TLB will be reserved for mapping GVA @ 0xFFFF8000 to this page */
 	vcpu->arch.kseg0_commpage = kzalloc(PAGE_SIZE << 1, GFP_KERNEL);
@@ -376,6 +397,12 @@ out:
 	return ERR_PTR(err);
 }
 
+int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+
 void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	hrtimer_cancel(&vcpu->arch.comparecount_timer);
@@ -527,7 +554,7 @@ out:
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	struct kvm_memory_slot *memslot;
-	unsigned long ga, ga_end;
+	ulong ga, ga_end;
 	int is_dirty = 0;
 	int r;
 	unsigned long n;
@@ -602,11 +629,6 @@ kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	return -ENOTSUPP;
 }
 
-int kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
-{
-	return 0;
-}
-
 int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
 	return -ENOTSUPP;
@@ -630,6 +652,11 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_COALESCED_MMIO:
 		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
 		break;
+#ifdef CONFIG_KVM_MIPS_VZ
+	case KVM_CAP_MIPS_VZ_ASE:
+		r = cpu_has_vz;
+		break;
+#endif
 	default:
 		r = 0;
 		break;
@@ -721,7 +748,7 @@ enum hrtimer_restart kvm_mips_comparecount_wakeup(struct hrtimer *timer)
 	struct kvm_vcpu *vcpu;
 
 	vcpu = container_of(timer, struct kvm_vcpu, arch.comparecount_timer);
-	kvm_mips_comparecount_func((unsigned long) vcpu);
+	kvm_mips_comparecount_func((ulong) vcpu);
 	hrtimer_forward_now(&vcpu->arch.comparecount_timer,
 			    ktime_set(0, MS_TO_NS(10)));
 	return HRTIMER_RESTART;
@@ -776,14 +803,13 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	uint32_t cause = vcpu->arch.host_cp0_cause;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
 	/* Set a default exit reason */
 	run->exit_reason = KVM_EXIT_UNKNOWN;
-	run->ready_for_interrupt_injection = 1;
 
 	/* Set the appropriate status bits based on host CPU features, before we hit the scheduler */
 	kvm_mips_set_c0_status();
@@ -887,6 +913,13 @@ int kvm_mips_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		ret = kvm_mips_callbacks->handle_break(vcpu);
 		break;
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	case T_GUEST_EXIT:
+		/* defer exit accounting to handler */
+		ret = kvm_mips_callbacks->handle_guest_exit(vcpu);
+		break;
+
+#endif
 	default:
 		kvm_err
 		    ("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
-- 
1.7.11.3

* [PATCH 14/18] KVM/MIPS32-VZ: Guest exception batching support.
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (12 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 13/18] KVM/MIPS32-VZ: Top level handler for Guest faults Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 15/18] KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and dump out useful info Sanjay Lal
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- In Trap & Emulate mode the hypervisor delivers exceptions one at a
  time, in order to comply with the priorities defined by the
  architecture.

- In VZ mode, we simply set all the pending exception bits and let the
  processor deliver them to the guest in the architecturally expected
  priority order (see the sketch below).
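
A minimal sketch of what the two flags change in the delivery loop.
The irq_deliver callback, MIPS_EXC_MAX bound and loop shape are assumed
to match the existing kvm_mips_int.c code; this is not the exact loop:

	/* Illustrative: drain pending exception bits in priority order.
	 * With KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE == 0 (T&E) we stop after
	 * the first successful delivery; with 1 (VZ) every pending bit
	 * is set and the processor sorts out the priority itself.
	 */
	static void kvm_mips_deliver_sketch(struct kvm_vcpu *vcpu, u32 cause)
	{
		ulong *pending = &vcpu->arch.pending_exceptions;
		unsigned int priority;

		if (!*pending)
			return;

		priority = __ffs(*pending);
		while (priority <= MIPS_EXC_MAX) {
			if (kvm_mips_callbacks->irq_deliver(vcpu, priority,
							    cause) &&
			    !KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE)
				break;
			priority = find_next_bit(pending, MIPS_EXC_MAX,
						 priority + 1);
		}
	}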

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_mips_int.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/mips/kvm/kvm_mips_int.h b/arch/mips/kvm/kvm_mips_int.h
index 20da7d2..7eac28e 100644
--- a/arch/mips/kvm/kvm_mips_int.h
+++ b/arch/mips/kvm/kvm_mips_int.h
@@ -29,8 +29,13 @@
 
 #define C_TI        (_ULCAST_(1) << 30)
 
+#ifdef CONFIG_KVM_MIPS_VZ
+#define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (1)
+#define KVM_MIPS_IRQ_CLEAR_ALL_AT_ONCE   (1)
+#else
 #define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (0)
 #define KVM_MIPS_IRQ_CLEAR_ALL_AT_ONCE   (0)
+#endif
 
 void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
 void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
-- 
1.7.11.3

* [PATCH 15/18] KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and dump out useful info
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (13 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 14/18] KVM/MIPS32-VZ: Guest exception batching support Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures Sanjay Lal
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal


Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_trap_emul.c | 68 ++++++++++++++++++++++++++++---------------
 1 file changed, 44 insertions(+), 24 deletions(-)

diff --git a/arch/mips/kvm/kvm_trap_emul.c b/arch/mips/kvm/kvm_trap_emul.c
index 466aeef..19b32a1 100644
--- a/arch/mips/kvm/kvm_trap_emul.c
+++ b/arch/mips/kvm/kvm_trap_emul.c
@@ -27,7 +27,7 @@ static gpa_t kvm_trap_emul_gva_to_gpa_cb(gva_t gva)
 	if ((kseg == CKSEG0) || (kseg == CKSEG1))
 		gpa = CPHYSADDR(gva);
 	else {
-		printk("%s: cannot find GPA for GVA: %#lx\n", __func__, gva);
+		kvm_err("%s: cannot find GPA for GVA: %#lx\n", __func__, gva);
 		kvm_mips_dump_host_tlbs();
 		gpa = KVM_INVALID_ADDR;
 	}
@@ -39,12 +39,29 @@ static gpa_t kvm_trap_emul_gva_to_gpa_cb(gva_t gva)
 	return gpa;
 }
 
+#ifdef CONFIG_KVM_MIPS_VZ
+static int kvm_trap_emul_no_handler(struct kvm_vcpu *vcpu)
+{
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
+
+	kvm_err
+	    ("Exception Code: %d, not handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#lx\n",
+	     exccode, opc, kvm_get_inst(opc, vcpu), badvaddr,
+	     kvm_read_c0_guest_status(vcpu->arch.cop0));
+	kvm_arch_vcpu_dump_regs(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	return RESUME_HOST;
+}
+#endif
 
 static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -77,9 +94,9 @@ static int kvm_trap_emul_handle_cop_unusable(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -124,9 +141,9 @@ static int kvm_trap_emul_handle_tlb_mod(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -174,9 +191,9 @@ static int kvm_trap_emul_handle_tlb_st_miss(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -228,9 +245,9 @@ static int kvm_trap_emul_handle_tlb_ld_miss(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -261,9 +278,9 @@ static int kvm_trap_emul_handle_addr_err_st(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -294,8 +311,8 @@ static int kvm_trap_emul_handle_addr_err_ld(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -312,8 +329,8 @@ static int kvm_trap_emul_handle_syscall(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -330,8 +347,8 @@ static int kvm_trap_emul_handle_res_inst(struct kvm_vcpu *vcpu)
 static int kvm_trap_emul_handle_break(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	uint32_t __user *opc = (uint32_t __user *) vcpu->arch.pc;
-	unsigned long cause = vcpu->arch.host_cp0_cause;
+	uint32_t *opc = (uint32_t *) vcpu->arch.pc;
+	ulong cause = vcpu->arch.host_cp0_cause;
 	enum emulation_result er = EMULATE_DONE;
 	int ret = RESUME_GUEST;
 
@@ -460,6 +477,9 @@ static struct kvm_mips_callbacks kvm_trap_emul_callbacks = {
 	.handle_syscall = kvm_trap_emul_handle_syscall,
 	.handle_res_inst = kvm_trap_emul_handle_res_inst,
 	.handle_break = kvm_trap_emul_handle_break,
+#ifdef CONFIG_KVM_MIPS_VZ
+	.handle_guest_exit = kvm_trap_emul_no_handler,
+#endif
 
 	.vm_init = kvm_trap_emul_vm_init,
 	.vcpu_init = kvm_trap_emul_vcpu_init,
-- 
1.7.11.3

* [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures.
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (14 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 15/18] KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and dump out useful info Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-28 15:24   ` Paolo Bonzini
  2013-05-19  5:47 ` [PATCH 17/18] KVM/MIPS32: Revert to older method for accessing ASID parameters Sanjay Lal
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal


Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/include/asm/kvm_host.h | 244 ++++++++++++++++++++++++++++++---------
 1 file changed, 191 insertions(+), 53 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index e68781e..c92e297 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -19,21 +19,28 @@
 #include <linux/threads.h>
 #include <linux/spinlock.h>
 
+#ifdef CONFIG_KVM_MIPS_VZ
+#include <asm/mipsvzregs.h>
+#endif
 
-#define KVM_MAX_VCPUS		1
-#define KVM_USER_MEM_SLOTS	8
+#define KVM_MAX_VCPUS 8
+#define KVM_USER_MEM_SLOTS 8
 /* memory slots that does not exposed to userspace */
-#define KVM_PRIVATE_MEM_SLOTS 	0
+#define KVM_PRIVATE_MEM_SLOTS 0
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
 
 /* Don't support huge pages */
-#define KVM_HPAGE_GFN_SHIFT(x)	0
+#define KVM_HPAGE_GFN_SHIFT(x)  0
 
 /* We don't currently support large pages. */
 #define KVM_NR_PAGE_SIZES	1
-#define KVM_PAGES_PER_HPAGE(x)	1
+#define KVM_PAGES_PER_HPAGE(x)  1
 
+#ifdef CONFIG_KVM_MIPS_VZ
+#define KVM_VZROOTID		(GUESTCTL1_VZ_ROOT_GUESTID)
+#define KVM_VZGUESTID_MASK	(GUESTCTL1_ID)
+#endif
 
 
 /* Special address that contains the comm page, used for reducing # of traps */
@@ -42,11 +49,20 @@
 #define KVM_GUEST_KERNEL_MODE(vcpu)	((kvm_read_c0_guest_status(vcpu->arch.cop0) & (ST0_EXL | ST0_ERL)) || \
 					((kvm_read_c0_guest_status(vcpu->arch.cop0) & KSU_USER) == 0))
 
+#ifdef CONFIG_KVM_MIPS_VZ
+#define KVM_GUEST_KUSEG             0x00000000UL
+#define KVM_GUEST_KSEG0             0x80000000UL
+#define KVM_GUEST_KSEG1             0xa0000000UL
+#define KVM_GUEST_KSEG23            0xc0000000UL
+#define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0xe0000000)
+#define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
+#else
 #define KVM_GUEST_KUSEG             0x00000000UL
 #define KVM_GUEST_KSEG0             0x40000000UL
 #define KVM_GUEST_KSEG23            0x60000000UL
 #define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0x60000000)
 #define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
+#endif
 
 #define KVM_GUEST_CKSEG0ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
 #define KVM_GUEST_CKSEG1ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
@@ -100,6 +116,21 @@ struct kvm_vcpu_stat {
 	u32 resvd_inst_exits;
 	u32 break_inst_exits;
 	u32 flush_dcache_exits;
+#ifdef CONFIG_KVM_MIPS_VZ
+	u32 hypervisor_gpsi_exits;
+	u32 hypervisor_gpsi_cp0_exits;
+	u32 hypervisor_gpsi_cache_exits;
+	u32 hypervisor_gsfc_exits;
+	u32 hypervisor_gsfc_cp0_status_exits;
+	u32 hypervisor_gsfc_cp0_cause_exits;
+	u32 hypervisor_gsfc_cp0_intctl_exits;
+	u32 hypervisor_hc_exits;
+	u32 hypervisor_grr_exits;
+	u32 hypervisor_gva_exits;
+	u32 hypervisor_ghfc_exits;
+	u32 hypervisor_gpa_exits;
+	u32 hypervisor_resv_exits;
+#endif
 	u32 halt_wakeup;
 };
 
@@ -118,6 +149,21 @@ enum kvm_mips_exit_types {
 	RESVD_INST_EXITS,
 	BREAK_INST_EXITS,
 	FLUSH_DCACHE_EXITS,
+#ifdef CONFIG_KVM_MIPS_VZ
+	HYPERVISOR_GPSI_EXITS,
+	HYPERVISOR_GPSI_CP0_EXITS,
+	HYPERVISOR_GPSI_CACHE_EXITS,
+	HYPERVISOR_GSFC_EXITS,
+	HYPERVISOR_GSFC_CP0_STATUS_EXITS,
+	HYPERVISOR_GSFC_CP0_CAUSE_EXITS,
+	HYPERVISOR_GSFC_CP0_INTCTL_EXITS,
+	HYPERVISOR_HC_EXITS,
+	HYPERVISOR_GRR_EXITS,
+	HYPERVISOR_GVA_EXITS,
+	HYPERVISOR_GHFC_EXITS,
+	HYPERVISOR_GPA_EXITS,
+	HYPERVISOR_RESV_EXITS,
+#endif
 	MAX_KVM_MIPS_EXIT_TYPES
 };
 
@@ -126,8 +172,8 @@ struct kvm_arch_memory_slot {
 
 struct kvm_arch {
 	/* Guest GVA->HPA page table */
-	unsigned long *guest_pmap;
-	unsigned long guest_pmap_npages;
+	ulong *guest_pmap;
+	ulong guest_pmap_npages;
 
 	/* Wired host TLB used for the commpage */
 	int commpage_tlb;
@@ -137,9 +183,9 @@ struct kvm_arch {
 #define N_MIPS_COPROC_SEL   	8
 
 struct mips_coproc {
-	unsigned long reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
+	ulong reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
 #ifdef CONFIG_KVM_MIPS_DEBUG_COP0_COUNTERS
-	unsigned long stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
+	ulong stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
 #endif
 };
 
@@ -294,6 +340,9 @@ enum mips_mmu_types {
 #define T_RES_INST      10	/* Reserved instruction exception */
 #define T_COP_UNUSABLE      11	/* Coprocessor unusable */
 #define T_OVFLOW        12	/* Arithmetic overflow */
+#ifdef CONFIG_KVM_MIPS_VZ
+#define T_GUEST_EXIT        27	/* Guest Exit (VZ ASE) */
+#endif
 
 /*
  * Trap definitions added for r4000 port.
@@ -336,7 +385,7 @@ enum emulation_result {
 #define VPN2_MASK           0xffffe000
 #define TLB_IS_GLOBAL(x)    (((x).tlb_lo0 & MIPS3_PG_G) && ((x).tlb_lo1 & MIPS3_PG_G))
 #define TLB_VPN2(x)         ((x).tlb_hi & VPN2_MASK)
-#define TLB_ASID(x)         (ASID_MASK((x).tlb_hi))
+#define TLB_ASID(x)         ((x).tlb_hi & ASID_MASK)
 #define TLB_IS_VALID(x, va) (((va) & (1 << PAGE_SHIFT)) ? ((x).tlb_lo1 & MIPS3_PG_V) : ((x).tlb_lo0 & MIPS3_PG_V))
 
 struct kvm_mips_tlb {
@@ -344,26 +393,29 @@ struct kvm_mips_tlb {
 	long tlb_hi;
 	long tlb_lo0;
 	long tlb_lo1;
+#ifdef CONFIG_KVM_MIPS_VZ
+	uint32_t guestctl1;
+#endif
 };
 
 #define KVM_MIPS_GUEST_TLB_SIZE     64
 struct kvm_vcpu_arch {
 	void *host_ebase, *guest_ebase;
-	unsigned long host_stack;
-	unsigned long host_gp;
+	ulong host_stack;
+	ulong host_gp;
 
 	/* Host CP0 registers used when handling exits from guest */
-	unsigned long host_cp0_badvaddr;
-	unsigned long host_cp0_cause;
-	unsigned long host_cp0_epc;
-	unsigned long host_cp0_entryhi;
+	ulong host_cp0_badvaddr;
+	ulong host_cp0_cause;
+	ulong host_cp0_epc;
+	ulong host_cp0_entryhi;
 	uint32_t guest_inst;
 
 	/* GPRS */
-	unsigned long gprs[32];
-	unsigned long hi;
-	unsigned long lo;
-	unsigned long pc;
+	ulong gprs[32];
+	ulong hi;
+	ulong lo;
+	ulong pc;
 
 	/* FPU State */
 	struct mips_fpu_struct fpu;
@@ -380,15 +432,12 @@ struct kvm_vcpu_arch {
 	int32_t host_cp0_count;
 
 	/* Bitmask of exceptions that are pending */
-	unsigned long pending_exceptions;
+	ulong pending_exceptions;
 
 	/* Bitmask of pending exceptions to be cleared */
-	unsigned long pending_exceptions_clr;
+	ulong pending_exceptions_clr;
 
-	unsigned long pending_load_cause;
-
-	/* Save/Restore the entryhi register when are are preempted/scheduled back in */
-	unsigned long preempt_entryhi;
+	ulong pending_load_cause;
 
 	/* S/W Based TLB for guest */
 	struct kvm_mips_tlb guest_tlb[KVM_MIPS_GUEST_TLB_SIZE];
@@ -400,6 +449,13 @@ struct kvm_vcpu_arch {
 
 	struct kvm_mips_tlb shadow_tlb[NR_CPUS][KVM_MIPS_GUEST_TLB_SIZE];
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	/* vcpu's vzguestid is different on each host cpu in an smp system */
+	uint32_t vzguestid[NR_CPUS];
+
+	/* storage for saved guest CP0 registers */
+	struct kvm_regs guest_regs;
+#endif
 
 	struct hrtimer comparecount_timer;
 
@@ -409,6 +465,74 @@ struct kvm_vcpu_arch {
 	int wait;
 };
 
+#ifdef CONFIG_KVM_MIPS_VZ
+
+#define kvm_read_c0_guest_index(cop0)                ((void)cop0, read_c0_guest_index())
+#define kvm_write_c0_guest_index(cop0, val)          ((void)cop0, write_c0_guest_index(val))
+#define kvm_read_c0_guest_random(cop0)               ((void)cop0, read_c0_guest_random())
+#define kvm_read_c0_guest_entrylo0(cop0)             ((void)cop0, read_c0_guest_entrylo0())
+#define kvm_write_c0_guest_entrylo0(cop0, val)       ((void)cop0, write_c0_guest_entrylo0(val))
+#define kvm_read_c0_guest_entrylo1(cop0)             ((void)cop0, read_c0_guest_entrylo1())
+#define kvm_write_c0_guest_entrylo1(cop0, val)       ((void)cop0, write_c0_guest_entrylo1(val))
+#define kvm_read_c0_guest_context(cop0)              ((void)cop0, read_c0_guest_context())
+#define kvm_write_c0_guest_context(cop0, val)        ((void)cop0, write_c0_guest_context(val))
+#define kvm_read_c0_guest_userlocal(cop0)            ((void)cop0, read_c0_guest_userlocal())
+#define kvm_write_c0_guest_userlocal(cop0, val)      ((void)cop0, write_c0_guest_userlocal(val))
+#define kvm_read_c0_guest_pagemask(cop0)             ((void)cop0, read_c0_guest_pagemask())
+#define kvm_write_c0_guest_pagemask(cop0, val)       ((void)cop0, write_c0_guest_pagemask(val))
+#define kvm_read_c0_guest_pagegrain(cop0)            ((void)cop0, read_c0_guest_pagegrain())
+#define kvm_write_c0_guest_pagegrain(cop0, val)      ((void)cop0, write_c0_guest_pagegrain(val))
+#define kvm_read_c0_guest_wired(cop0)                ((void)cop0, read_c0_guest_wired())
+#define kvm_write_c0_guest_wired(cop0, val)          ((void)cop0, write_c0_guest_wired(val))
+#define kvm_read_c0_guest_hwrena(cop0)               ((void)cop0, read_c0_guest_hwrena())
+#define kvm_write_c0_guest_hwrena(cop0, val)         ((void)cop0, write_c0_guest_hwrena(val))
+#define kvm_read_c0_guest_badvaddr(cop0)             ((void)cop0, read_c0_guest_badvaddr())
+#define kvm_write_c0_guest_badvaddr(cop0, val)       ((void)cop0, write_c0_guest_badvaddr(val))
+#define kvm_read_c0_guest_count(cop0)                ((void)cop0, read_c0_guest_count())
+#define kvm_write_c0_guest_count(cop0, val)          ((void)cop0, write_c0_guest_count(val))
+#define kvm_read_c0_guest_entryhi(cop0)              ((void)cop0, read_c0_guest_entryhi())
+#define kvm_write_c0_guest_entryhi(cop0, val)        ((void)cop0, write_c0_guest_entryhi(val))
+#define kvm_read_c0_guest_compare(cop0)              ((void)cop0, read_c0_guest_compare())
+#define kvm_write_c0_guest_compare(cop0, val)        ((void)cop0, write_c0_guest_compare(val))
+#define kvm_read_c0_guest_status(cop0)               ((void)cop0, read_c0_guest_status())
+#define kvm_write_c0_guest_status(cop0, val)         ((void)cop0, write_c0_guest_status(val))
+#define kvm_read_c0_guest_intctl(cop0)               ((void)cop0, read_c0_guest_intctl())
+#define kvm_write_c0_guest_intctl(cop0, val)         ((void)cop0, write_c0_guest_intctl(val))
+#define kvm_read_c0_guest_cause(cop0)                ((void)cop0, read_c0_guest_cause())
+#define kvm_write_c0_guest_cause(cop0, val)          ((void)cop0, write_c0_guest_cause(val))
+#define kvm_read_c0_guest_epc(cop0)                  ((void)cop0, read_c0_guest_epc())
+#define kvm_write_c0_guest_epc(cop0, val)            ((void)cop0, write_c0_guest_epc(val))
+#define kvm_read_c0_guest_prid(cop0)                 (cop0->reg[MIPS_CP0_PRID][0])
+#define kvm_write_c0_guest_prid(cop0, val)           (cop0->reg[MIPS_CP0_PRID][0] = (val))
+#define kvm_read_c0_guest_ebase(cop0)                ((void)cop0, read_c0_guest_ebase())
+#define kvm_write_c0_guest_ebase(cop0, val)          ((void)cop0, write_c0_guest_ebase(val))
+#define kvm_read_c0_guest_config(cop0)               ((void)cop0, read_c0_guest_config())
+#define kvm_read_c0_guest_config1(cop0)              ((void)cop0, read_c0_guest_config1())
+#define kvm_read_c0_guest_config2(cop0)              ((void)cop0, read_c0_guest_config2())
+#define kvm_read_c0_guest_config3(cop0)              ((void)cop0, read_c0_guest_config3())
+#define kvm_read_c0_guest_config4(cop0)              ((void)cop0, read_c0_guest_config4())
+#define kvm_read_c0_guest_config5(cop0)              ((void)cop0, read_c0_guest_config5())
+#define kvm_read_c0_guest_config6(cop0)              ((void)cop0, read_c0_guest_config6())
+#define kvm_read_c0_guest_config7(cop0)              ((void)cop0, read_c0_guest_config7())
+#define kvm_write_c0_guest_config(cop0, val)         ((void)cop0, write_c0_guest_config(val))
+#define kvm_write_c0_guest_config1(cop0, val)        ((void)cop0, write_c0_guest_config1(val))
+#define kvm_write_c0_guest_config2(cop0, val)        ((void)cop0, write_c0_guest_config2(val))
+#define kvm_write_c0_guest_config3(cop0, val)        ((void)cop0, write_c0_guest_config3(val))
+#define kvm_write_c0_guest_config4(cop0, val)        ((void)cop0, write_c0_guest_config4(val))
+#define kvm_write_c0_guest_config5(cop0, val)        ((void)cop0, write_c0_guest_config5(val))
+#define kvm_write_c0_guest_config6(cop0, val)        ((void)cop0, write_c0_guest_config6(val))
+#define kvm_write_c0_guest_config7(cop0, val)        ((void)cop0, write_c0_guest_config7(val))
+#define kvm_read_c0_guest_errorepc(cop0)             ((void)cop0, read_c0_guest_errorepc())
+#define kvm_write_c0_guest_errorepc(cop0, val)       ((void)cop0, write_c0_guest_errorepc(val))
+
+#define kvm_set_c0_guest_status(cop0, val)           ((void)cop0, set_c0_guest_status(val))
+#define kvm_clear_c0_guest_status(cop0, val)         ((void)cop0, clear_c0_guest_status(val))
+#define kvm_set_c0_guest_cause(cop0, val)            ((void)cop0, set_c0_guest_cause(val))
+#define kvm_clear_c0_guest_cause(cop0, val)          ((void)cop0, clear_c0_guest_cause(val))
+#define kvm_change_c0_guest_cause(cop0, change, val) ((void)cop0, change_c0_guest_cause(change, val))
+#define kvm_change_c0_guest_ebase(cop0, change, val) ((void)cop0, change_c0_guest_ebase(change, val))
+
+#else
 
 #define kvm_read_c0_guest_index(cop0)               (cop0->reg[MIPS_CP0_TLB_INDEX][0])
 #define kvm_write_c0_guest_index(cop0, val)         (cop0->reg[MIPS_CP0_TLB_INDEX][0] = val)
@@ -471,6 +595,7 @@ struct kvm_vcpu_arch {
     kvm_set_c0_guest_ebase(cop0, ((val) & (change))); \
 }
 
+#endif
 
 struct kvm_mips_callbacks {
 	int (*handle_cop_unusable) (struct kvm_vcpu *vcpu);
@@ -482,6 +607,9 @@ struct kvm_mips_callbacks {
 	int (*handle_syscall) (struct kvm_vcpu *vcpu);
 	int (*handle_res_inst) (struct kvm_vcpu *vcpu);
 	int (*handle_break) (struct kvm_vcpu *vcpu);
+#ifdef CONFIG_KVM_MIPS_VZ
+	int (*handle_guest_exit) (struct kvm_vcpu *vcpu);
+#endif
 	int (*vm_init) (struct kvm *kvm);
 	int (*vcpu_init) (struct kvm_vcpu *vcpu);
 	int (*vcpu_setup) (struct kvm_vcpu *vcpu);
@@ -517,23 +645,26 @@ uint32_t kvm_get_user_asid(struct kvm_vcpu *vcpu);
 
 uint32_t kvm_get_commpage_asid (struct kvm_vcpu *vcpu);
 
-extern int kvm_mips_handle_kseg0_tlb_fault(unsigned long badbaddr,
+#ifdef CONFIG_KVM_MIPS_VZ
+extern int kvm_mips_handle_vz_root_tlb_fault(ulong badvaddr,
+					     struct kvm_vcpu *vcpu);
+#endif
+extern int kvm_mips_handle_kseg0_tlb_fault(ulong badbaddr,
 					   struct kvm_vcpu *vcpu);
 
-extern int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
+extern int kvm_mips_handle_commpage_tlb_fault(ulong badvaddr,
 					      struct kvm_vcpu *vcpu);
 
 extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 						struct kvm_mips_tlb *tlb,
-						unsigned long *hpa0,
-						unsigned long *hpa1);
+						ulong *hpa0, ulong *hpa1);
 
-extern enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause,
+extern enum emulation_result kvm_mips_handle_tlbmiss(ulong cause,
 						     uint32_t *opc,
 						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause,
+extern enum emulation_result kvm_mips_handle_tlbmod(ulong cause,
 						    uint32_t *opc,
 						    struct kvm_run *run,
 						    struct kvm_vcpu *vcpu);
@@ -542,14 +673,13 @@ extern void kvm_mips_dump_host_tlbs(void);
 extern void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu);
 extern void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu);
 extern void kvm_mips_flush_host_tlb(int skip_kseg0);
-extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long entryhi);
+extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, ulong entryhi);
 extern int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index);
 
-extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu,
-				     unsigned long entryhi);
-extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr);
-extern unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
-						   unsigned long gva);
+extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, ulong entryhi);
+extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, ulong vaddr);
+extern ulong kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
+						   ulong gva);
 extern void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
 				    struct kvm_vcpu *vcpu);
 extern void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu);
@@ -564,57 +694,57 @@ extern void kvm_mips_vcpu_put(struct kvm_vcpu *vcpu);
 uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu);
 enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause);
 
-extern enum emulation_result kvm_mips_emulate_inst(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_inst(ulong cause,
 						   uint32_t *opc,
 						   struct kvm_run *run,
 						   struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_syscall(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_syscall(ulong cause,
 						      uint32_t *opc,
 						      struct kvm_run *run,
 						      struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(ulong cause,
 							 uint32_t *opc,
 							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_tlbinv_ld(ulong cause,
 							uint32_t *opc,
 							struct kvm_run *run,
 							struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_tlbmiss_st(ulong cause,
 							 uint32_t *opc,
 							 struct kvm_run *run,
 							 struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_tlbinv_st(ulong cause,
 							uint32_t *opc,
 							struct kvm_run *run,
 							struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_tlbmod(ulong cause,
 						     uint32_t *opc,
 						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_fpu_exc(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_fpu_exc(ulong cause,
 						      uint32_t *opc,
 						      struct kvm_run *run,
 						      struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_handle_ri(unsigned long cause,
+extern enum emulation_result kvm_mips_handle_ri(ulong cause,
 						uint32_t *opc,
 						struct kvm_run *run,
 						struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_ri_exc(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_ri_exc(ulong cause,
 						     uint32_t *opc,
 						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
 
-extern enum emulation_result kvm_mips_emulate_bp_exc(unsigned long cause,
+extern enum emulation_result kvm_mips_emulate_bp_exc(ulong cause,
 						     uint32_t *opc,
 						     struct kvm_run *run,
 						     struct kvm_vcpu *vcpu);
@@ -624,7 +754,7 @@ extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
 
 enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu);
 
-enum emulation_result kvm_mips_check_privilege(unsigned long cause,
+enum emulation_result kvm_mips_check_privilege(ulong cause,
 					       uint32_t *opc,
 					       struct kvm_run *run,
 					       struct kvm_vcpu *vcpu);
@@ -659,9 +789,17 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
 			       struct kvm_vcpu *vcpu);
 
 /* Misc */
-extern void mips32_SyncICache(unsigned long addr, unsigned long size);
+extern void mips32_SyncICache(ulong addr, ulong size);
 extern int kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
-extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
-
+extern ulong kvm_mips_get_ramsize(struct kvm *kvm);
+
+#ifdef CONFIG_KVM_MIPS_VZ
+/* VZ ASE specific functions */
+extern void kvm_vz_restore_guest_timer_int(struct kvm_vcpu *vcpu,
+					   struct kvm_regs *regs);
+extern void mips32_ClearGuestRID(void);
+extern void mips32_SetGuestRID(ulong guestRID);
+extern void mips32_SetGuestRIDtoGuestID(void);
+#endif
 
 #endif /* __MIPS_KVM_HOST_H__ */
-- 
1.7.11.3

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 17/18] KVM/MIPS32: Revert to older method for accessing ASID parameters
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (15 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-19  5:47 ` [PATCH 18/18] KVM/MIPS32-VZ: Dump out additional info about VZ features as part of /proc/cpuinfo Sanjay Lal
  2013-05-20 15:50 ` [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) David Daney
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal

- Now that commit d532f3d26 has been reverted in the MIPS tree,
  revert to the older method of using the fixed ASID_MASK.
- Trivial cleanup: s/unsigned long/ulong (and u_long)
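
For reference, the reverted access pattern shows up throughout the diff
below; an illustrative extract (not a complete hunk):

	/* before (dynamic ASID sizing, commit d532f3d26) */
	nasid = ASID_MASK(vcpu->arch.gprs[rt]);

	/* after (fixed mask) */
	nasid = vcpu->arch.gprs[rt] & ASID_MASK;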

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_mips_dyntrans.c |  24 ++--
 arch/mips/kvm/kvm_mips_emul.c     | 236 ++++++++++++++++++++++----------------
 2 files changed, 147 insertions(+), 113 deletions(-)

diff --git a/arch/mips/kvm/kvm_mips_dyntrans.c b/arch/mips/kvm/kvm_mips_dyntrans.c
index 96528e2..c657b37 100644
--- a/arch/mips/kvm/kvm_mips_dyntrans.c
+++ b/arch/mips/kvm/kvm_mips_dyntrans.c
@@ -32,13 +32,13 @@ kvm_mips_trans_cache_index(uint32_t inst, uint32_t *opc,
 			   struct kvm_vcpu *vcpu)
 {
 	int result = 0;
-	unsigned long kseg0_opc;
+	ulong kseg0_opc;
 	uint32_t synci_inst = 0x0;
 
 	/* Replace the CACHE instruction, with a NOP */
 	kseg0_opc =
 	    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-		       (vcpu, (unsigned long) opc));
+		       (vcpu, (ulong) opc));
 	memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
 	mips32_SyncICache(kseg0_opc, 32);
 
@@ -54,7 +54,7 @@ kvm_mips_trans_cache_va(uint32_t inst, uint32_t *opc,
 			struct kvm_vcpu *vcpu)
 {
 	int result = 0;
-	unsigned long kseg0_opc;
+	ulong kseg0_opc;
 	uint32_t synci_inst = SYNCI_TEMPLATE, base, offset;
 
 	base = (inst >> 21) & 0x1f;
@@ -64,7 +64,7 @@ kvm_mips_trans_cache_va(uint32_t inst, uint32_t *opc,
 
 	kseg0_opc =
 	    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-		       (vcpu, (unsigned long) opc));
+		       (vcpu, (ulong) opc));
 	memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
 	mips32_SyncICache(kseg0_opc, 32);
 
@@ -76,7 +76,7 @@ kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 {
 	int32_t rt, rd, sel;
 	uint32_t mfc0_inst;
-	unsigned long kseg0_opc, flags;
+	ulong kseg0_opc, flags;
 
 	rt = (inst >> 16) & 0x1f;
 	rd = (inst >> 11) & 0x1f;
@@ -97,13 +97,13 @@ kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
 		kseg0_opc =
 		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-			       (vcpu, (unsigned long) opc));
+			       (vcpu, (ulong) opc));
 		memcpy((void *)kseg0_opc, (void *)&mfc0_inst, sizeof(uint32_t));
 		mips32_SyncICache(kseg0_opc, 32);
-	} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
+	} else if (KVM_GUEST_KSEGX((ulong) opc) == KVM_GUEST_KSEG23) {
 		local_irq_save(flags);
 		memcpy((void *)opc, (void *)&mfc0_inst, sizeof(uint32_t));
-		mips32_SyncICache((unsigned long) opc, 32);
+		mips32_SyncICache((ulong) opc, 32);
 		local_irq_restore(flags);
 	} else {
 		kvm_err("%s: Invalid address: %p\n", __func__, opc);
@@ -118,7 +118,7 @@ kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 {
 	int32_t rt, rd, sel;
 	uint32_t mtc0_inst = SW_TEMPLATE;
-	unsigned long kseg0_opc, flags;
+	ulong kseg0_opc, flags;
 
 	rt = (inst >> 16) & 0x1f;
 	rd = (inst >> 11) & 0x1f;
@@ -132,13 +132,13 @@ kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
 	if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
 		kseg0_opc =
 		    CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
-			       (vcpu, (unsigned long) opc));
+			       (vcpu, (ulong) opc));
 		memcpy((void *)kseg0_opc, (void *)&mtc0_inst, sizeof(uint32_t));
 		mips32_SyncICache(kseg0_opc, 32);
-	} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
+	} else if (KVM_GUEST_KSEGX((ulong) opc) == KVM_GUEST_KSEG23) {
 		local_irq_save(flags);
 		memcpy((void *)opc, (void *)&mtc0_inst, sizeof(uint32_t));
-		mips32_SyncICache((unsigned long) opc, 32);
+		mips32_SyncICache((ulong) opc, 32);
 		local_irq_restore(flags);
 	} else {
 		kvm_err("%s: Invalid address: %p\n", __func__, opc);
diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
index 2b2bac9..d9fb542 100644
--- a/arch/mips/kvm/kvm_mips_emul.c
+++ b/arch/mips/kvm/kvm_mips_emul.c
@@ -34,12 +34,13 @@
 
 #include "trace.h"
 
+static int debug __maybe_unused;
+
 /*
  * Compute the return address and do emulate branch simulation, if required.
  * This function should be called only in branch delay slot active.
  */
-unsigned long kvm_compute_return_epc(struct kvm_vcpu *vcpu,
-	unsigned long instpc)
+u_long kvm_compute_return_epc(struct kvm_vcpu *vcpu, u_long instpc)
 {
 	unsigned int dspcontrol;
 	union mips_instruction insn;
@@ -209,7 +210,7 @@ sigill:
 
 enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause)
 {
-	unsigned long branch_pc;
+	u_long branch_pc;
 	enum emulation_result er = EMULATE_DONE;
 
 	if (cause & CAUSEF_BD) {
@@ -234,8 +235,8 @@ enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause)
  */
 enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	/* If COUNT is enabled */
 	if (!(kvm_read_c0_guest_cause(cop0) & CAUSEF_DC)) {
@@ -245,15 +246,13 @@ enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu)
 	} else {
 		hrtimer_try_to_cancel(&vcpu->arch.comparecount_timer);
 	}
-
 	return er;
 }
 
 enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
-
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
 		kvm_debug("[%#lx] ERET to %#lx\n", vcpu->arch.pc,
 			  kvm_read_c0_guest_epc(cop0));
@@ -268,7 +267,6 @@ enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
 		       vcpu->arch.pc);
 		er = EMULATE_FAIL;
 	}
-
 	return er;
 }
 
@@ -302,9 +300,9 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
  */
 enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
 {
+	uint32_t pc = vcpu->arch.pc;
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_FAIL;
-	uint32_t pc = vcpu->arch.pc;
 
 	printk("[%#x] COP0_TLBR [%ld]\n", pc, kvm_read_c0_guest_index(cop0));
 	return er;
@@ -313,9 +311,9 @@ enum emulation_result kvm_mips_emul_tlbr(struct kvm_vcpu *vcpu)
 /* Write Guest TLB Entry @ Index */
 enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 {
+	enum emulation_result er = EMULATE_DONE;
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	int index = kvm_read_c0_guest_index(cop0);
-	enum emulation_result er = EMULATE_DONE;
 	struct kvm_mips_tlb *tlb = NULL;
 	uint32_t pc = vcpu->arch.pc;
 
@@ -331,10 +329,8 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 	}
 
 	tlb = &vcpu->arch.guest_tlb[index];
-#if 1
 	/* Probe the shadow host TLB for the entry being overwritten; if one matches, invalidate it */
 	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
-#endif
 
 	tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
 	tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
@@ -353,18 +349,14 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
 /* Write Guest TLB Entry @ Random Index */
 enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
-	struct kvm_mips_tlb *tlb = NULL;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	uint32_t pc = vcpu->arch.pc;
 	int index;
+	struct kvm_mips_tlb *tlb = NULL;
 
-#if 1
 	get_random_bytes(&index, sizeof(index));
 	index &= (KVM_MIPS_GUEST_TLB_SIZE - 1);
-#else
-	index = jiffies % KVM_MIPS_GUEST_TLB_SIZE;
-#endif
 
 	if (index < 0 || index >= KVM_MIPS_GUEST_TLB_SIZE) {
 		printk("%s: illegal index: %d\n", __func__, index);
@@ -373,10 +365,8 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 
 	tlb = &vcpu->arch.guest_tlb[index];
 
-#if 1
 	/* Probe the shadow host TLB for the entry being overwritten; if one matches, invalidate it */
 	kvm_mips_host_tlb_inv(vcpu, tlb->tlb_hi);
-#endif
 
 	tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
 	tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
@@ -394,11 +384,11 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
 
 enum emulation_result kvm_mips_emul_tlbp(struct kvm_vcpu *vcpu)
 {
+	int index = -1;
+	uint32_t pc = vcpu->arch.pc;
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	long entryhi = kvm_read_c0_guest_entryhi(cop0);
 	enum emulation_result er = EMULATE_DONE;
-	uint32_t pc = vcpu->arch.pc;
-	int index = -1;
 
 	index = kvm_mips_guest_tlb_lookup(vcpu, entryhi);
 
@@ -414,11 +404,11 @@ enum emulation_result
 kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 		     struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	enum emulation_result er = EMULATE_DONE;
 	int32_t rt, rd, copz, sel, co_bit, op;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	uint32_t pc = vcpu->arch.pc;
-	unsigned long curr_pc;
+	u_long curr_pc;
 
 	/*
 	 * Update PC and hold onto current PC in case there is
@@ -478,15 +468,29 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 #endif
 			/* Get reg */
 			if ((rd == MIPS_CP0_COUNT) && (sel == 0)) {
+#ifdef CONFIG_KVM_MIPS_VZ
+				vcpu->arch.gprs[rt] =
+				    kvm_read_c0_guest_count(cop0);
+#else
 				/* XXXKYMA: Run the Guest count register @ 1/4 the rate of the host */
 				vcpu->arch.gprs[rt] = (read_c0_count() >> 2);
+#endif
 			} else if ((rd == MIPS_CP0_ERRCTL) && (sel == 0)) {
 				vcpu->arch.gprs[rt] = 0x0;
 #ifdef CONFIG_KVM_MIPS_DYN_TRANS
 				kvm_mips_trans_mfc0(inst, opc, vcpu);
 #endif
 			}
+#ifdef CONFIG_KVM_MIPS_VZ
+			else if ((rd == MIPS_CP0_COMPARE) && (sel == 0)) {
+				vcpu->arch.gprs[rt] =
+				    kvm_read_c0_guest_compare(cop0);
+			}
+#endif
 			else {
+#ifdef CONFIG_KVM_MIPS_VZ
+				/* TODO VZ validate CP0 accesses for CONFIG_KVM_MIPS_VZ */
+#endif
 				vcpu->arch.gprs[rt] = cop0->reg[rd][sel];
 
 #ifdef CONFIG_KVM_MIPS_DYN_TRANS
@@ -501,6 +505,9 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 			break;
 
 		case dmfc_op:
+#ifdef CONFIG_KVM_MIPS_VZ
+			/* TODO VZ add DMFC CONFIG_KVM_MIPS_VZ support if required */
+#endif
 			vcpu->arch.gprs[rt] = cop0->reg[rd][sel];
 			break;
 
@@ -525,16 +532,18 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 				printk("MTCz, cop0->reg[EBASE]: %#lx\n",
 				       kvm_read_c0_guest_ebase(cop0));
 			} else if (rd == MIPS_CP0_TLB_HI && sel == 0) {
-				uint32_t nasid = ASID_MASK(vcpu->arch.gprs[rt]);
+				uint32_t nasid =
+				    vcpu->arch.gprs[rt] & ASID_MASK;
 				if ((KSEGX(vcpu->arch.gprs[rt]) != CKSEG0)
 				    &&
-				    (ASID_MASK(kvm_read_c0_guest_entryhi(cop0))
-				      != nasid)) {
+				    ((kvm_read_c0_guest_entryhi(cop0) &
+				      ASID_MASK) != nasid)) {
 
 					kvm_debug
 					    ("MTCz, change ASID from %#lx to %#lx\n",
-					     ASID_MASK(kvm_read_c0_guest_entryhi(cop0)),
-					     ASID_MASK(vcpu->arch.gprs[rt]));
+					     kvm_read_c0_guest_entryhi(cop0) &
+					     ASID_MASK,
+					     vcpu->arch.gprs[rt] & ASID_MASK);
 
 					/* Blow away the shadow host TLBs */
 					kvm_mips_flush_host_tlb(1);
@@ -570,6 +579,9 @@ kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc, uint32_t cause,
 				kvm_mips_trans_mtc0(inst, opc, vcpu);
 #endif
 			} else {
+#ifdef CONFIG_KVM_MIPS_VZ
+				/* TODO VZ validate CP0 accesses for CONFIG_KVM_MIPS_VZ */
+#endif
 				cop0->reg[rd][sel] = vcpu->arch.gprs[rt];
 #ifdef CONFIG_KVM_MIPS_DYN_TRANS
 				kvm_mips_trans_mtc0(inst, opc, vcpu);
@@ -659,7 +671,7 @@ kvm_mips_emulate_store(uint32_t inst, uint32_t cause,
 	int32_t op, base, rt, offset;
 	uint32_t bytes;
 	void *data = run->mmio.data;
-	unsigned long curr_pc;
+	u_long curr_pc;
 
 	/*
 	 * Update PC and hold onto current PC in case there is
@@ -871,13 +883,13 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
 	return er;
 }
 
-int kvm_mips_sync_icache(unsigned long va, struct kvm_vcpu *vcpu)
+int kvm_mips_sync_icache(ulong va, struct kvm_vcpu *vcpu)
 {
-	unsigned long offset = (va & ~PAGE_MASK);
-	struct kvm *kvm = vcpu->kvm;
-	unsigned long pa;
 	gfn_t gfn;
 	pfn_t pfn;
+	ulong pa;
+	ulong offset = (va & ~PAGE_MASK);
+	struct kvm *kvm = vcpu->kvm;
 
 	gfn = va >> PAGE_SHIFT;
 
@@ -913,14 +925,15 @@ enum emulation_result
 kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 		       struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	extern void (*r4k_blast_dcache) (void);
 	extern void (*r4k_blast_icache) (void);
+	int debug __maybe_unused = 0;
 	enum emulation_result er = EMULATE_DONE;
 	int32_t offset, cache, op_inst, op, base;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
-	unsigned long va;
-	unsigned long curr_pc;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong va;
+	u_long curr_pc;
 
 	/*
 	 * Update PC and hold onto current PC in case there is
@@ -966,6 +979,9 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 #endif
 		goto done;
 	}
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	preempt_disable();
 	if (KVM_GUEST_KSEGX(va) == KVM_GUEST_KSEG0) {
@@ -986,7 +1002,8 @@ kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc, uint32_t cause,
 		 * resulting handler will do the right thing
 		 */
 		index = kvm_mips_guest_tlb_lookup(vcpu, (va & VPN2_MASK) |
-						  ASID_MASK(kvm_read_c0_guest_entryhi(cop0)));
+						  (kvm_read_c0_guest_entryhi
+						   (cop0) & ASID_MASK));
 
 		if (index < 0) {
 			vcpu->arch.host_cp0_entryhi = (va & VPN2_MASK);
@@ -1060,7 +1077,7 @@ skip_fault:
 }
 
 enum emulation_result
-kvm_mips_emulate_inst(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_inst(ulong cause, uint32_t *opc,
 		      struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -1110,12 +1127,12 @@ kvm_mips_emulate_inst(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_syscall(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_syscall(ulong cause, uint32_t *opc,
 			 struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1144,14 +1161,16 @@ kvm_mips_emulate_syscall(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_tlbmiss_ld(ulong cause, uint32_t *opc,
 			    struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.  host_cp0_badvaddr & VPN2_MASK) |
-				ASID_MASK(kvm_read_c0_guest_entryhi(cop0));
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong entryhi =
+	    (vcpu->arch.
+	     host_cp0_badvaddr & VPN2_MASK) | (kvm_read_c0_guest_entryhi(cop0) &
+					       ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1190,15 +1209,16 @@ kvm_mips_emulate_tlbmiss_ld(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_tlbinv_ld(ulong cause, uint32_t *opc,
 			   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi =
-		(vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-		ASID_MASK(kvm_read_c0_guest_entryhi(cop0));
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong entryhi =
+	    (vcpu->arch.
+	     host_cp0_badvaddr & VPN2_MASK) | (kvm_read_c0_guest_entryhi(cop0) &
+					       ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1236,14 +1256,14 @@ kvm_mips_emulate_tlbinv_ld(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_tlbmiss_st(ulong cause, uint32_t *opc,
 			    struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-				ASID_MASK(kvm_read_c0_guest_entryhi(cop0));
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	    (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1280,14 +1300,18 @@ kvm_mips_emulate_tlbmiss_st(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_tlbinv_st(ulong cause, uint32_t *opc,
 			   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-		ASID_MASK(kvm_read_c0_guest_entryhi(cop0));
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	    (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1325,10 +1349,18 @@ kvm_mips_emulate_tlbinv_st(unsigned long cause, uint32_t *opc,
 
 /* TLBMOD: store into address matching TLB with Dirty bit off */
 enum emulation_result
-kvm_mips_handle_tlbmod(unsigned long cause, uint32_t *opc,
+kvm_mips_handle_tlbmod(ulong cause, uint32_t *opc,
 		       struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	int index __maybe_unused;
+	ulong entryhi __maybe_unused = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+					(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 #ifdef DEBUG
 	/*
@@ -1351,14 +1383,14 @@ kvm_mips_handle_tlbmod(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_tlbmod(ulong cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-				ASID_MASK(kvm_read_c0_guest_entryhi(cop0));
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	ulong entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
+	    (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1393,12 +1425,12 @@ kvm_mips_emulate_tlbmod(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_fpu_exc(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_fpu_exc(ulong cause, uint32_t *opc,
 			 struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1422,12 +1454,12 @@ kvm_mips_emulate_fpu_exc(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_ri_exc(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_ri_exc(ulong cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1456,12 +1488,12 @@ kvm_mips_emulate_ri_exc(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_emulate_bp_exc(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_bp_exc(ulong cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1507,14 +1539,14 @@ kvm_mips_emulate_bp_exc(unsigned long cause, uint32_t *opc,
 #define RDHWR  0x0000003b
 
 enum emulation_result
-kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
+kvm_mips_handle_ri(ulong cause, uint32_t *opc,
 		   struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long curr_pc;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	uint32_t inst;
+	u_long curr_pc;
 
 	/*
 	 * Update PC and hold onto current PC in case there is
@@ -1564,12 +1596,7 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 			}
 			break;
 		case 29:
-#if 1
 			arch->gprs[rt] = kvm_read_c0_guest_userlocal(cop0);
-#else
-			/* UserLocal not implemented */
-			er = kvm_mips_emulate_ri_exc(cause, opc, run, vcpu);
-#endif
 			break;
 
 		default:
@@ -1594,9 +1621,9 @@ kvm_mips_handle_ri(unsigned long cause, uint32_t *opc,
 enum emulation_result
 kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
 	enum emulation_result er = EMULATE_DONE;
-	unsigned long curr_pc;
+	ulong *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	u_long curr_pc;
 
 	if (run->mmio.len > sizeof(*gpr)) {
 		printk("Bad MMIO length: %d", run->mmio.len);
@@ -1644,13 +1671,17 @@ done:
 }
 
 static enum emulation_result
-kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
+kvm_mips_emulate_exc(ulong cause, uint32_t *opc,
 		     struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	struct mips_coproc *cop0 = vcpu->arch.cop0;
-	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	enum emulation_result er = EMULATE_DONE;
+	struct kvm_vcpu_arch *arch = &vcpu->arch;
+	struct mips_coproc *cop0 = vcpu->arch.cop0;
+	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
+
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1681,12 +1712,12 @@ kvm_mips_emulate_exc(unsigned long cause, uint32_t *opc,
 }
 
 enum emulation_result
-kvm_mips_check_privilege(unsigned long cause, uint32_t *opc,
+kvm_mips_check_privilege(ulong cause, uint32_t *opc,
 			 struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	unsigned long badvaddr = vcpu->arch.host_cp0_badvaddr;
+	ulong badvaddr = vcpu->arch.host_cp0_badvaddr;
 
 	int usermode = !KVM_GUEST_KERNEL_MODE(vcpu);
 
@@ -1708,7 +1739,7 @@ kvm_mips_check_privilege(unsigned long cause, uint32_t *opc,
 
 		case T_TLB_LD_MISS:
 			/* If we are accessing Guest kernel space, then send an address error exception to the guest */
-			if (badvaddr >= (unsigned long) KVM_GUEST_KSEG0) {
+			if (badvaddr >= (ulong) KVM_GUEST_KSEG0) {
 				printk("%s: LD MISS @ %#lx\n", __func__,
 				       badvaddr);
 				cause &= ~0xff;
@@ -1719,7 +1750,7 @@ kvm_mips_check_privilege(unsigned long cause, uint32_t *opc,
 
 		case T_TLB_ST_MISS:
 			/* If we are accessing Guest kernel space, then send an address error exception to the guest */
-			if (badvaddr >= (unsigned long) KVM_GUEST_KSEG0) {
+			if (badvaddr >= (ulong) KVM_GUEST_KSEG0) {
 				printk("%s: ST MISS @ %#lx\n", __func__,
 				       badvaddr);
 				cause &= ~0xff;
@@ -1765,17 +1796,20 @@ kvm_mips_check_privilege(unsigned long cause, uint32_t *opc,
  *     case we inject the TLB from the Guest TLB into the shadow host TLB
  */
 enum emulation_result
-kvm_mips_handle_tlbmiss(unsigned long cause, uint32_t *opc,
+kvm_mips_handle_tlbmiss(ulong cause, uint32_t *opc,
 			struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
 	uint32_t exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
-	unsigned long va = vcpu->arch.host_cp0_badvaddr;
+	ulong va = vcpu->arch.host_cp0_badvaddr;
 	int index;
 
 	kvm_debug("kvm_mips_handle_tlbmiss: badvaddr: %#lx, entryhi: %#lx\n",
 		  vcpu->arch.host_cp0_badvaddr, vcpu->arch.host_cp0_entryhi);
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	BUG_ON(cpu_has_vz);
+#endif
 	/* KVM would not have got the exception if this entry was valid in the shadow host TLB
 	 * Check the Guest TLB; if the entry is not there, then send the guest an
 	 * exception. The guest exc handler should then inject an entry into the
@@ -1783,8 +1817,8 @@ kvm_mips_handle_tlbmiss(unsigned long cause, uint32_t *opc,
 	 */
 	index = kvm_mips_guest_tlb_lookup(vcpu,
 					  (va & VPN2_MASK) |
-					  ASID_MASK(kvm_read_c0_guest_entryhi
-					   (vcpu->arch.cop0)));
+					  (kvm_read_c0_guest_entryhi
+					   (vcpu->arch.cop0) & ASID_MASK));
 	if (index < 0) {
 		if (exccode == T_TLB_LD_MISS) {
 			er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
-- 
1.7.11.3

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH 18/18] KVM/MIPS32-VZ: Dump out additional info about VZ features as part of /proc/cpuinfo
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (16 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 17/18] KVM/MIPS32: Revert to older method for accessing ASID parameters Sanjay Lal
@ 2013-05-19  5:47 ` Sanjay Lal
  2013-05-20 15:50 ` [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) David Daney
  18 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-19  5:47 UTC (permalink / raw)
  To: kvm; +Cc: linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti, Sanjay Lal


Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kernel/proc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index a3e4614..308e042 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -99,6 +99,17 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	if (cpu_has_vz)		seq_printf(m, "%s", " vz");
 	seq_printf(m, "\n");
 
+#ifdef CONFIG_KVM_MIPS_VZ
+	if (cpu_has_vz) {
+		seq_printf(m, "vz guestid\t\t: %s\n",
+			cpu_has_vzguestid ? "yes" : "no");
+		seq_printf(m, "vz virt irq\t\t: %s\n",
+			cpu_has_vzvirtirq ? "yes" : "no");
+	}
+	seq_printf(m, "tlbinv instructions\t: %s\n",
+		cpu_has_tlbinv ? "yes" : "no");
+#endif
+
 	if (cpu_has_mmips) {
 		seq_printf(m, "micromips kernel\t: %s\n",
 		      (read_c0_config3() & MIPS_CONF3_ISA_OE) ?  "yes" : "no");
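
With CONFIG_KVM_MIPS_VZ enabled on a VZ-capable core, the added lines would
render in /proc/cpuinfo roughly as follows (hypothetical values, inferred
from the format strings above):

	vz guestid		: yes
	vz virt irq		: no
	tlbinv instructions	: yes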
-- 
1.7.11.3

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH 05/18] KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs
  2013-05-19  5:47 ` [PATCH 05/18] KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs Sanjay Lal
@ 2013-05-19 13:36   ` Sergei Shtylyov
  0 siblings, 0 replies; 40+ messages in thread
From: Sergei Shtylyov @ 2013-05-19 13:36 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

Hello.

On 19-05-2013 9:47, Sanjay Lal wrote:

> Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
> ---
>   arch/mips/kvm/kvm_vz_locore.S | 74 +++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 74 insertions(+)
>   create mode 100644 arch/mips/kvm/kvm_vz_locore.S

> diff --git a/arch/mips/kvm/kvm_vz_locore.S b/arch/mips/kvm/kvm_vz_locore.S
> new file mode 100644
> index 0000000..6d037d7
> --- /dev/null
> +++ b/arch/mips/kvm/kvm_vz_locore.S
> @@ -0,0 +1,74 @@
> +/*
> + * This file is subject to the terms and conditions of the GNU General Public
> + * License.  See the file "COPYING" in the main directory of this archive
> + * for more details.
> + *
> + * KVM/MIPS: Assembler support for hardware virtualization extensions
> + *
> + * Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
> + * Authors: Yann Le Du <ledu@kymasys.com>
> + */
> +
> +#include <asm/asm.h>
> +#include <asm/asmmacro.h>
> +#include <asm/regdef.h>
> +#include <asm/mipsregs.h>
> +#include <asm/asm-offsets.h>
> +#include <asm/mipsvzregs.h>
> +
> +#define MIPSX(name)	mips32_ ## name
> +
> +/*
> + * This routine sets GuestCtl1.RID to GUESTCTL1_VZ_ROOT_GUESTID
> + * Inputs: none
> + */
> +LEAF(MIPSX(ClearGuestRID))
> +	.set	push
> +	.set	mips32r2
> +	.set	noreorder
> +	mfc0	t0, CP0_GUESTCTL1
> +	addiu	t1, zero, GUESTCTL1_VZ_ROOT_GUESTID
> +	ins	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
> +	mtc0	t0, CP0_GUESTCTL1 # Set GuestCtl1.RID = GUESTCTL1_VZ_ROOT_GUESTID
> +	ehb
> +	j	ra

    Not jr?

> +	nop					# BD Slot

    Instruction in the delay slot is usually indented by an extra space.
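
    Conventionally that would be written as (illustrative):

	jr	ra
	 nop					# BD Slot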

> +	.set    pop
> +END(MIPSX(ClearGuestRID))
> +
> +
> +/*
> + * This routine sets GuestCtl1.RID to a new value
> + * Inputs: a0 = new GuestRID value (right aligned)
> + */
> +LEAF(MIPSX(SetGuestRID))
> +	.set	push
> +	.set	mips32r2
> +	.set	noreorder
> +	mfc0	t0, CP0_GUESTCTL1
> +	ins 	t0, a0, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
> +	mtc0	t0, CP0_GUESTCTL1		# Set GuestCtl1.RID
> +	ehb
> +	j	ra
> +	nop					# BD Slot

    Same here...

> +	.set	pop
> +END(MIPSX(SetGuestRID))
> +
> +
> +	/*
> +	 * This routine sets GuestCtl1.RID to GuestCtl1.ID
> +	 * Inputs: none
> +	 */
> +LEAF(MIPSX(SetGuestRIDtoGuestID))
> +	.set	push
> +	.set	mips32r2
> +	.set	noreorder
> +	mfc0	t0, CP0_GUESTCTL1		# Get current GuestID
> +	ext 	t1, t0, GUESTCTL1_ID_SHIFT, GUESTCTL1_ID_WIDTH
> +	ins 	t0, t1, GUESTCTL1_RID_SHIFT, GUESTCTL1_RID_WIDTH
> +	mtc0	t0, CP0_GUESTCTL1		# Set GuestCtl1.RID = GuestCtl1.ID
> +	ehb
> +	j	ra
> +	nop 					# BD Slot

    ... and here.

> +	.set	pop
> +END(MIPSX(SetGuestRIDtoGuestID))

WBR, Sergei

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
                   ` (17 preceding siblings ...)
  2013-05-19  5:47 ` [PATCH 18/18] KVM/MIPS32-VZ: Dump out additional info about VZ features as part of /proc/cpuinfo Sanjay Lal
@ 2013-05-20 15:50 ` David Daney
  2013-05-20 16:58   ` Sanjay Lal
  18 siblings, 1 reply; 40+ messages in thread
From: David Daney @ 2013-05-20 15:50 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 05/18/2013 10:47 PM, Sanjay Lal wrote:
> The following patch set adds support for the recently announced virtualization
> extensions for the MIPS32 architecture and allows running unmodified kernels in
> Guest Mode.
>
> For more info please refer to :
> 	MIPS Document #: MD00846
> 	Volume IV-i: Virtualization Module of the MIPS32 Architecture
>
> which can be accessed @: http://www.mips.com/auth/MD00846-2B-VZMIPS32-AFP-01.03.pdf
>
> The patch is agains Linux-3.10-rc1.
>
> KVM/MIPS now supports 2 modes of operation:
>
> (1) VZ mode: Unmodified kernels running in Guest Mode.  The processor now provides
>      an almost complete COP0 context in Guest mode. This greatly reduces VM exits.

Two questions:

1) How are you handling not clobbering the Guest K0/K1 registers when a 
Root exception occurs?  It is not obvious to me from inspecting the code.

2) What environment are you using to test this stuff?

David Daney


>
> (2) Trap and Emulate: Runs minimally modified guest kernels in UM and uses binary patching
>      to minimize the number of traps and improve performance. This is used for processors
>      that do not support the VZ-ASE.
>
> --
> Sanjay Lal (18):
>    Revert "MIPS: microMIPS: Support dynamic ASID sizing."
>    Revert "MIPS: Allow ASID size to be determined at boot time."
>    KVM/MIPS32: Export min_low_pfn.
>    KVM/MIPS32-VZ: MIPS VZ-ASE related register defines and helper
>      macros.
>    KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs
>    KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions
>      that trap to the Root context.
>    KVM/MIPS32: VZ-ASE related CPU feature flags and options.
>    KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap
>      handlers.
>    KVM/MIPS32-VZ: Add support for CONFIG_KVM_MIPS_VZ option
>    KVM/MIPS32-VZ: Add API for VZ-ASE Capability
>    KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root
>      context
>    KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons.
>    KVM/MIPS32-VZ: Top level handler for Guest faults
>    KVM/MIPS32-VZ: Guest exception batching support.
>    KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and
>      dump out useful info
>    KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures.
>    KVM/MIPS32: Revert to older method for accessing ASID parameters
>    KVM/MIPS32-VZ: Dump out additional info about VZ features as part of
>      /proc/cpuinfo
>
>   arch/mips/include/asm/cpu-features.h |   36 ++
>   arch/mips/include/asm/cpu-info.h     |   21 +
>   arch/mips/include/asm/cpu.h          |    5 +
>   arch/mips/include/asm/kvm_host.h     |  244 ++++++--
>   arch/mips/include/asm/mipsvzregs.h   |  494 +++++++++++++++
>   arch/mips/include/asm/mmu_context.h  |   95 ++-
>   arch/mips/kernel/genex.S             |    2 +-
>   arch/mips/kernel/mips_ksyms.c        |    6 +
>   arch/mips/kernel/proc.c              |   11 +
>   arch/mips/kernel/smtc.c              |   10 +-
>   arch/mips/kernel/traps.c             |    6 +-
>   arch/mips/kvm/Kconfig                |   14 +-
>   arch/mips/kvm/Makefile               |   14 +-
>   arch/mips/kvm/kvm_locore.S           | 1088 ++++++++++++++++++----------------
>   arch/mips/kvm/kvm_mips.c             |   73 ++-
>   arch/mips/kvm/kvm_mips_dyntrans.c    |   24 +-
>   arch/mips/kvm/kvm_mips_emul.c        |  236 ++++----
>   arch/mips/kvm/kvm_mips_int.h         |    5 +
>   arch/mips/kvm/kvm_mips_stats.c       |   17 +-
>   arch/mips/kvm/kvm_tlb.c              |  444 +++++++++++---
>   arch/mips/kvm/kvm_trap_emul.c        |   68 ++-
>   arch/mips/kvm/kvm_vz.c               |  786 ++++++++++++++++++++++++
>   arch/mips/kvm/kvm_vz_locore.S        |   74 +++
>   arch/mips/lib/dump_tlb.c             |    5 +-
>   arch/mips/lib/r3k_dump_tlb.c         |    7 +-
>   arch/mips/mm/tlb-r3k.c               |   20 +-
>   arch/mips/mm/tlb-r4k.c               |    2 +-
>   arch/mips/mm/tlb-r8k.c               |    2 +-
>   arch/mips/mm/tlbex.c                 |   82 +--
>   include/uapi/linux/kvm.h             |    1 +
>   30 files changed, 2906 insertions(+), 986 deletions(-)
>   create mode 100644 arch/mips/include/asm/mipsvzregs.h
>   create mode 100644 arch/mips/kvm/kvm_vz.c
>   create mode 100644 arch/mips/kvm/kvm_vz_locore.S
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 15:50 ` [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) David Daney
@ 2013-05-20 16:58   ` Sanjay Lal
  2013-05-20 17:29     ` David Daney
  2013-05-20 18:36     ` Maciej W. Rozycki
  0 siblings, 2 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-20 16:58 UTC (permalink / raw)
  To: David Daney; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti


On May 20, 2013, at 8:50 AM, David Daney wrote:

> On 05/18/2013 10:47 PM, Sanjay Lal wrote:
>> The following patch set adds support for the recently announced virtualization
>> extensions for the MIPS32 architecture and allows running unmodified kernels in
>> Guest Mode.
>> 
>> For more info please refer to :
>> 	MIPS Document #: MD00846
>> 	Volume IV-i: Virtualization Module of the MIPS32 Architecture
>> 
>> which can be accessed @: http://www.mips.com/auth/MD00846-2B-VZMIPS32-AFP-01.03.pdf
>> 
>> The patch is agains Linux-3.10-rc1.
>> 
>> KVM/MIPS now supports 2 modes of operation:
>> 
>> (1) VZ mode: Unmodified kernels running in Guest Mode.  The processor now provides
>>     an almost complete COP0 context in Guest mode. This greatly reduces VM exits.
> 
> Two questions:
> 
> 1) How are you handling not clobbering the Guest K0/K1 registers when a Root exception occurs?  It is not obvious to me from inspecting the code.
> 
> 2) What environment are you using to test this stuff?
> 
> David Daney
> 

(1) Newer versions of the MIPS architecture define scratch registers for just this purpose, but since we have to support standard MIPS32R2 processors, we use the DDataLo Register (CP0 Register 28, Select 3) as a scratch register to save k0, and we save k1 @ a known offset from EBASE.
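
A minimal sketch of that save sequence (the 0x3000 offset and the EBASE
masking here are illustrative, not quoted from the patch):

	mtc0	k0, $28, 3	# stash k0 in DDataLo (CP0 reg 28, sel 3)
	mfc0	k0, CP0_EBASE
	srl	k0, k0, 10
	sll	k0, k0, 10	# keep the exception base, drop the low bits
	sw	k1, 0x3000(k0)	# save k1 @ a known offset from EBASE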

(2) Platforms that we've tested on:

KVM Trap & Emulate
- Malta Board with FPGA based 34K
- Sigma Designs TangoX board with a 24K based 8654 SoC.
- Malta Board with 74K @ 1GHz
- QEMU (as of 1.4.90)
- Imperas M*SDK MIPS32 simulator

KVM MIPS/VZ
- Imperas M*SDK MIPS32 simulator + MIPS/VZ model.

Regards
Sanjay

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 16:58   ` Sanjay Lal
@ 2013-05-20 17:29     ` David Daney
  2013-05-20 17:34       ` Sanjay Lal
  2013-05-20 18:36     ` Maciej W. Rozycki
  1 sibling, 1 reply; 40+ messages in thread
From: David Daney @ 2013-05-20 17:29 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 05/20/2013 09:58 AM, Sanjay Lal wrote:
>
> On May 20, 2013, at 8:50 AM, David Daney wrote:
>
>> On 05/18/2013 10:47 PM, Sanjay Lal wrote:
>>> The following patch set adds support for the recently announced virtualization
>>> extensions for the MIPS32 architecture and allows running unmodified kernels in
>>> Guest Mode.
>>>
>>> For more info please refer to :
>>> 	MIPS Document #: MD00846
>>> 	Volume IV-i: Virtualization Module of the MIPS32 Architecture
>>>
>>> which can be accessed @: http://www.mips.com/auth/MD00846-2B-VZMIPS32-AFP-01.03.pdf
>>>
>>> The patch is agains Linux-3.10-rc1.
>>>
>>> KVM/MIPS now supports 2 modes of operation:
>>>
>>> (1) VZ mode: Unmodified kernels running in Guest Mode.  The processor now provides
>>>      an almost complete COP0 context in Guest mode. This greatly reduces VM exits.
>>
>> Two questions:
>>
>> 1) How are you handling not clobbering the Guest K0/K1 registers when a Root exception occurs?  It is not obvious to me from inspecting the code.
>>
>> 2) What environment are you using to test this stuff?
>>
>> David Daney
>>
>
> (1) Newer versions of the MIPS architecture define scratch registers for just this purpose, but since we have to support standard MIPS32R2 processors, we use the DDataLo Register (CP0 Register 28, Select 3) as a scratch register to save k0, and we save k1 @ a known offset from EBASE.
>

Right, I understand that.  But I am looking at arch/mips/mm/tlbex.c, and 
I don't see the code that does that for TLBRefill exceptions.

Where is it done for interrupts?  I would expect code in
arch/mips/kernel/genex.S and/or stackframe.h to handle this.  But I
don't see where it is.

Am I missing something?

David Daney

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 17:29     ` David Daney
@ 2013-05-20 17:34       ` Sanjay Lal
  0 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-20 17:34 UTC (permalink / raw)
  To: David Daney; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti


On May 20, 2013, at 10:29 AM, David Daney wrote:

> On 05/20/2013 09:58 AM, Sanjay Lal wrote:
>> 
>> On May 20, 2013, at 8:50 AM, David Daney wrote:
>> 
>>> On 05/18/2013 10:47 PM, Sanjay Lal wrote:
>>>> The following patch set adds support for the recently announced virtualization
>>>> extensions for the MIPS32 architecture and allows running unmodified kernels in
>>>> Guest Mode.
>>>> 
>>>> For more info please refer to :
>>>> 	MIPS Document #: MD00846
>>>> 	Volume IV-i: Virtualization Module of the MIPS32 Architecture
>>>> 
>>>> which can be accessed @: http://www.mips.com/auth/MD00846-2B-VZMIPS32-AFP-01.03.pdf
>>>> 
>>>> The patch is agains Linux-3.10-rc1.
>>>> 
>>>> KVM/MIPS now supports 2 modes of operation:
>>>> 
>>>> (1) VZ mode: Unmodified kernels running in Guest Mode.  The processor now provides
>>>>     an almost complete COP0 context in Guest mode. This greatly reduces VM exits.
>>> 
>>> Two questions:
>>> 
>>> 1) How are you handling not clobbering the Guest K0/K1 registers when a Root exception occurs?  It is not obvious to me from inspecting the code.
>>> 
>>> 2) What environment are you using to test this stuff?
>>> 
>>> David Daney
>>> 
>> 
>> (1) Newer versions of the MIPS architecture define scratch registers for just this purpose, but since we have to support standard MIPS32R2 processors, we use the DDataLo Register (CP0 Register 28, Select 3) as a scratch register to save k0, and we save k1 @ a known offset from EBASE.
>> 
> 
> Right, I understand that.  But I am looking at arch/mips/mm/tlbex.c, and I don't see the code that does that for TLBRefill exceptions.
> 
> Where is it done for interrupts?  I would expect code in arch/mips/kernel/genex.S and/or stackframe.h would handle this.  But I don't see where it is.
> 
> Am I missing something?
> 
> David Daney
> 


arch/mips/kvm/kvm_locore.S

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 16:58   ` Sanjay Lal
  2013-05-20 17:29     ` David Daney
@ 2013-05-20 18:36     ` Maciej W. Rozycki
  2013-05-20 18:58       ` David Daney
  1 sibling, 1 reply; 40+ messages in thread
From: Maciej W. Rozycki @ 2013-05-20 18:36 UTC (permalink / raw)
  To: Sanjay Lal
  Cc: David Daney, kvm, linux-mips, Ralf Baechle, Gleb Natapov,
	Marcelo Tosatti

On Mon, 20 May 2013, Sanjay Lal wrote:

> (1) Newer versions of the MIPS architecture define scratch registers for 
> just this purpose, but since we have to support standard MIPS32R2 
> processors, we use the DDataLo Register (CP0 Register 28, Select 3) as a 
> scratch register to save k0 and save k1 @ a known offset from EBASE.

 That's rather risky, as the implementation of this register (and its
presence in the first place) is processor-specific.  Do you maintain a
list of PRId values for which use of this register is safe?

  Maciej

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 18:36     ` Maciej W. Rozycki
@ 2013-05-20 18:58       ` David Daney
  2013-05-27 12:45         ` Maciej W. Rozycki
  0 siblings, 1 reply; 40+ messages in thread
From: David Daney @ 2013-05-20 18:58 UTC (permalink / raw)
  To: Maciej W. Rozycki
  Cc: Sanjay Lal, kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 05/20/2013 11:36 AM, Maciej W. Rozycki wrote:
> On Mon, 20 May 2013, Sanjay Lal wrote:
>
>> (1) Newer versions of the MIPS architecture define scratch registers for
>> just this purpose, but since we have to support standard MIPS32R2
>> processors, we use the DDataLo Register (CP0 Register 28, Select 3) as a
>> scratch register to save k0 and save k1 @ a known offset from EBASE.
>
>   That's rather risky as the implementation of this register (and its
> presence in the first place) is processor-specific.  Do you maintain a
> list of PRId values the use of this register is safe with?
>

FWIW:  The MIPS-VZ architecture module requires the presence of CP0 
scratch registers that can be used for this in the exception handlers 
without having to worry about using these implementation dependent 
registers.  For the trap-and-emulate only version, there really is no 
choice other than to re-purpose some of the existing CP0 registers.

David Daney

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE)
  2013-05-20 18:58       ` David Daney
@ 2013-05-27 12:45         ` Maciej W. Rozycki
  0 siblings, 0 replies; 40+ messages in thread
From: Maciej W. Rozycki @ 2013-05-27 12:45 UTC (permalink / raw)
  To: David Daney
  Cc: Sanjay Lal, kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On Mon, 20 May 2013, David Daney wrote:

> >   That's rather risky as the implementation of this register (and its
> > presence in the first place) is processor-specific.  Do you maintain a
> > list of PRId values the use of this register is safe with?
> > 
> 
> FWIW:  The MIPS-VZ architecture module requires the presence of CP0 scratch
> registers that can be used for this purpose in the exception handlers,
> without having to worry about these implementation-dependent registers.
> For the trap-and-emulate-only version, there really is no choice other than
> to re-purpose some of the existing CP0 registers.

 Sure, I've just been wondering what the implementation does to make sure 
it does not go astray on a random processor out there.

 FWIW, offhand the ErrorEPC register, which has been universally present 
since MIPS III (and I doubt anyone cares about virtualising on earlier 
implementations), seems to me promising as a better choice -- of course 
that register can get clobbered if an error-class exception happens early 
on in exception processing, but in that case we're in worse trouble than 
just clobbering one of the guest registers anyway and likely cannot 
recover at all regardless.
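
 A sketch of that variant, with the same caveats as the sketch above 
(the helpers are illustrative; read_c0_errorepc()/write_c0_errorepc() 
are the standard <asm/mipsregs.h> accessors):

	#include <asm/mipsregs.h>

	/*
	 * Park k0 in ErrorEPC (CP0 Register 30, Select 0), architecturally
	 * present since MIPS III.  Caveat: an error-class exception taken
	 * while the value is parked would clobber it.
	 */
	static inline void park_k0(unsigned long k0)
	{
		write_c0_errorepc(k0);
	}

	static inline unsigned long recover_k0(void)
	{
		return read_c0_errorepc();
	}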

  Maciej

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 08/18] KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap handlers.
  2013-05-19  5:47 ` [PATCH 08/18] KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap handlers Sanjay Lal
@ 2013-05-28 14:43   ` Paolo Bonzini
  0 siblings, 0 replies; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-28 14:43 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 19/05/2013 07:47, Sanjay Lal wrote:
> - Add support for the MIPS VZ-ASE
> - Whitespace fixes
> 
> Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
> ---
>  arch/mips/kvm/kvm_locore.S | 1088 +++++++++++++++++++++++---------------------
>  1 file changed, 573 insertions(+), 515 deletions(-)

This is unreadable; can you split out the whitespace fixes?

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-19  5:47 ` [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context Sanjay Lal
@ 2013-05-28 15:04   ` Paolo Bonzini
  2013-05-30 18:35     ` Sanjay Lal
  2013-05-28 16:14   ` Paolo Bonzini
  1 sibling, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-28 15:04 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 19/05/2013 07:47, Sanjay Lal wrote:
> +static int kvm_vz_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
> +	struct mips_coproc *cop0 = vcpu->arch.cop0;
> +
> +	/* some registers are not restored
> +	 * random, count        : read-only
> +	 * userlocal            : not implemented in qemu
> +	 * config6              : not implemented in processor variant
> +	 * compare, cause       : defer to kvm_vz_restore_guest_timer_int
> +	 */
> +
> +	kvm_write_c0_guest_index(cop0, regs->cp0reg[MIPS_CP0_TLB_INDEX][0]);
> +	kvm_write_c0_guest_entrylo0(cop0, regs->cp0reg[MIPS_CP0_TLB_LO0][0]);
> +	kvm_write_c0_guest_entrylo1(cop0, regs->cp0reg[MIPS_CP0_TLB_LO1][0]);
> +	kvm_write_c0_guest_context(cop0, regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0]);
> +	kvm_write_c0_guest_pagemask(cop0,
> +				    regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0]);
> +	kvm_write_c0_guest_pagegrain(cop0,
> +				     regs->cp0reg[MIPS_CP0_TLB_PG_MASK][1]);
> +	kvm_write_c0_guest_wired(cop0, regs->cp0reg[MIPS_CP0_TLB_WIRED][0]);
> +	kvm_write_c0_guest_hwrena(cop0, regs->cp0reg[MIPS_CP0_HWRENA][0]);
> +	kvm_write_c0_guest_badvaddr(cop0, regs->cp0reg[MIPS_CP0_BAD_VADDR][0]);
> +	/* skip kvm_write_c0_guest_count */
> +	kvm_write_c0_guest_entryhi(cop0, regs->cp0reg[MIPS_CP0_TLB_HI][0]);
> +	/* defer kvm_write_c0_guest_compare */
> +	kvm_write_c0_guest_status(cop0, regs->cp0reg[MIPS_CP0_STATUS][0]);
> +	kvm_write_c0_guest_intctl(cop0, regs->cp0reg[MIPS_CP0_STATUS][1]);
> +	/* defer kvm_write_c0_guest_cause */
> +	kvm_write_c0_guest_epc(cop0, regs->cp0reg[MIPS_CP0_EXC_PC][0]);
> +	kvm_write_c0_guest_prid(cop0, regs->cp0reg[MIPS_CP0_PRID][0]);
> +	kvm_write_c0_guest_ebase(cop0, regs->cp0reg[MIPS_CP0_PRID][1]);
> +
> +	/* only restore implemented config registers */
> +	kvm_write_c0_guest_config(cop0, regs->cp0reg[MIPS_CP0_CONFIG][0]);
> +
> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][0] & MIPS_CONF_M) &&
> +			cpu_vz_has_config1)
> +		kvm_write_c0_guest_config1(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][1]);
> +
> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][1] & MIPS_CONF_M) &&
> +			cpu_vz_has_config2)
> +		kvm_write_c0_guest_config2(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][2]);
> +
> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][2] & MIPS_CONF_M) &&
> +			cpu_vz_has_config3)
> +		kvm_write_c0_guest_config3(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][3]);
> +
> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][3] & MIPS_CONF_M) &&
> +			cpu_vz_has_config4)
> +		kvm_write_c0_guest_config4(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][4]);
> +
> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][4] & MIPS_CONF_M) &&
> +			cpu_vz_has_config5)
> +		kvm_write_c0_guest_config5(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][5]);
> +
> +	if (cpu_vz_has_config6)
> +		kvm_write_c0_guest_config6(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][6]);
> +	if (cpu_vz_has_config7)
> +		kvm_write_c0_guest_config7(cop0,
> +				regs->cp0reg[MIPS_CP0_CONFIG][7]);
> +
> +	kvm_write_c0_guest_errorepc(cop0, regs->cp0reg[MIPS_CP0_ERROR_PC][0]);
> +
> +	/* call after setting MIPS_CP0_CAUSE to avoid having it overwritten
> +	 * this will set guest compare and cause.TI if necessary
> +	 */
> +	kvm_vz_restore_guest_timer_int(vcpu, regs);
> +
> +	return 0;
> +}

All this is now obsolete after David's patches (reusing kvm_regs looked
a bit strange in fact).

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures.
  2013-05-19  5:47 ` [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures Sanjay Lal
@ 2013-05-28 15:24   ` Paolo Bonzini
  0 siblings, 0 replies; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-28 15:24 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 19/05/2013 07:47, Sanjay Lal wrote:
> Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
> ---
>  arch/mips/include/asm/kvm_host.h | 244 ++++++++++++++++++++++++++++++---------
>  1 file changed, 191 insertions(+), 53 deletions(-)
> 
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index e68781e..c92e297 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -19,21 +19,28 @@
>  #include <linux/threads.h>
>  #include <linux/spinlock.h>
>  
> +#ifdef CONFIG_KVM_MIPS_VZ
> +#include <asm/mipsvzregs.h>
> +#endif

No need to make the inclusion conditional.

>  
> -#define KVM_MAX_VCPUS		1
> -#define KVM_USER_MEM_SLOTS	8
> +#define KVM_MAX_VCPUS 8
> +#define KVM_USER_MEM_SLOTS 8
>  /* memory slots that does not exposed to userspace */
> -#define KVM_PRIVATE_MEM_SLOTS 	0
> +#define KVM_PRIVATE_MEM_SLOTS 0
>  
>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
>  
>  /* Don't support huge pages */
> -#define KVM_HPAGE_GFN_SHIFT(x)	0
> +#define KVM_HPAGE_GFN_SHIFT(x)  0
>  
>  /* We don't currently support large pages. */
>  #define KVM_NR_PAGE_SIZES	1
> -#define KVM_PAGES_PER_HPAGE(x)	1
> +#define KVM_PAGES_PER_HPAGE(x)  1
>  
> +#ifdef CONFIG_KVM_MIPS_VZ
> +#define KVM_VZROOTID		(GUESTCTL1_VZ_ROOT_GUESTID)
> +#define KVM_VZGUESTID_MASK	(GUESTCTL1_ID)
> +#endif

Any reason to not use the GUESTCTL1_* macros directly?

>  
>  
>  /* Special address that contains the comm page, used for reducing # of traps */
> @@ -42,11 +49,20 @@
>  #define KVM_GUEST_KERNEL_MODE(vcpu)	((kvm_read_c0_guest_status(vcpu->arch.cop0) & (ST0_EXL | ST0_ERL)) || \
>  					((kvm_read_c0_guest_status(vcpu->arch.cop0) & KSU_USER) == 0))
>  
> +#ifdef CONFIG_KVM_MIPS_VZ
> +#define KVM_GUEST_KUSEG             0x00000000UL
> +#define KVM_GUEST_KSEG0             0x80000000UL
> +#define KVM_GUEST_KSEG1             0xa0000000UL
> +#define KVM_GUEST_KSEG23            0xc0000000UL
> +#define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0xe0000000)
> +#define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
> +#else
>  #define KVM_GUEST_KUSEG             0x00000000UL
>  #define KVM_GUEST_KSEG0             0x40000000UL
>  #define KVM_GUEST_KSEG23            0x60000000UL
>  #define KVM_GUEST_KSEGX(a)          ((_ACAST32_(a)) & 0x60000000)
>  #define KVM_GUEST_CPHYSADDR(a)      ((_ACAST32_(a)) & 0x1fffffff)
> +#endif
>  
>  #define KVM_GUEST_CKSEG0ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0)
>  #define KVM_GUEST_CKSEG1ADDR(a)		(KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG1)
> @@ -100,6 +116,21 @@ struct kvm_vcpu_stat {
>  	u32 resvd_inst_exits;
>  	u32 break_inst_exits;
>  	u32 flush_dcache_exits;
> +#ifdef CONFIG_KVM_MIPS_VZ
> +	u32 hypervisor_gpsi_exits;
> +	u32 hypervisor_gpsi_cp0_exits;
> +	u32 hypervisor_gpsi_cache_exits;
> +	u32 hypervisor_gsfc_exits;
> +	u32 hypervisor_gsfc_cp0_status_exits;
> +	u32 hypervisor_gsfc_cp0_cause_exits;
> +	u32 hypervisor_gsfc_cp0_intctl_exits;
> +	u32 hypervisor_hc_exits;
> +	u32 hypervisor_grr_exits;
> +	u32 hypervisor_gva_exits;
> +	u32 hypervisor_ghfc_exits;
> +	u32 hypervisor_gpa_exits;
> +	u32 hypervisor_resv_exits;
> +#endif
>  	u32 halt_wakeup;
>  };
>  
> @@ -118,6 +149,21 @@ enum kvm_mips_exit_types {
>  	RESVD_INST_EXITS,
>  	BREAK_INST_EXITS,
>  	FLUSH_DCACHE_EXITS,
> +#ifdef CONFIG_KVM_MIPS_VZ
> +	HYPERVISOR_GPSI_EXITS,
> +	HYPERVISOR_GPSI_CP0_EXITS,
> +	HYPERVISOR_GPSI_CACHE_EXITS,
> +	HYPERVISOR_GSFC_EXITS,
> +	HYPERVISOR_GSFC_CP0_STATUS_EXITS,
> +	HYPERVISOR_GSFC_CP0_CAUSE_EXITS,
> +	HYPERVISOR_GSFC_CP0_INTCTL_EXITS,
> +	HYPERVISOR_HC_EXITS,
> +	HYPERVISOR_GRR_EXITS,
> +	HYPERVISOR_GVA_EXITS,
> +	HYPERVISOR_GHFC_EXITS,
> +	HYPERVISOR_GPA_EXITS,
> +	HYPERVISOR_RESV_EXITS,
> +#endif
>  	MAX_KVM_MIPS_EXIT_TYPES
>  };
>  
> @@ -126,8 +172,8 @@ struct kvm_arch_memory_slot {
>  
>  struct kvm_arch {
>  	/* Guest GVA->HPA page table */
> -	unsigned long *guest_pmap;
> -	unsigned long guest_pmap_npages;
> +	ulong *guest_pmap;
> +	ulong guest_pmap_npages;

Please make this search-and-replace a separate patch.

>  
>  	/* Wired host TLB used for the commpage */
>  	int commpage_tlb;
> @@ -137,9 +183,9 @@ struct kvm_arch {
>  #define N_MIPS_COPROC_SEL   	8
>  
>  struct mips_coproc {
> -	unsigned long reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
> +	ulong reg[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
>  #ifdef CONFIG_KVM_MIPS_DEBUG_COP0_COUNTERS
> -	unsigned long stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
> +	ulong stat[N_MIPS_COPROC_REGS][N_MIPS_COPROC_SEL];
>  #endif
>  };
>  
> @@ -294,6 +340,9 @@ enum mips_mmu_types {
>  #define T_RES_INST      10	/* Reserved instruction exception */
>  #define T_COP_UNUSABLE      11	/* Coprocessor unusable */
>  #define T_OVFLOW        12	/* Arithmetic overflow */
> +#ifdef CONFIG_KVM_MIPS_VZ
> +#define T_GUEST_EXIT        27	/* Guest Exit (VZ ASE) */
> +#endif

Also doesn't need to be conditional, does it?

Even in the implementation, I would prefer to have no ifdefs like

#ifdef CONFIG_KVM_MIPS_VZ
	case T_GUEST_EXIT:
		/* defer exit accounting to handler */
		ret = kvm_mips_callbacks->handle_guest_exit(vcpu);
		break;

#endif

in generic code.
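
That is, keep the case unconditional and let each backend supply the 
callback; the trap-and-emulate backend can then fail soft.  As a sketch 
(the handler name and return value are illustrative, following the 
existing callback conventions):

	case T_GUEST_EXIT:
		/* defer exit accounting to handler */
		ret = kvm_mips_callbacks->handle_guest_exit(vcpu);
		break;

with, in the trap-and-emulate backend:

	/* T_GUEST_EXIT cannot legitimately happen without VZ */
	static int kvm_trap_emul_handle_guest_exit(struct kvm_vcpu *vcpu)
	{
		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
		return RESUME_HOST;
	}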

>  
>  /*
>   * Trap definitions added for r4000 port.
> @@ -336,7 +385,7 @@ enum emulation_result {
>  #define VPN2_MASK           0xffffe000
>  #define TLB_IS_GLOBAL(x)    (((x).tlb_lo0 & MIPS3_PG_G) && ((x).tlb_lo1 & MIPS3_PG_G))
>  #define TLB_VPN2(x)         ((x).tlb_hi & VPN2_MASK)
> -#define TLB_ASID(x)         (ASID_MASK((x).tlb_hi))
> +#define TLB_ASID(x)         ((x).tlb_hi & ASID_MASK)
>  #define TLB_IS_VALID(x, va) (((va) & (1 << PAGE_SHIFT)) ? ((x).tlb_lo1 & MIPS3_PG_V) : ((x).tlb_lo0 & MIPS3_PG_V))
>  
>  struct kvm_mips_tlb {
> @@ -344,26 +393,29 @@ struct kvm_mips_tlb {
>  	long tlb_hi;
>  	long tlb_lo0;
>  	long tlb_lo1;
> +#ifdef CONFIG_KVM_MIPS_VZ
> +	uint32_t guestctl1;
> +#endif
>  };
>  
>  #define KVM_MIPS_GUEST_TLB_SIZE     64
>  struct kvm_vcpu_arch {
>  	void *host_ebase, *guest_ebase;
> -	unsigned long host_stack;
> -	unsigned long host_gp;
> +	ulong host_stack;
> +	ulong host_gp;
>  
>  	/* Host CP0 registers used when handling exits from guest */
> -	unsigned long host_cp0_badvaddr;
> -	unsigned long host_cp0_cause;
> -	unsigned long host_cp0_epc;
> -	unsigned long host_cp0_entryhi;
> +	ulong host_cp0_badvaddr;
> +	ulong host_cp0_cause;
> +	ulong host_cp0_epc;
> +	ulong host_cp0_entryhi;
>  	uint32_t guest_inst;
>  
>  	/* GPRS */
> -	unsigned long gprs[32];
> -	unsigned long hi;
> -	unsigned long lo;
> -	unsigned long pc;
> +	ulong gprs[32];
> +	ulong hi;
> +	ulong lo;
> +	ulong pc;
>  
>  	/* FPU State */
>  	struct mips_fpu_struct fpu;
> @@ -380,15 +432,12 @@ struct kvm_vcpu_arch {
>  	int32_t host_cp0_count;
>  
>  	/* Bitmask of exceptions that are pending */
> -	unsigned long pending_exceptions;
> +	ulong pending_exceptions;
>  
>  	/* Bitmask of pending exceptions to be cleared */
> -	unsigned long pending_exceptions_clr;
> +	ulong pending_exceptions_clr;
>  
> -	unsigned long pending_load_cause;
> -
> -	/* Save/Restore the entryhi register when are are preempted/scheduled back in */
> -	unsigned long preempt_entryhi;
> +	ulong pending_load_cause;
>  
>  	/* S/W Based TLB for guest */
>  	struct kvm_mips_tlb guest_tlb[KVM_MIPS_GUEST_TLB_SIZE];
> @@ -400,6 +449,13 @@ struct kvm_vcpu_arch {
>  
>  	struct kvm_mips_tlb shadow_tlb[NR_CPUS][KVM_MIPS_GUEST_TLB_SIZE];
>  
> +#ifdef CONFIG_KVM_MIPS_VZ
> +	/* vcpu's vzguestid is different on each host cpu in an smp system */
> +	uint32_t vzguestid[NR_CPUS];
> +
> +	/* storage for saved guest CP0 registers */
> +	struct kvm_regs guest_regs;
> +#endif
>  
>  	struct hrtimer comparecount_timer;
>  
> @@ -409,6 +465,74 @@ struct kvm_vcpu_arch {
>  	int wait;
>  };
>  
> +#ifdef CONFIG_KVM_MIPS_VZ
> +
> +#define kvm_read_c0_guest_index(cop0)                ((void)cop0, read_c0_guest_index())

Parentheses around cop0.
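
That is, each expansion should wrap the argument so the macros stay 
correct for arbitrary expressions, e.g. (corrected form):

	#define kvm_read_c0_guest_index(cop0) \
		((void)(cop0), read_c0_guest_index())
	#define kvm_write_c0_guest_index(cop0, val) \
		((void)(cop0), write_c0_guest_index(val))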

> +#define kvm_write_c0_guest_index(cop0, val)          ((void)cop0, write_c0_guest_index(val))
> +#define kvm_read_c0_guest_random(cop0)               ((void)cop0, read_c0_guest_random())
> +#define kvm_read_c0_guest_entrylo0(cop0)             ((void)cop0, read_c0_guest_entrylo0())
> +#define kvm_write_c0_guest_entrylo0(cop0, val)       ((void)cop0, write_c0_guest_entrylo0(val))
> +#define kvm_read_c0_guest_entrylo1(cop0)             ((void)cop0, read_c0_guest_entrylo1())
> +#define kvm_write_c0_guest_entrylo1(cop0, val)       ((void)cop0, write_c0_guest_entrylo1(val))
> +#define kvm_read_c0_guest_context(cop0)              ((void)cop0, read_c0_guest_context())
> +#define kvm_write_c0_guest_context(cop0, val)        ((void)cop0, write_c0_guest_context(val))
> +#define kvm_read_c0_guest_userlocal(cop0)            ((void)cop0, read_c0_guest_userlocal())
> +#define kvm_write_c0_guest_userlocal(cop0, val)      ((void)cop0, write_c0_guest_userlocal(val))
> +#define kvm_read_c0_guest_pagemask(cop0)             ((void)cop0, read_c0_guest_pagemask())
> +#define kvm_write_c0_guest_pagemask(cop0, val)       ((void)cop0, write_c0_guest_pagemask(val))
> +#define kvm_read_c0_guest_pagegrain(cop0)            ((void)cop0, read_c0_guest_pagegrain())
> +#define kvm_write_c0_guest_pagegrain(cop0, val)      ((void)cop0, write_c0_guest_pagegrain(val))
> +#define kvm_read_c0_guest_wired(cop0)                ((void)cop0, read_c0_guest_wired())
> +#define kvm_write_c0_guest_wired(cop0, val)          ((void)cop0, write_c0_guest_wired(val))
> +#define kvm_read_c0_guest_hwrena(cop0)               ((void)cop0, read_c0_guest_hwrena())
> +#define kvm_write_c0_guest_hwrena(cop0, val)         ((void)cop0, write_c0_guest_hwrena(val))
> +#define kvm_read_c0_guest_badvaddr(cop0)             ((void)cop0, read_c0_guest_badvaddr())
> +#define kvm_write_c0_guest_badvaddr(cop0, val)       ((void)cop0, write_c0_guest_badvaddr(val))
> +#define kvm_read_c0_guest_count(cop0)                ((void)cop0, read_c0_guest_count())
> +#define kvm_write_c0_guest_count(cop0, val)          ((void)cop0, write_c0_guest_count(val))
> +#define kvm_read_c0_guest_entryhi(cop0)              ((void)cop0, read_c0_guest_entryhi())
> +#define kvm_write_c0_guest_entryhi(cop0, val)        ((void)cop0, write_c0_guest_entryhi(val))
> +#define kvm_read_c0_guest_compare(cop0)              ((void)cop0, read_c0_guest_compare())
> +#define kvm_write_c0_guest_compare(cop0, val)        ((void)cop0, write_c0_guest_compare(val))
> +#define kvm_read_c0_guest_status(cop0)               ((void)cop0, read_c0_guest_status())
> +#define kvm_write_c0_guest_status(cop0, val)         ((void)cop0, write_c0_guest_status(val))
> +#define kvm_read_c0_guest_intctl(cop0)               ((void)cop0, read_c0_guest_intctl())
> +#define kvm_write_c0_guest_intctl(cop0, val)         ((void)cop0, write_c0_guest_intctl(val))
> +#define kvm_read_c0_guest_cause(cop0)                ((void)cop0, read_c0_guest_cause())
> +#define kvm_write_c0_guest_cause(cop0, val)          ((void)cop0, write_c0_guest_cause(val))
> +#define kvm_read_c0_guest_epc(cop0)                  ((void)cop0, read_c0_guest_epc())
> +#define kvm_write_c0_guest_epc(cop0, val)            ((void)cop0, write_c0_guest_epc(val))
> +#define kvm_read_c0_guest_prid(cop0)                 (cop0->reg[MIPS_CP0_PRID][0])
> +#define kvm_write_c0_guest_prid(cop0, val)           (cop0->reg[MIPS_CP0_PRID][0] = (val))
> +#define kvm_read_c0_guest_ebase(cop0)                ((void)cop0, read_c0_guest_ebase())
> +#define kvm_write_c0_guest_ebase(cop0, val)          ((void)cop0, write_c0_guest_ebase(val))
> +#define kvm_read_c0_guest_config(cop0)               ((void)cop0, read_c0_guest_config())
> +#define kvm_read_c0_guest_config1(cop0)              ((void)cop0, read_c0_guest_config1())
> +#define kvm_read_c0_guest_config2(cop0)              ((void)cop0, read_c0_guest_config2())
> +#define kvm_read_c0_guest_config3(cop0)              ((void)cop0, read_c0_guest_config3())
> +#define kvm_read_c0_guest_config4(cop0)              ((void)cop0, read_c0_guest_config4())
> +#define kvm_read_c0_guest_config5(cop0)              ((void)cop0, read_c0_guest_config5())
> +#define kvm_read_c0_guest_config6(cop0)              ((void)cop0, read_c0_guest_config6())
> +#define kvm_read_c0_guest_config7(cop0)              ((void)cop0, read_c0_guest_config7())
> +#define kvm_write_c0_guest_config(cop0, val)         ((void)cop0, write_c0_guest_config(val))
> +#define kvm_write_c0_guest_config1(cop0, val)        ((void)cop0, write_c0_guest_config1(val))
> +#define kvm_write_c0_guest_config2(cop0, val)        ((void)cop0, write_c0_guest_config2(val))
> +#define kvm_write_c0_guest_config3(cop0, val)        ((void)cop0, write_c0_guest_config3(val))
> +#define kvm_write_c0_guest_config4(cop0, val)        ((void)cop0, write_c0_guest_config4(val))
> +#define kvm_write_c0_guest_config5(cop0, val)        ((void)cop0, write_c0_guest_config5(val))
> +#define kvm_write_c0_guest_config6(cop0, val)        ((void)cop0, write_c0_guest_config6(val))
> +#define kvm_write_c0_guest_config7(cop0, val)        ((void)cop0, write_c0_guest_config7(val))
> +#define kvm_read_c0_guest_errorepc(cop0)             ((void)cop0, read_c0_guest_errorepc())
> +#define kvm_write_c0_guest_errorepc(cop0, val)       ((void)cop0, write_c0_guest_errorepc(val))
> +
> +#define kvm_set_c0_guest_status(cop0, val)           ((void)cop0, set_c0_guest_status(val))
> +#define kvm_clear_c0_guest_status(cop0, val)         ((void)cop0, clear_c0_guest_status(val))
> +#define kvm_set_c0_guest_cause(cop0, val)            ((void)cop0, set_c0_guest_cause(val))
> +#define kvm_clear_c0_guest_cause(cop0, val)          ((void)cop0, clear_c0_guest_cause(val))
> +#define kvm_change_c0_guest_cause(cop0, change, val) ((void)cop0, change_c0_guest_cause(change, val))
> +#define kvm_change_c0_guest_ebase(cop0, change, val) ((void)cop0, change_c0_guest_ebase(change, val))
> +
> +#else
>  
>  #define kvm_read_c0_guest_index(cop0)               (cop0->reg[MIPS_CP0_TLB_INDEX][0])
>  #define kvm_write_c0_guest_index(cop0, val)         (cop0->reg[MIPS_CP0_TLB_INDEX][0] = val)
> @@ -471,6 +595,7 @@ struct kvm_vcpu_arch {
>      kvm_set_c0_guest_ebase(cop0, ((val) & (change))); \
>  }
>  
> +#endif
>  
>  struct kvm_mips_callbacks {
>  	int (*handle_cop_unusable) (struct kvm_vcpu *vcpu);
> @@ -482,6 +607,9 @@ struct kvm_mips_callbacks {
>  	int (*handle_syscall) (struct kvm_vcpu *vcpu);
>  	int (*handle_res_inst) (struct kvm_vcpu *vcpu);
>  	int (*handle_break) (struct kvm_vcpu *vcpu);
> +#ifdef CONFIG_KVM_MIPS_VZ
> +	int (*handle_guest_exit) (struct kvm_vcpu *vcpu);
> +#endif

No need to make it conditional.  Ideally, you could make both modes
available independently, so that a kernel with both VZ and
trap-and-emulate could be built.  Can trap-and-emulate kernels run
under VZ?

Otherwise, there's hardly any point in having the callbacks in the first
place.
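
Something along these lines would let a single binary support both 
(sketch only; kvm_mips_vz_init and cpu_has_vz are assumed names here, 
mirroring the existing kvm_mips_emulation_init):

	/* pick the backend at init time instead of at build time */
	static int kvm_mips_install_callbacks(void)
	{
		if (cpu_has_vz)
			return kvm_mips_vz_init(&kvm_mips_callbacks);
		/* trap and emulate works on any MIPS32R2 host */
		return kvm_mips_emulation_init(&kvm_mips_callbacks);
	}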

>  	int (*vm_init) (struct kvm *kvm);
>  	int (*vcpu_init) (struct kvm_vcpu *vcpu);
>  	int (*vcpu_setup) (struct kvm_vcpu *vcpu);
> @@ -517,23 +645,26 @@ uint32_t kvm_get_user_asid(struct kvm_vcpu *vcpu);
>  
>  uint32_t kvm_get_commpage_asid (struct kvm_vcpu *vcpu);
>  
> -extern int kvm_mips_handle_kseg0_tlb_fault(unsigned long badbaddr,
> +#ifdef CONFIG_KVM_MIPS_VZ
> +extern int kvm_mips_handle_vz_root_tlb_fault(ulong badvaddr,
> +					     struct kvm_vcpu *vcpu);
> +#endif

kvm_tlb.c has many parts that, after this patch, are BUG_ON(!cpu_has_vz)'d.

Perhaps you could split it into kvm_tlb.c, kvm_tlb_emul.c, kvm_tlb_vz.c?

Then these functions can also be in a separate header, rather than being
#ifdef'ed in this one.

This raises a related question: how much of kvm_mips_emul.c is reused
under VZ?  In x86, you can trigger emulation of any instruction by
"racing" the emulator: one VCPU triggers emulation of an instruction
that has to be emulated, the second rewrites it to another one on the
fly.  Another way to do the same is to make the iTLB and dTLB point to
different pages (load from the address of the instruction, change the
page table, execute the instruction; then the emulator will use the
entry from the dTLB).

Would it be possible to "trick" KVM into executing unwanted branches of
kvm_mips_emul.c this way?  If so, the best thing would be to just
reenter the guest and retry executing the same instruction.  But a
BUG_ON() is definitely wrong, because it will crash the host.
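
Concretely, the failure path should look more like this sketch, reusing 
the conventions this code already has:

	/* unexpected emulation path: punt to userspace... */
	run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
	er = EMULATE_FAIL;
	/*
	 * ...or return EMULATE_DONE without advancing the PC, so the
	 * guest reenters and retries the same instruction.
	 */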

Paolo

> +extern int kvm_mips_handle_kseg0_tlb_fault(ulong badbaddr,
>  					   struct kvm_vcpu *vcpu);
>  
> -extern int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
> +extern int kvm_mips_handle_commpage_tlb_fault(ulong badvaddr,
>  					      struct kvm_vcpu *vcpu);
>  
>  extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
>  						struct kvm_mips_tlb *tlb,
> -						unsigned long *hpa0,
> -						unsigned long *hpa1);
> +						ulong *hpa0, ulong *hpa1);
>  
> -extern enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause,
> +extern enum emulation_result kvm_mips_handle_tlbmiss(ulong cause,
>  						     uint32_t *opc,
>  						     struct kvm_run *run,
>  						     struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause,
> +extern enum emulation_result kvm_mips_handle_tlbmod(ulong cause,
>  						    uint32_t *opc,
>  						    struct kvm_run *run,
>  						    struct kvm_vcpu *vcpu);
> @@ -542,14 +673,13 @@ extern void kvm_mips_dump_host_tlbs(void);
>  extern void kvm_mips_dump_guest_tlbs(struct kvm_vcpu *vcpu);
>  extern void kvm_mips_dump_shadow_tlbs(struct kvm_vcpu *vcpu);
>  extern void kvm_mips_flush_host_tlb(int skip_kseg0);
> -extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long entryhi);
> +extern int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, ulong entryhi);
>  extern int kvm_mips_host_tlb_inv_index(struct kvm_vcpu *vcpu, int index);
>  
> -extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu,
> -				     unsigned long entryhi);
> -extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, unsigned long vaddr);
> -extern unsigned long kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
> -						   unsigned long gva);
> +extern int kvm_mips_guest_tlb_lookup(struct kvm_vcpu *vcpu, ulong entryhi);
> +extern int kvm_mips_host_tlb_lookup(struct kvm_vcpu *vcpu, ulong vaddr);
> +extern ulong kvm_mips_translate_guest_kseg0_to_hpa(struct kvm_vcpu *vcpu,
> +						   ulong gva);
>  extern void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
>  				    struct kvm_vcpu *vcpu);
>  extern void kvm_shadow_tlb_put(struct kvm_vcpu *vcpu);
> @@ -564,57 +694,57 @@ extern void kvm_mips_vcpu_put(struct kvm_vcpu *vcpu);
>  uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu);
>  enum emulation_result update_pc(struct kvm_vcpu *vcpu, uint32_t cause);
>  
> -extern enum emulation_result kvm_mips_emulate_inst(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_inst(ulong cause,
>  						   uint32_t *opc,
>  						   struct kvm_run *run,
>  						   struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_syscall(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_syscall(ulong cause,
>  						      uint32_t *opc,
>  						      struct kvm_run *run,
>  						      struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_tlbmiss_ld(ulong cause,
>  							 uint32_t *opc,
>  							 struct kvm_run *run,
>  							 struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_tlbinv_ld(ulong cause,
>  							uint32_t *opc,
>  							struct kvm_run *run,
>  							struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_tlbmiss_st(ulong cause,
>  							 uint32_t *opc,
>  							 struct kvm_run *run,
>  							 struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_tlbinv_st(ulong cause,
>  							uint32_t *opc,
>  							struct kvm_run *run,
>  							struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_tlbmod(ulong cause,
>  						     uint32_t *opc,
>  						     struct kvm_run *run,
>  						     struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_fpu_exc(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_fpu_exc(ulong cause,
>  						      uint32_t *opc,
>  						      struct kvm_run *run,
>  						      struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_handle_ri(unsigned long cause,
> +extern enum emulation_result kvm_mips_handle_ri(ulong cause,
>  						uint32_t *opc,
>  						struct kvm_run *run,
>  						struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_ri_exc(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_ri_exc(ulong cause,
>  						     uint32_t *opc,
>  						     struct kvm_run *run,
>  						     struct kvm_vcpu *vcpu);
>  
> -extern enum emulation_result kvm_mips_emulate_bp_exc(unsigned long cause,
> +extern enum emulation_result kvm_mips_emulate_bp_exc(ulong cause,
>  						     uint32_t *opc,
>  						     struct kvm_run *run,
>  						     struct kvm_vcpu *vcpu);
> @@ -624,7 +754,7 @@ extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
>  
>  enum emulation_result kvm_mips_emulate_count(struct kvm_vcpu *vcpu);
>  
> -enum emulation_result kvm_mips_check_privilege(unsigned long cause,
> +enum emulation_result kvm_mips_check_privilege(ulong cause,
>  					       uint32_t *opc,
>  					       struct kvm_run *run,
>  					       struct kvm_vcpu *vcpu);
> @@ -659,9 +789,17 @@ extern int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc,
>  			       struct kvm_vcpu *vcpu);
>  
>  /* Misc */
> -extern void mips32_SyncICache(unsigned long addr, unsigned long size);
> +extern void mips32_SyncICache(ulong addr, ulong size);
>  extern int kvm_mips_dump_stats(struct kvm_vcpu *vcpu);
> -extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
> -
> +extern ulong kvm_mips_get_ramsize(struct kvm *kvm);
> +
> +#ifdef CONFIG_KVM_MIPS_VZ
> +/* VZ ASE specific functions */
> +extern void kvm_vz_restore_guest_timer_int(struct kvm_vcpu *vcpu,
> +					   struct kvm_regs *regs);
> +extern void mips32_ClearGuestRID(void);
> +extern void mips32_SetGuestRID(ulong guestRID);
> +extern void mips32_SetGuestRIDtoGuestID(void);
> +#endif
>  
>  #endif /* __MIPS_KVM_HOST_H__ */
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-19  5:47 ` [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context Sanjay Lal
  2013-05-28 15:04   ` Paolo Bonzini
@ 2013-05-28 16:14   ` Paolo Bonzini
  2013-05-30 18:35     ` Sanjay Lal
  1 sibling, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-28 16:14 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 19/05/2013 07:47, Sanjay Lal wrote:
> +#endif
> +		local_irq_save(flags);
> +		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
> +			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +			er = EMULATE_FAIL;
> +		}
> +		local_irq_restore(flags);
> +	}

This is introduced much later.  Please make sure that, with
CONFIG_KVM_MIPS_VZ, every patch builds.

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-19  5:47 ` [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability Sanjay Lal
@ 2013-05-28 16:34   ` Paolo Bonzini
  2013-05-30 17:07     ` David Daney
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-28 16:34 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 19/05/2013 07:47, Sanjay Lal wrote:
> - Add API to allow clients (QEMU etc.) to check whether the H/W
>   supports the MIPS VZ-ASE.

Why does this matter to userspace?  Does userspace have some way to
detect whether the kernel is unmodified or minimally modified?

Paolo

> Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
> ---
>  include/uapi/linux/kvm.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index a5c86fc..5889e976 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -666,6 +666,7 @@ struct kvm_ppc_smmu_info {
>  #define KVM_CAP_IRQ_MPIC 90
>  #define KVM_CAP_PPC_RTAS 91
>  #define KVM_CAP_IRQ_XICS 92
> +#define KVM_CAP_MIPS_VZ_ASE 93
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-28 16:34   ` Paolo Bonzini
@ 2013-05-30 17:07     ` David Daney
  2013-05-30 17:51       ` Paolo Bonzini
  2013-05-30 18:30       ` Sanjay Lal
  0 siblings, 2 replies; 40+ messages in thread
From: David Daney @ 2013-05-30 17:07 UTC (permalink / raw)
  To: Paolo Bonzini, Sanjay Lal
  Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 05/28/2013 09:34 AM, Paolo Bonzini wrote:
> On 19/05/2013 07:47, Sanjay Lal wrote:
>> - Add API to allow clients (QEMU etc.) to check whether the H/W
>>    supports the MIPS VZ-ASE.
>
> Why does this matter to userspace?  Does userspace have some way to
> detect whether the kernel is unmodified or minimally modified?
>

There are (will be) two types of VM presented by MIPS KVM:

1) That provided by the initial patch where a faux-MIPS is emulated and 
all kernel code must be in the USEG address space.

2) Real MIPS, addressing works as per the architecture specification.

Presumably the user-space client would like to know which of these are 
supported, as well as be able to select the desired model.

I don't know the best way to do this, but I agree that 
KVM_CAP_MIPS_VZ_ASE is probably not the best name for it.

My idea was to have the arg of the KVM_CREATE_VM ioctl specify the 
desired style.
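
From userspace that would look something like this (sketch; the two 
machine-type constants are made-up names for the two styles):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	#define KVM_VM_MIPS_TE	0	/* hypothetical: trap and emulate */
	#define KVM_VM_MIPS_VZ	1	/* hypothetical: full VZ */

	static int create_vm(int style)
	{
		int sys_fd = open("/dev/kvm", O_RDWR);

		if (sys_fd < 0)
			return -1;
		/* the KVM_CREATE_VM arg selects the machine type */
		return ioctl(sys_fd, KVM_CREATE_VM, style);
	}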

David Daney



> Paolo
>
>> Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
>> ---
>>   include/uapi/linux/kvm.h | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index a5c86fc..5889e976 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -666,6 +666,7 @@ struct kvm_ppc_smmu_info {
>>   #define KVM_CAP_IRQ_MPIC 90
>>   #define KVM_CAP_PPC_RTAS 91
>>   #define KVM_CAP_IRQ_XICS 92
>> +#define KVM_CAP_MIPS_VZ_ASE 93
>>
>>   #ifdef KVM_CAP_IRQ_ROUTING
>>
>>
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-30 17:07     ` David Daney
@ 2013-05-30 17:51       ` Paolo Bonzini
  2013-05-30 18:35         ` David Daney
  2013-05-30 18:30       ` Sanjay Lal
  1 sibling, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-30 17:51 UTC (permalink / raw)
  To: David Daney
  Cc: Sanjay Lal, kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 30/05/2013 19:07, David Daney wrote:
> On 05/28/2013 09:34 AM, Paolo Bonzini wrote:
>> On 19/05/2013 07:47, Sanjay Lal wrote:
>>> - Add API to allow clients (QEMU etc.) to check whether the H/W
>>>    supports the MIPS VZ-ASE.
>>
>> Why does this matter to userspace?  Does userspace have some way to
>> detect whether the kernel is unmodified or minimally modified?
>>
> 
> There are (will be) two types of VM presented by MIPS KVM:
> 
> 1) That provided by the initial patch where a faux-MIPS is emulated and
> all kernel code must be in the USEG address space.
> 
> 2) Real MIPS, addressing works as per the architecture specification.
> 
> Presumably the user-space client would like to know which of these are
> supported, as well as be able to select the desired model.

Understood.  It's really two different machine types.

> I don't know the best way to do this, but I agree that
> KVM_CAP_MIPS_VZ_ASE is probably not the best name for it.
> 
> My idea was to have the arg of the KVM_CREATE_VM ioctl specify the
> desired style.

Ok.  How complex is it?  Do you plan to do this when the patches are
"really ready" for Linus' tree?

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-30 17:07     ` David Daney
  2013-05-30 17:51       ` Paolo Bonzini
@ 2013-05-30 18:30       ` Sanjay Lal
  1 sibling, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-30 18:30 UTC (permalink / raw)
  To: David Daney
  Cc: Paolo Bonzini, kvm, linux-mips, Ralf Baechle, Gleb Natapov,
	Marcelo Tosatti


On May 30, 2013, at 10:07 AM, David Daney wrote:

> On 05/28/2013 09:34 AM, Paolo Bonzini wrote:
>> On 19/05/2013 07:47, Sanjay Lal wrote:
>>> - Add API to allow clients (QEMU etc.) to check whether the H/W
>>>   supports the MIPS VZ-ASE.
>> 
>> Why does this matter to userspace?  Does userspace have some way to
>> detect whether the kernel is unmodified or minimally modified?
>> 
> 
> There are (will be) two types of VM presented by MIPS KVM:
> 
> 1) That provided by the initial patch where a faux-MIPS is emulated and all kernel code must be in the USEG address space.
> 
> 2) Real MIPS, addressing works as per the architecture specification.
> 
> Presumably the user-space client would like to know which of these are supported, as well as be able to select the desired model.
> 
> I don't know the best way to do this, but I agree that KVM_CAP_MIPS_VZ_ASE is probably not the best name for it.
> 
> My idea was to have the arg of the KVM_CREATE_VM ioctl specify the desired style.
> 
> David Daney
> 
> 


Hi Paolo, just wanted to add to David's comments.  KVM/MIPS currently supports the two modes David mentioned, based on a kernel config option.  KVM_CAP_MIPS_VZ_ASE is used by QEMU to make sure that the currently loaded kvm module supports H/W virtualization.

It's a bit cumbersome on MIPS, because you can't really fall back to trap and emulate: the guest kernel built for trap and emulate has a user-mode link address.

I am open to other ways of doing this.
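
For reference, the QEMU-side probe amounts to this sketch (only 
KVM_CHECK_EXTENSION and the new capability number come from the UAPI):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* nonzero if the loaded kvm module supports the VZ-ASE */
	static int host_has_vz_ase(int sys_fd)
	{
		return ioctl(sys_fd, KVM_CHECK_EXTENSION,
			     KVM_CAP_MIPS_VZ_ASE) > 0;
	}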

Regards
Sanjay

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-28 15:04   ` Paolo Bonzini
@ 2013-05-30 18:35     ` Sanjay Lal
  0 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-30 18:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti


On May 28, 2013, at 8:04 AM, Paolo Bonzini wrote:

> On 19/05/2013 07:47, Sanjay Lal wrote:
>> +static int kvm_vz_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>> +{
>> +	struct mips_coproc *cop0 = vcpu->arch.cop0;
>> +
>> +	/* some registers are not restored
>> +	 * random, count        : read-only
>> +	 * userlocal            : not implemented in qemu
>> +	 * config6              : not implemented in processor variant
>> +	 * compare, cause       : defer to kvm_vz_restore_guest_timer_int
>> +	 */
>> +
>> +	kvm_write_c0_guest_index(cop0, regs->cp0reg[MIPS_CP0_TLB_INDEX][0]);
>> +	kvm_write_c0_guest_entrylo0(cop0, regs->cp0reg[MIPS_CP0_TLB_LO0][0]);
>> +	kvm_write_c0_guest_entrylo1(cop0, regs->cp0reg[MIPS_CP0_TLB_LO1][0]);
>> +	kvm_write_c0_guest_context(cop0, regs->cp0reg[MIPS_CP0_TLB_CONTEXT][0]);
>> +	kvm_write_c0_guest_pagemask(cop0,
>> +				    regs->cp0reg[MIPS_CP0_TLB_PG_MASK][0]);
>> +	kvm_write_c0_guest_pagegrain(cop0,
>> +				     regs->cp0reg[MIPS_CP0_TLB_PG_MASK][1]);
>> +	kvm_write_c0_guest_wired(cop0, regs->cp0reg[MIPS_CP0_TLB_WIRED][0]);
>> +	kvm_write_c0_guest_hwrena(cop0, regs->cp0reg[MIPS_CP0_HWRENA][0]);
>> +	kvm_write_c0_guest_badvaddr(cop0, regs->cp0reg[MIPS_CP0_BAD_VADDR][0]);
>> +	/* skip kvm_write_c0_guest_count */
>> +	kvm_write_c0_guest_entryhi(cop0, regs->cp0reg[MIPS_CP0_TLB_HI][0]);
>> +	/* defer kvm_write_c0_guest_compare */
>> +	kvm_write_c0_guest_status(cop0, regs->cp0reg[MIPS_CP0_STATUS][0]);
>> +	kvm_write_c0_guest_intctl(cop0, regs->cp0reg[MIPS_CP0_STATUS][1]);
>> +	/* defer kvm_write_c0_guest_cause */
>> +	kvm_write_c0_guest_epc(cop0, regs->cp0reg[MIPS_CP0_EXC_PC][0]);
>> +	kvm_write_c0_guest_prid(cop0, regs->cp0reg[MIPS_CP0_PRID][0]);
>> +	kvm_write_c0_guest_ebase(cop0, regs->cp0reg[MIPS_CP0_PRID][1]);
>> +
>> +	/* only restore implemented config registers */
>> +	kvm_write_c0_guest_config(cop0, regs->cp0reg[MIPS_CP0_CONFIG][0]);
>> +
>> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][0] & MIPS_CONF_M) &&
>> +			cpu_vz_has_config1)
>> +		kvm_write_c0_guest_config1(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][1]);
>> +
>> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][1] & MIPS_CONF_M) &&
>> +			cpu_vz_has_config2)
>> +		kvm_write_c0_guest_config2(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][2]);
>> +
>> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][2] & MIPS_CONF_M) &&
>> +			cpu_vz_has_config3)
>> +		kvm_write_c0_guest_config3(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][3]);
>> +
>> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][3] & MIPS_CONF_M) &&
>> +			cpu_vz_has_config4)
>> +		kvm_write_c0_guest_config4(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][4]);
>> +
>> +	if ((regs->cp0reg[MIPS_CP0_CONFIG][4] & MIPS_CONF_M) &&
>> +			cpu_vz_has_config5)
>> +		kvm_write_c0_guest_config5(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][5]);
>> +
>> +	if (cpu_vz_has_config6)
>> +		kvm_write_c0_guest_config6(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][6]);
>> +	if (cpu_vz_has_config7)
>> +		kvm_write_c0_guest_config7(cop0,
>> +				regs->cp0reg[MIPS_CP0_CONFIG][7]);
>> +
>> +	kvm_write_c0_guest_errorepc(cop0, regs->cp0reg[MIPS_CP0_ERROR_PC][0]);
>> +
>> +	/* call after setting MIPS_CP0_CAUSE to avoid having it overwritten
>> +	 * this will set guest compare and cause.TI if necessary
>> +	 */
>> +	kvm_vz_restore_guest_timer_int(vcpu, regs);
>> +
>> +	return 0;
>> +}
> 
> All this is now obsolete after David's patches (reusing kvm_regs looked
> a bit strange in fact).
> 
> Paolo
> 

These patches were against 3.10-rc2; now that David's patches have been accepted, I'll migrate to the new ABI for v2 of the patch set.

Regards
Sanjay

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability
  2013-05-30 17:51       ` Paolo Bonzini
@ 2013-05-30 18:35         ` David Daney
  0 siblings, 0 replies; 40+ messages in thread
From: David Daney @ 2013-05-30 18:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sanjay Lal, kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 05/30/2013 10:51 AM, Paolo Bonzini wrote:
> On 30/05/2013 19:07, David Daney wrote:
>> On 05/28/2013 09:34 AM, Paolo Bonzini wrote:
>>> On 19/05/2013 07:47, Sanjay Lal wrote:
>>>> - Add API to allow clients (QEMU etc.) to check whether the H/W
>>>>     supports the MIPS VZ-ASE.
>>>
>>> Why does this matter to userspace?  Does userspace have some way to
>>> detect whether the kernel is unmodified or minimally modified?
>>>
>>
>> There are (will be) two types of VM presented by MIPS KVM:
>>
>> 1) That provided by the initial patch where a faux-MIPS is emulated and
>> all kernel code must be in the USEG address space.
>>
>> 2) Real MIPS, addressing works as per the architecture specification.
>>
>> Presumably the user-space client would like to know which of these are
>> supported, as well as be able to select the desired model.
>
> Understood.  It's really two different machine types.
>
>> I don't know the best way to do this, but I agree that
>> KVM_CAP_MIPS_VZ_ASE is probably not the best name for it.
>>
>> My idea was to have the arg of the KVM_CREATE_VM ioctl specify the
>> desired style.
>
> Ok.  How complex is it?  Do you plan to do this when the patches are
> "really ready" for Linus' tree?

I am currently working on preparing a patch set that implements MIPS-VZ 
in a slightly different manner than Sanjay's patches, so there will 
likely be some back and forth in getting everything properly integrated 
into a sane implementation.

So I don't know exactly how to answer this question other than to say 
that I don't think things should go into Linus's tree until they are 
"really ready", which would include resolving this issue.

David Daney

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-28 16:14   ` Paolo Bonzini
@ 2013-05-30 18:35     ` Sanjay Lal
  2013-05-30 20:11       ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Sanjay Lal @ 2013-05-30 18:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti


On May 28, 2013, at 9:14 AM, Paolo Bonzini wrote:

> On 19/05/2013 07:47, Sanjay Lal wrote:
>> +#endif
>> +		local_irq_save(flags);
>> +		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
>> +			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>> +			er = EMULATE_FAIL;
>> +		}
>> +		local_irq_restore(flags);
>> +	}
> 
> This is introduced much later.  Please make sure that, with
> CONFIG_KVM_MIPS_VZ, every patch builds.
> 
> Paolo
> 

Again, I think this has to do with the fact that the patches were against 3.10-rc2; I will rebase for v2.

Regards
Sanjay

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-30 18:35     ` Sanjay Lal
@ 2013-05-30 20:11       ` Paolo Bonzini
  2013-05-31  1:56         ` Sanjay Lal
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2013-05-30 20:11 UTC (permalink / raw)
  To: Sanjay Lal; +Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti

On 30/05/2013 20:35, Sanjay Lal wrote:
>>> >> +#endif
>>> >> +		local_irq_save(flags);
>>> >> +		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
>>> >> +			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>>> >> +			er = EMULATE_FAIL;
>>> >> +		}
>>> >> +		local_irq_restore(flags);
>>> >> +	}
>> > 
>> > This is introduced much later.  Please make sure that, with
>> > CONFIG_KVM_MIPS_VZ, every patch builds.
>> > 
>> > Paolo
>> > 
> Again, I think this has to do with the fact that the patches were
> against 3.10-rc2; I will rebase for v2.

No, this is a simple patch ordering problem.
kvm_mips_handle_vz_root_tlb_fault is added in patch 11 only.

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context.
  2013-05-30 20:11       ` Paolo Bonzini
@ 2013-05-31  1:56         ` Sanjay Lal
  0 siblings, 0 replies; 40+ messages in thread
From: Sanjay Lal @ 2013-05-31  1:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-mips, Ralf Baechle, Gleb Natapov, Marcelo Tosatti


On May 30, 2013, at 1:11 PM, Paolo Bonzini wrote:

> On 30/05/2013 20:35, Sanjay Lal wrote:
>>>>>> +#endif
>>>>>> +		local_irq_save(flags);
>>>>>> +		if (kvm_mips_handle_vz_root_tlb_fault(badvaddr, vcpu) < 0) {
>>>>>> +			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>>>>>> +			er = EMULATE_FAIL;
>>>>>> +		}
>>>>>> +		local_irq_restore(flags);
>>>>>> +	}
>>>> 
>>>> This is introduced much later.  Please make sure that, with
>>>> CONFIG_KVM_MIPS_VZ, every patch builds.
>>>> 
>>>> Paolo
>>>> 
>> Again, I think this has to do with the fact that the patches were
>> against 3.10-rc2; I will rebase for v2.
> 
> No, this is a simple patch ordering problem.
> kvm_mips_handle_vz_root_tlb_fault is added in patch 11 only.
> 
> Paolo
> 


Ah, I see what you mean.  I'll fix the ordering in v2.

Thanks
Sanjay

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread

Thread overview: 40+ messages
2013-05-19  5:47 [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) Sanjay Lal
2013-05-19  5:47 ` [PATCH 01/18] Revert "MIPS: microMIPS: Support dynamic ASID sizing." Sanjay Lal
2013-05-19  5:47 ` [PATCH 02/18] Revert "MIPS: Allow ASID size to be determined at boot time." Sanjay Lal
2013-05-19  5:47 ` [PATCH 03/18] KVM/MIPS32: Export min_low_pfn Sanjay Lal
2013-05-19  5:47 ` [PATCH 04/18] KVM/MIPS32-VZ: MIPS VZ-ASE related register defines and helper macros Sanjay Lal
2013-05-19  5:47 ` [PATCH 05/18] KVM/MIPS32-VZ: VZ-ASE assembler wrapper functions to set GuestIDs Sanjay Lal
2013-05-19 13:36   ` Sergei Shtylyov
2013-05-19  5:47 ` [PATCH 06/18] KVM/MIPS32-VZ: VZ-ASE related callbacks to handle guest exceptions that trap to the Root context Sanjay Lal
2013-05-28 15:04   ` Paolo Bonzini
2013-05-30 18:35     ` Sanjay Lal
2013-05-28 16:14   ` Paolo Bonzini
2013-05-30 18:35     ` Sanjay Lal
2013-05-30 20:11       ` Paolo Bonzini
2013-05-31  1:56         ` Sanjay Lal
2013-05-19  5:47 ` [PATCH 07/18] KVM/MIPS32: VZ-ASE related CPU feature flags and options Sanjay Lal
2013-05-19  5:47 ` [PATCH 08/18] KVM/MIPS32-VZ: Entry point for trampolining to the guest and trap handlers Sanjay Lal
2013-05-28 14:43   ` Paolo Bonzini
2013-05-19  5:47 ` [PATCH 09/18] KVM/MIPS32-VZ: Add support for CONFIG_KVM_MIPS_VZ option Sanjay Lal
2013-05-19  5:47 ` [PATCH 10/18] KVM/MIPS32-VZ: Add API for VZ-ASE Capability Sanjay Lal
2013-05-28 16:34   ` Paolo Bonzini
2013-05-30 17:07     ` David Daney
2013-05-30 17:51       ` Paolo Bonzini
2013-05-30 18:35         ` David Daney
2013-05-30 18:30       ` Sanjay Lal
2013-05-19  5:47 ` [PATCH 11/18] KVM/MIPS32-VZ: VZ: Handle Guest TLB faults that are handled in Root context Sanjay Lal
2013-05-19  5:47 ` [PATCH 12/18] KVM/MIPS32-VZ: VM Exit Stats, add VZ exit reasons Sanjay Lal
2013-05-19  5:47 ` [PATCH 13/18] KVM/MIPS32-VZ: Top level handler for Guest faults Sanjay Lal
2013-05-19  5:47 ` [PATCH 14/18] KVM/MIPS32-VZ: Guest exception batching support Sanjay Lal
2013-05-19  5:47 ` [PATCH 15/18] KVM/MIPS32: Add dummy trap handler to catch unexpected exceptions and dump out useful info Sanjay Lal
2013-05-19  5:47 ` [PATCH 16/18] KVM/MIPS32-VZ: Add VZ-ASE support to KVM/MIPS data structures Sanjay Lal
2013-05-28 15:24   ` Paolo Bonzini
2013-05-19  5:47 ` [PATCH 17/18] KVM/MIPS32: Revert to older method for accessing ASID parameters Sanjay Lal
2013-05-19  5:47 ` [PATCH 18/18] KVM/MIPS32-VZ: Dump out additional info about VZ features as part of /proc/cpuinfo Sanjay Lal
2013-05-20 15:50 ` [PATCH 00/18] KVM/MIPS32: Support for the new Virtualization ASE (VZ-ASE) David Daney
2013-05-20 16:58   ` Sanjay Lal
2013-05-20 17:29     ` David Daney
2013-05-20 17:34       ` Sanjay Lal
2013-05-20 18:36     ` Maciej W. Rozycki
2013-05-20 18:58       ` David Daney
2013-05-27 12:45         ` Maciej W. Rozycki
