* [PATCH 00/27] Book3S_32 (PPC32) KVM support
From: Alexander Graf @ 2010-04-15 22:11 UTC
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Now that we have Book3S_64 KVM support, the next obvious step is to support
the generation before it: Book3S_32.

This patch set adds support for Book3S_32 hosts, making your old G4 that much
more useful. It should also work on fancy exotic systems like the Wii and the
GameCube, but I haven't tried those yet.

As for the path I took, I tried to share as much functionality and code as
possible with the 64-bit host support, so wherever code was reusable, it is
reused.
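
To give a rough idea of the pattern (an illustrative sketch, not code from
this series; the exact header and config names are assumptions based on the
patch titles below), the shared Book3S code compiles once per subarchitecture
and pulls in the 32-bit or 64-bit specifics through the config:

	/*
	 * Sketch: generic Book3S code selecting per-subarch MMU helpers.
	 * Illustrative only -- see the kvm_book3s_32.h / kvm_book3s_64.h
	 * patches in the list below for the real split.
	 */
	#include <linux/kvm_host.h>

	#ifdef CONFIG_PPC_BOOK3S_64
	#include <asm/kvm_book3s_64.h>	/* 64-bit host: SLB-based MMU */
	#else
	#include <asm/kvm_book3s_32.h>	/* 32-bit host: SR/BAT-based MMU */
	#endif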

Alexander Graf (27):
  KVM: PPC: Name generic 64-bit code generic
  KVM: PPC: Add host MMU Support
  KVM: PPC: Add SR swapping code
  KVM: PPC: Add generic segment switching code
  PPC: Split context init/destroy functions
  KVM: PPC: Add kvm_book3s_64.h
  KVM: PPC: Add kvm_book3s_32.h
  KVM: PPC: Add fields to shadow vcpu
  KVM: PPC: Improve indirect svcpu accessors
  KVM: PPC: Use KVM_BOOK3S_HANDLER
  KVM: PPC: Use CONFIG_PPC_BOOK3S define
  PPC: Add STLU
  KVM: PPC: Use now shadowed vcpu fields
  KVM: PPC: Extract MMU init
  KVM: PPC: Make real mode handler generic
  KVM: PPC: Make highmem code generic
  KVM: PPC: Make SLB switching code the new segment framework
  KVM: PPC: Release clean pages as clean
  KVM: PPC: Remove fetch fail code
  KVM: PPC: Add SVCPU to Book3S_32
  KVM: PPC: Emulate segment fault
  KVM: PPC: Add Book3S compatibility code
  KVM: PPC: Export MMU variables
  PPC: Export SWITCH_FRAME_SIZE
  KVM: PPC: Check max IRQ prio
  KVM: PPC: Add KVM intercept handlers
  KVM: PPC: Enable Book3S_32 KVM building

 arch/powerpc/include/asm/asm-compat.h        |    2 +
 arch/powerpc/include/asm/kvm_book3s.h        |  100 +++++-
 arch/powerpc/include/asm/kvm_book3s_32.h     |   42 ++
 arch/powerpc/include/asm/kvm_book3s_64.h     |   28 ++
 arch/powerpc/include/asm/kvm_book3s_64_asm.h |   76 ----
 arch/powerpc/include/asm/kvm_book3s_asm.h    |   97 +++++
 arch/powerpc/include/asm/kvm_booke.h         |   96 +++++
 arch/powerpc/include/asm/kvm_host.h          |   16 +-
 arch/powerpc/include/asm/kvm_ppc.h           |   80 +----
 arch/powerpc/include/asm/mmu_context.h       |    2 +
 arch/powerpc/include/asm/paca.h              |   10 +-
 arch/powerpc/include/asm/processor.h         |    3 +
 arch/powerpc/kernel/asm-offsets.c            |  102 +++--
 arch/powerpc/kernel/head_32.S                |   14 +
 arch/powerpc/kernel/head_64.S                |    4 +-
 arch/powerpc/kernel/ppc_ksyms.c              |    5 +
 arch/powerpc/kvm/Kconfig                     |   24 +-
 arch/powerpc/kvm/Makefile                    |   18 +-
 arch/powerpc/kvm/book3s.c                    |  184 ++++++---
 arch/powerpc/kvm/book3s_32_mmu.c             |    3 +
 arch/powerpc/kvm/book3s_32_mmu_host.c        |  480 ++++++++++++++++++++++
 arch/powerpc/kvm/book3s_32_sr.S              |  143 +++++++
 arch/powerpc/kvm/book3s_64_emulate.c         |  566 -------------------------
 arch/powerpc/kvm/book3s_64_exports.c         |   32 --
 arch/powerpc/kvm/book3s_64_interrupts.S      |  318 --------------
 arch/powerpc/kvm/book3s_64_mmu.c             |    2 +-
 arch/powerpc/kvm/book3s_64_mmu_host.c        |   50 ++-
 arch/powerpc/kvm/book3s_64_rmhandlers.S      |  195 ---------
 arch/powerpc/kvm/book3s_64_slb.S             |  183 ++-------
 arch/powerpc/kvm/book3s_emulate.c            |  570 ++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_exports.c            |   32 ++
 arch/powerpc/kvm/book3s_interrupts.S         |  319 ++++++++++++++
 arch/powerpc/kvm/book3s_paired_singles.c     |    2 +-
 arch/powerpc/kvm/book3s_rmhandlers.S         |  252 ++++++++++++
 arch/powerpc/kvm/book3s_segment.S            |  258 ++++++++++++
 arch/powerpc/kvm/emulate.c                   |   17 +-
 arch/powerpc/kvm/powerpc.c                   |    2 +-
 arch/powerpc/mm/mmu_context_hash32.c         |   29 +-
 38 files changed, 2771 insertions(+), 1585 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_32.h
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_64.h
 delete mode 100644 arch/powerpc/include/asm/kvm_book3s_64_asm.h
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_asm.h
 create mode 100644 arch/powerpc/include/asm/kvm_booke.h
 create mode 100644 arch/powerpc/kvm/book3s_32_mmu_host.c
 create mode 100644 arch/powerpc/kvm/book3s_32_sr.S
 delete mode 100644 arch/powerpc/kvm/book3s_64_emulate.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_exports.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_interrupts.S
 delete mode 100644 arch/powerpc/kvm/book3s_64_rmhandlers.S
 create mode 100644 arch/powerpc/kvm/book3s_emulate.c
 create mode 100644 arch/powerpc/kvm/book3s_exports.c
 create mode 100644 arch/powerpc/kvm/book3s_interrupts.S
 create mode 100644 arch/powerpc/kvm/book3s_rmhandlers.S
 create mode 100644 arch/powerpc/kvm/book3s_segment.S


* [PATCH 01/27] KVM: PPC: Name generic 64-bit code generic
From: Alexander Graf @ 2010-04-15 22:11 UTC
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We have quite a bit of code that can be used by Book3S_32 and Book3S_64 alike,
so let's call it "Book3S" instead of "Book3S_64", so that we can later use it
from the 32-bit port too.
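
(A note for reviewers: the diffstat below shows this pure code motion as
delete/create pairs. Regenerating the series with git's rename detection
enabled, e.g. git format-patch -M, would present the moves as renames, which
makes verifying that nothing changed in flight considerably easier.)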

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h        |    2 +-
 arch/powerpc/include/asm/kvm_book3s_64_asm.h |   76 ----
 arch/powerpc/include/asm/kvm_book3s_asm.h    |   76 ++++
 arch/powerpc/include/asm/paca.h              |    2 +-
 arch/powerpc/kernel/head_64.S                |    4 +-
 arch/powerpc/kvm/Makefile                    |    6 +-
 arch/powerpc/kvm/book3s_64_emulate.c         |  566 --------------------------
 arch/powerpc/kvm/book3s_64_exports.c         |   32 --
 arch/powerpc/kvm/book3s_64_interrupts.S      |  318 ---------------
 arch/powerpc/kvm/book3s_64_rmhandlers.S      |  195 ---------
 arch/powerpc/kvm/book3s_emulate.c            |  566 ++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_exports.c            |   32 ++
 arch/powerpc/kvm/book3s_interrupts.S         |  318 +++++++++++++++
 arch/powerpc/kvm/book3s_rmhandlers.S         |  195 +++++++++
 14 files changed, 1194 insertions(+), 1194 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/kvm_book3s_64_asm.h
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_asm.h
 delete mode 100644 arch/powerpc/kvm/book3s_64_emulate.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_exports.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_interrupts.S
 delete mode 100644 arch/powerpc/kvm/book3s_64_rmhandlers.S
 create mode 100644 arch/powerpc/kvm/book3s_emulate.c
 create mode 100644 arch/powerpc/kvm/book3s_exports.c
 create mode 100644 arch/powerpc/kvm/book3s_interrupts.S
 create mode 100644 arch/powerpc/kvm/book3s_rmhandlers.S

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ee79921..7670e2a 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -22,7 +22,7 @@
 
 #include <linux/types.h>
 #include <linux/kvm_host.h>
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 
 struct kvmppc_slb {
 	u64 esid;
diff --git a/arch/powerpc/include/asm/kvm_book3s_64_asm.h b/arch/powerpc/include/asm/kvm_book3s_64_asm.h
deleted file mode 100644
index 183461b..0000000
--- a/arch/powerpc/include/asm/kvm_book3s_64_asm.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
- */
-
-#ifndef __ASM_KVM_BOOK3S_ASM_H__
-#define __ASM_KVM_BOOK3S_ASM_H__
-
-#ifdef __ASSEMBLY__
-
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-
-#include <asm/kvm_asm.h>
-
-.macro DO_KVM intno
-	.if (\intno == BOOK3S_INTERRUPT_SYSTEM_RESET) || \
-	    (\intno == BOOK3S_INTERRUPT_MACHINE_CHECK) || \
-	    (\intno == BOOK3S_INTERRUPT_DATA_STORAGE) || \
-	    (\intno == BOOK3S_INTERRUPT_INST_STORAGE) || \
-	    (\intno == BOOK3S_INTERRUPT_DATA_SEGMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_INST_SEGMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_EXTERNAL) || \
-	    (\intno == BOOK3S_INTERRUPT_ALIGNMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_PROGRAM) || \
-	    (\intno == BOOK3S_INTERRUPT_FP_UNAVAIL) || \
-	    (\intno == BOOK3S_INTERRUPT_DECREMENTER) || \
-	    (\intno == BOOK3S_INTERRUPT_SYSCALL) || \
-	    (\intno == BOOK3S_INTERRUPT_TRACE) || \
-	    (\intno == BOOK3S_INTERRUPT_PERFMON) || \
-	    (\intno == BOOK3S_INTERRUPT_ALTIVEC) || \
-	    (\intno == BOOK3S_INTERRUPT_VSX)
-
-	b	kvmppc_trampoline_\intno
-kvmppc_resume_\intno:
-
-	.endif
-.endm
-
-#else
-
-.macro DO_KVM intno
-.endm
-
-#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
-
-#else  /*__ASSEMBLY__ */
-
-struct kvmppc_book3s_shadow_vcpu {
-	ulong gpr[14];
-	u32 cr;
-	u32 xer;
-	ulong host_r1;
-	ulong host_r2;
-	ulong handler;
-	ulong scratch0;
-	ulong scratch1;
-	ulong vmhandler;
-};
-
-#endif /*__ASSEMBLY__ */
-
-#endif /* __ASM_KVM_BOOK3S_ASM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
new file mode 100644
index 0000000..183461b
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -0,0 +1,76 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_ASM_H__
+#define __ASM_KVM_BOOK3S_ASM_H__
+
+#ifdef __ASSEMBLY__
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+
+#include <asm/kvm_asm.h>
+
+.macro DO_KVM intno
+	.if (\intno == BOOK3S_INTERRUPT_SYSTEM_RESET) || \
+	    (\intno == BOOK3S_INTERRUPT_MACHINE_CHECK) || \
+	    (\intno == BOOK3S_INTERRUPT_DATA_STORAGE) || \
+	    (\intno == BOOK3S_INTERRUPT_INST_STORAGE) || \
+	    (\intno == BOOK3S_INTERRUPT_DATA_SEGMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_INST_SEGMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_EXTERNAL) || \
+	    (\intno == BOOK3S_INTERRUPT_ALIGNMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_PROGRAM) || \
+	    (\intno == BOOK3S_INTERRUPT_FP_UNAVAIL) || \
+	    (\intno == BOOK3S_INTERRUPT_DECREMENTER) || \
+	    (\intno == BOOK3S_INTERRUPT_SYSCALL) || \
+	    (\intno == BOOK3S_INTERRUPT_TRACE) || \
+	    (\intno == BOOK3S_INTERRUPT_PERFMON) || \
+	    (\intno == BOOK3S_INTERRUPT_ALTIVEC) || \
+	    (\intno == BOOK3S_INTERRUPT_VSX)
+
+	b	kvmppc_trampoline_\intno
+kvmppc_resume_\intno:
+
+	.endif
+.endm
+
+#else
+
+.macro DO_KVM intno
+.endm
+
+#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
+
+#else  /*__ASSEMBLY__ */
+
+struct kvmppc_book3s_shadow_vcpu {
+	ulong gpr[14];
+	u32 cr;
+	u32 xer;
+	ulong host_r1;
+	ulong host_r2;
+	ulong handler;
+	ulong scratch0;
+	ulong scratch1;
+	ulong vmhandler;
+};
+
+#endif /*__ASSEMBLY__ */
+
+#endif /* __ASM_KVM_BOOK3S_ASM_H__ */
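
(For readers new to this file: the BOOK3S_INTERRUPT_* constants are simply the
exception vector offsets, and the call sites pass the vector number of the
exception they sit in. The .if chain therefore makes DO_KVM expand to a branch
into the KVM trampoline only at the vectors KVM intercepts; everywhere else
the macro emits nothing and the stock Linux handler runs untouched.)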
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index a011603..dc3ccdf 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -23,7 +23,7 @@
 #include <asm/page.h>
 #include <asm/exception-64e.h>
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 #endif
 
 register struct paca_struct *local_paca asm("r13");
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index bed9a29..844a44b 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -37,7 +37,7 @@
 #include <asm/firmware.h>
 #include <asm/page_64.h>
 #include <asm/irqflags.h>
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 
 /* The physical memory is layed out such that the secondary processor
  * spin code sits at 0x0000...0x00ff. On server, the vectors follow
@@ -169,7 +169,7 @@ exception_marker:
 /* KVM trampoline code needs to be close to the interrupt handlers */
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#include "../kvm/book3s_64_rmhandlers.S"
+#include "../kvm/book3s_rmhandlers.S"
 #endif
 
 _GLOBAL(generic_secondary_thread_init)
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index eba721e..0a67310 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -14,7 +14,7 @@ CFLAGS_emulate.o  := -I.
 
 common-objs-y += powerpc.o emulate.o
 obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
-obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_64_exports.o
+obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_exports.o
 
 AFLAGS_booke_interrupts.o := -I$(obj)
 
@@ -43,8 +43,8 @@ kvm-book3s_64-objs := \
 	fpu.o \
 	book3s_paired_singles.o \
 	book3s.o \
-	book3s_64_emulate.o \
-	book3s_64_interrupts.o \
+	book3s_emulate.o \
+	book3s_interrupts.o \
 	book3s_64_mmu_host.o \
 	book3s_64_mmu.o \
 	book3s_32_mmu.o
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
deleted file mode 100644
index 8f50776..0000000
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ /dev/null
@@ -1,566 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
- */
-
-#include <asm/kvm_ppc.h>
-#include <asm/disassemble.h>
-#include <asm/kvm_book3s.h>
-#include <asm/reg.h>
-
-#define OP_19_XOP_RFID		18
-#define OP_19_XOP_RFI		50
-
-#define OP_31_XOP_MFMSR		83
-#define OP_31_XOP_MTMSR		146
-#define OP_31_XOP_MTMSRD	178
-#define OP_31_XOP_MTSR		210
-#define OP_31_XOP_MTSRIN	242
-#define OP_31_XOP_TLBIEL	274
-#define OP_31_XOP_TLBIE		306
-#define OP_31_XOP_SLBMTE	402
-#define OP_31_XOP_SLBIE		434
-#define OP_31_XOP_SLBIA		498
-#define OP_31_XOP_MFSR		595
-#define OP_31_XOP_MFSRIN	659
-#define OP_31_XOP_DCBA		758
-#define OP_31_XOP_SLBMFEV	851
-#define OP_31_XOP_EIOIO		854
-#define OP_31_XOP_SLBMFEE	915
-
-/* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
-#define OP_31_XOP_DCBZ		1010
-
-#define OP_LFS			48
-#define OP_LFD			50
-#define OP_STFS			52
-#define OP_STFD			54
-
-#define SPRN_GQR0		912
-#define SPRN_GQR1		913
-#define SPRN_GQR2		914
-#define SPRN_GQR3		915
-#define SPRN_GQR4		916
-#define SPRN_GQR5		917
-#define SPRN_GQR6		918
-#define SPRN_GQR7		919
-
-int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                           unsigned int inst, int *advance)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (get_op(inst)) {
-	case 19:
-		switch (get_xop(inst)) {
-		case OP_19_XOP_RFID:
-		case OP_19_XOP_RFI:
-			vcpu->arch.pc = vcpu->arch.srr0;
-			kvmppc_set_msr(vcpu, vcpu->arch.srr1);
-			*advance = 0;
-			break;
-
-		default:
-			emulated = EMULATE_FAIL;
-			break;
-		}
-		break;
-	case 31:
-		switch (get_xop(inst)) {
-		case OP_31_XOP_MFMSR:
-			kvmppc_set_gpr(vcpu, get_rt(inst), vcpu->arch.msr);
-			break;
-		case OP_31_XOP_MTMSRD:
-		{
-			ulong rs = kvmppc_get_gpr(vcpu, get_rs(inst));
-			if (inst & 0x10000) {
-				vcpu->arch.msr &= ~(MSR_RI | MSR_EE);
-				vcpu->arch.msr |= rs & (MSR_RI | MSR_EE);
-			} else
-				kvmppc_set_msr(vcpu, rs);
-			break;
-		}
-		case OP_31_XOP_MTMSR:
-			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_MFSR:
-		{
-			int srnum;
-
-			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
-			if (vcpu->arch.mmu.mfsrin) {
-				u32 sr;
-				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
-				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
-			}
-			break;
-		}
-		case OP_31_XOP_MFSRIN:
-		{
-			int srnum;
-
-			srnum = (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf;
-			if (vcpu->arch.mmu.mfsrin) {
-				u32 sr;
-				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
-				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
-			}
-			break;
-		}
-		case OP_31_XOP_MTSR:
-			vcpu->arch.mmu.mtsrin(vcpu,
-				(inst >> 16) & 0xf,
-				kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_MTSRIN:
-			vcpu->arch.mmu.mtsrin(vcpu,
-				(kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
-				kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_TLBIE:
-		case OP_31_XOP_TLBIEL:
-		{
-			bool large = (inst & 0x00200000) ? true : false;
-			ulong addr = kvmppc_get_gpr(vcpu, get_rb(inst));
-			vcpu->arch.mmu.tlbie(vcpu, addr, large);
-			break;
-		}
-		case OP_31_XOP_EIOIO:
-			break;
-		case OP_31_XOP_SLBMTE:
-			if (!vcpu->arch.mmu.slbmte)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbmte(vcpu,
-					kvmppc_get_gpr(vcpu, get_rs(inst)),
-					kvmppc_get_gpr(vcpu, get_rb(inst)));
-			break;
-		case OP_31_XOP_SLBIE:
-			if (!vcpu->arch.mmu.slbie)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbie(vcpu,
-					kvmppc_get_gpr(vcpu, get_rb(inst)));
-			break;
-		case OP_31_XOP_SLBIA:
-			if (!vcpu->arch.mmu.slbia)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbia(vcpu);
-			break;
-		case OP_31_XOP_SLBMFEE:
-			if (!vcpu->arch.mmu.slbmfee) {
-				emulated = EMULATE_FAIL;
-			} else {
-				ulong t, rb;
-
-				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-				t = vcpu->arch.mmu.slbmfee(vcpu, rb);
-				kvmppc_set_gpr(vcpu, get_rt(inst), t);
-			}
-			break;
-		case OP_31_XOP_SLBMFEV:
-			if (!vcpu->arch.mmu.slbmfev) {
-				emulated = EMULATE_FAIL;
-			} else {
-				ulong t, rb;
-
-				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-				t = vcpu->arch.mmu.slbmfev(vcpu, rb);
-				kvmppc_set_gpr(vcpu, get_rt(inst), t);
-			}
-			break;
-		case OP_31_XOP_DCBA:
-			/* Gets treated as NOP */
-			break;
-		case OP_31_XOP_DCBZ:
-		{
-			ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-			ulong ra = 0;
-			ulong addr, vaddr;
-			u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
-			u32 dsisr;
-			int r;
-
-			if (get_ra(inst))
-				ra = kvmppc_get_gpr(vcpu, get_ra(inst));
-
-			addr = (ra + rb) & ~31ULL;
-			if (!(vcpu->arch.msr & MSR_SF))
-				addr &= 0xffffffff;
-			vaddr = addr;
-
-			r = kvmppc_st(vcpu, &addr, 32, zeros, true);
-			if ((r == -ENOENT) || (r == -EPERM)) {
-				*advance = 0;
-				vcpu->arch.dear = vaddr;
-				vcpu->arch.fault_dear = vaddr;
-
-				dsisr = DSISR_ISSTORE;
-				if (r == -ENOENT)
-					dsisr |= DSISR_NOHPTE;
-				else if (r == -EPERM)
-					dsisr |= DSISR_PROTFAULT;
-
-				to_book3s(vcpu)->dsisr = dsisr;
-				vcpu->arch.fault_dsisr = dsisr;
-
-				kvmppc_book3s_queue_irqprio(vcpu,
-					BOOK3S_INTERRUPT_DATA_STORAGE);
-			}
-
-			break;
-		}
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-	default:
-		emulated = EMULATE_FAIL;
-	}
-
-	if (emulated == EMULATE_FAIL)
-		emulated = kvmppc_emulate_paired_single(run, vcpu);
-
-	return emulated;
-}
-
-void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
-                    u32 val)
-{
-	if (upper) {
-		/* Upper BAT */
-		u32 bl = (val >> 2) & 0x7ff;
-		bat->bepi_mask = (~bl << 17);
-		bat->bepi = val & 0xfffe0000;
-		bat->vs = (val & 2) ? 1 : 0;
-		bat->vp = (val & 1) ? 1 : 0;
-		bat->raw = (bat->raw & 0xffffffff00000000ULL) | val;
-	} else {
-		/* Lower BAT */
-		bat->brpn = val & 0xfffe0000;
-		bat->wimg = (val >> 3) & 0xf;
-		bat->pp = val & 3;
-		bat->raw = (bat->raw & 0x00000000ffffffffULL) | ((u64)val << 32);
-	}
-}
-
-static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	if (sprn % 2)
-		return bat->raw >> 32;
-	else
-		return bat->raw;
-}
-
-static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
-}
-
-int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
-{
-	int emulated = EMULATE_DONE;
-	ulong spr_val = kvmppc_get_gpr(vcpu, rs);
-
-	switch (sprn) {
-	case SPRN_SDR1:
-		to_book3s(vcpu)->sdr1 = spr_val;
-		break;
-	case SPRN_DSISR:
-		to_book3s(vcpu)->dsisr = spr_val;
-		break;
-	case SPRN_DAR:
-		vcpu->arch.dear = spr_val;
-		break;
-	case SPRN_HIOR:
-		to_book3s(vcpu)->hior = spr_val;
-		break;
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
-		/* BAT writes happen so rarely that we're ok to flush
-		 * everything here */
-		kvmppc_mmu_pte_flush(vcpu, 0, 0);
-		kvmppc_mmu_flush_segments(vcpu);
-		break;
-	case SPRN_HID0:
-		to_book3s(vcpu)->hid[0] = spr_val;
-		break;
-	case SPRN_HID1:
-		to_book3s(vcpu)->hid[1] = spr_val;
-		break;
-	case SPRN_HID2:
-		to_book3s(vcpu)->hid[2] = spr_val;
-		break;
-	case SPRN_HID2_GEKKO:
-		to_book3s(vcpu)->hid[2] = spr_val;
-		/* HID2.PSE controls paired single on gekko */
-		switch (vcpu->arch.pvr) {
-		case 0x00080200:	/* lonestar 2.0 */
-		case 0x00088202:	/* lonestar 2.2 */
-		case 0x70000100:	/* gekko 1.0 */
-		case 0x00080100:	/* gekko 2.0 */
-		case 0x00083203:	/* gekko 2.3a */
-		case 0x00083213:	/* gekko 2.3b */
-		case 0x00083204:	/* gekko 2.4 */
-		case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
-			if (spr_val & (1 << 29)) { /* HID2.PSE */
-				vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
-				kvmppc_giveup_ext(vcpu, MSR_FP);
-			} else {
-				vcpu->arch.hflags &= ~BOOK3S_HFLAG_PAIRED_SINGLE;
-			}
-			break;
-		}
-		break;
-	case SPRN_HID4:
-	case SPRN_HID4_GEKKO:
-		to_book3s(vcpu)->hid[4] = spr_val;
-		break;
-	case SPRN_HID5:
-		to_book3s(vcpu)->hid[5] = spr_val;
-		/* guest HID5 set can change is_dcbz32 */
-		if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
-		    (mfmsr() & MSR_HV))
-			vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
-		break;
-	case SPRN_GQR0:
-	case SPRN_GQR1:
-	case SPRN_GQR2:
-	case SPRN_GQR3:
-	case SPRN_GQR4:
-	case SPRN_GQR5:
-	case SPRN_GQR6:
-	case SPRN_GQR7:
-		to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
-		break;
-	case SPRN_ICTC:
-	case SPRN_THRM1:
-	case SPRN_THRM2:
-	case SPRN_THRM3:
-	case SPRN_CTRLF:
-	case SPRN_CTRLT:
-	case SPRN_L2CR:
-	case SPRN_MMCR0_GEKKO:
-	case SPRN_MMCR1_GEKKO:
-	case SPRN_PMC1_GEKKO:
-	case SPRN_PMC2_GEKKO:
-	case SPRN_PMC3_GEKKO:
-	case SPRN_PMC4_GEKKO:
-	case SPRN_WPAR_GEKKO:
-		break;
-	default:
-		printk(KERN_INFO "KVM: invalid SPR write: %d\n", sprn);
-#ifndef DEBUG_SPR
-		emulated = EMULATE_FAIL;
-#endif
-		break;
-	}
-
-	return emulated;
-}
-
-int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
-		break;
-	case SPRN_SDR1:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
-		break;
-	case SPRN_DSISR:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->dsisr);
-		break;
-	case SPRN_DAR:
-		kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear);
-		break;
-	case SPRN_HIOR:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hior);
-		break;
-	case SPRN_HID0:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[0]);
-		break;
-	case SPRN_HID1:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
-		break;
-	case SPRN_HID2:
-	case SPRN_HID2_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
-		break;
-	case SPRN_HID4:
-	case SPRN_HID4_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
-		break;
-	case SPRN_HID5:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
-		break;
-	case SPRN_GQR0:
-	case SPRN_GQR1:
-	case SPRN_GQR2:
-	case SPRN_GQR3:
-	case SPRN_GQR4:
-	case SPRN_GQR5:
-	case SPRN_GQR6:
-	case SPRN_GQR7:
-		kvmppc_set_gpr(vcpu, rt,
-			       to_book3s(vcpu)->gqr[sprn - SPRN_GQR0]);
-		break;
-	case SPRN_THRM1:
-	case SPRN_THRM2:
-	case SPRN_THRM3:
-	case SPRN_CTRLF:
-	case SPRN_CTRLT:
-	case SPRN_L2CR:
-	case SPRN_MMCR0_GEKKO:
-	case SPRN_MMCR1_GEKKO:
-	case SPRN_PMC1_GEKKO:
-	case SPRN_PMC2_GEKKO:
-	case SPRN_PMC3_GEKKO:
-	case SPRN_PMC4_GEKKO:
-	case SPRN_WPAR_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, 0);
-		break;
-	default:
-		printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn);
-#ifndef DEBUG_SPR
-		emulated = EMULATE_FAIL;
-#endif
-		break;
-	}
-
-	return emulated;
-}
-
-u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
-{
-	u32 dsisr = 0;
-
-	/*
-	 * This is what the spec says about DSISR bits (not mentioned = 0):
-	 *
-	 * 12:13		[DS]	Set to bits 30:31
-	 * 15:16		[X]	Set to bits 29:30
-	 * 17			[X]	Set to bit 25
-	 *			[D/DS]	Set to bit 5
-	 * 18:21		[X]	Set to bits 21:24
-	 *			[D/DS]	Set to bits 1:4
-	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
-	 * 27:31			Set to bits 11:15 (RA)
-	 */
-
-	switch (get_op(inst)) {
-	/* D-form */
-	case OP_LFS:
-	case OP_LFD:
-	case OP_STFD:
-	case OP_STFS:
-		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
-		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
-		break;
-	/* X-form */
-	case 31:
-		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
-		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
-		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
-		break;
-	default:
-		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
-		break;
-	}
-
-	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
-
-	return dsisr;
-}
-
-ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst)
-{
-	ulong dar = 0;
-	ulong ra;
-
-	switch (get_op(inst)) {
-	case OP_LFS:
-	case OP_LFD:
-	case OP_STFD:
-	case OP_STFS:
-		ra = get_ra(inst);
-		if (ra)
-			dar = kvmppc_get_gpr(vcpu, ra);
-		dar += (s32)((s16)inst);
-		break;
-	case 31:
-		ra = get_ra(inst);
-		if (ra)
-			dar = kvmppc_get_gpr(vcpu, ra);
-		dar += kvmppc_get_gpr(vcpu, get_rb(inst));
-		break;
-	default:
-		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
-		break;
-	}
-
-	return dar;
-}
diff --git a/arch/powerpc/kvm/book3s_64_exports.c b/arch/powerpc/kvm/book3s_64_exports.c
deleted file mode 100644
index 1dd5a1d..0000000
--- a/arch/powerpc/kvm/book3s_64_exports.c
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
- */
-
-#include <linux/module.h>
-#include <asm/kvm_book3s.h>
-
-EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
-EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
-EXPORT_SYMBOL_GPL(kvmppc_rmcall);
-EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
-#ifdef CONFIG_ALTIVEC
-EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
-#endif
-#ifdef CONFIG_VSX
-EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
-#endif
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
deleted file mode 100644
index faca876..0000000
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ /dev/null
@@ -1,318 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
- */
-
-#include <asm/ppc_asm.h>
-#include <asm/kvm_asm.h>
-#include <asm/reg.h>
-#include <asm/page.h>
-#include <asm/asm-offsets.h>
-#include <asm/exception-64s.h>
-
-#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
-#define ULONG_SIZE 8
-#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
-
-.macro DISABLE_INTERRUPTS
-       mfmsr   r0
-       rldicl  r0,r0,48,1
-       rotldi  r0,r0,16
-       mtmsrd  r0,1
-.endm
-
-#define VCPU_LOAD_NVGPRS(vcpu) \
-	ld	r14, VCPU_GPR(r14)(vcpu); \
-	ld	r15, VCPU_GPR(r15)(vcpu); \
-	ld	r16, VCPU_GPR(r16)(vcpu); \
-	ld	r17, VCPU_GPR(r17)(vcpu); \
-	ld	r18, VCPU_GPR(r18)(vcpu); \
-	ld	r19, VCPU_GPR(r19)(vcpu); \
-	ld	r20, VCPU_GPR(r20)(vcpu); \
-	ld	r21, VCPU_GPR(r21)(vcpu); \
-	ld	r22, VCPU_GPR(r22)(vcpu); \
-	ld	r23, VCPU_GPR(r23)(vcpu); \
-	ld	r24, VCPU_GPR(r24)(vcpu); \
-	ld	r25, VCPU_GPR(r25)(vcpu); \
-	ld	r26, VCPU_GPR(r26)(vcpu); \
-	ld	r27, VCPU_GPR(r27)(vcpu); \
-	ld	r28, VCPU_GPR(r28)(vcpu); \
-	ld	r29, VCPU_GPR(r29)(vcpu); \
-	ld	r30, VCPU_GPR(r30)(vcpu); \
-	ld	r31, VCPU_GPR(r31)(vcpu); \
-
-/*****************************************************************************
- *                                                                           *
- *     Guest entry / exit code that is in kernel module memory (highmem)     *
- *                                                                           *
- ****************************************************************************/
-
-/* Registers:
- *  r3: kvm_run pointer
- *  r4: vcpu pointer
- */
-_GLOBAL(__kvmppc_vcpu_entry)
-
-kvm_start_entry:
-	/* Write correct stack frame */
-	mflr    r0
-	std     r0,16(r1)
-
-	/* Save host state to the stack */
-	stdu	r1, -SWITCH_FRAME_SIZE(r1)
-
-	/* Save r3 (kvm_run) and r4 (vcpu) */
-	SAVE_2GPRS(3, r1)
-
-	/* Save non-volatile registers (r14 - r31) */
-	SAVE_NVGPRS(r1)
-
-	/* Save LR */
-	std	r0, _LINK(r1)
-
-	/* Load non-volatile guest state from the vcpu */
-	VCPU_LOAD_NVGPRS(r4)
-
-	/* Save R1/R2 in the PACA */
-	std	r1, PACA_KVM_HOST_R1(r13)
-	std	r2, PACA_KVM_HOST_R2(r13)
-
-	/* XXX swap in/out on load? */
-	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
-	std	r3, PACA_KVM_VMHANDLER(r13)
-
-kvm_start_lightweight:
-
-	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
-	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
-
-	/* Load some guest state in the respective registers */
-	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
-					/* will be swapped in by rmcall */
-
-	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
-	mtlr	r3			/* LR = r3 */
-
-	DISABLE_INTERRUPTS
-
-	/* Some guests may need to have dcbz set to 32 byte length.
-	 *
-	 * Usually we ensure that by patching the guest's instructions
-	 * to trap on dcbz and emulate it in the hypervisor.
-	 *
-	 * If we can, we should tell the CPU to use 32 byte dcbz though,
-	 * because that's a lot faster.
-	 */
-
-	ld	r3, VCPU_HFLAGS(r4)
-	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
-	beq	no_dcbz32_on
-
-	mfspr   r3,SPRN_HID5
-	ori     r3, r3, 0x80		/* XXX HID5_dcbz32 = 0x80 */
-	mtspr   SPRN_HID5,r3
-
-no_dcbz32_on:
-
-	ld	r6, VCPU_RMCALL(r4)
-	mtctr	r6
-
-	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
-	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
-
-	/* Jump to SLB patching handlder and into our guest */
-	bctr
-
-/*
- * This is the handler in module memory. It gets jumped at from the
- * lowmem trampoline code, so it's basically the guest exit code.
- *
- */
-
-.global kvmppc_handler_highmem
-kvmppc_handler_highmem:
-
-	/*
-	 * Register usage at this point:
-	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
-	 *
-	 */
-
-	/* R7 = vcpu */
-	ld	r7, GPR4(r1)
-
-	/* Now save the guest state */
-
-	stw	r0, VCPU_LAST_INST(r7)
-
-	std	r3, VCPU_PC(r7)
-	std	r4, VCPU_SHADOW_SRR1(r7)
-	std	r5, VCPU_FAULT_DEAR(r7)
-	stw	r6, VCPU_FAULT_DSISR(r7)
-
-	ld	r5, VCPU_HFLAGS(r7)
-	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
-	beq	no_dcbz32_off
-
-	li	r4, 0
-	mfspr   r5,SPRN_HID5
-	rldimi  r5,r4,6,56
-	mtspr   SPRN_HID5,r5
-
-no_dcbz32_off:
-
-	std	r14, VCPU_GPR(r14)(r7)
-	std	r15, VCPU_GPR(r15)(r7)
-	std	r16, VCPU_GPR(r16)(r7)
-	std	r17, VCPU_GPR(r17)(r7)
-	std	r18, VCPU_GPR(r18)(r7)
-	std	r19, VCPU_GPR(r19)(r7)
-	std	r20, VCPU_GPR(r20)(r7)
-	std	r21, VCPU_GPR(r21)(r7)
-	std	r22, VCPU_GPR(r22)(r7)
-	std	r23, VCPU_GPR(r23)(r7)
-	std	r24, VCPU_GPR(r24)(r7)
-	std	r25, VCPU_GPR(r25)(r7)
-	std	r26, VCPU_GPR(r26)(r7)
-	std	r27, VCPU_GPR(r27)(r7)
-	std	r28, VCPU_GPR(r28)(r7)
-	std	r29, VCPU_GPR(r29)(r7)
-	std	r30, VCPU_GPR(r30)(r7)
-	std	r31, VCPU_GPR(r31)(r7)
-
-	/* Save guest CTR */
-	mfctr	r5
-	std	r5, VCPU_CTR(r7)
-
-	/* Save guest LR */
-	mflr	r5
-	std	r5, VCPU_LR(r7)
-
-	/* Restore host msr -> SRR1 */
-	ld	r6, VCPU_HOST_MSR(r7)
-
-	/*
-	 * For some interrupts, we need to call the real Linux
-	 * handler, so it can do work for us. This has to happen
-	 * as if the interrupt arrived from the kernel though,
-	 * so let's fake it here where most state is restored.
-	 *
-	 * Call Linux for hardware interrupts/decrementer
-	 * r3 = address of interrupt handler (exit reason)
-	 */
-
-	cmpwi	r12, BOOK3S_INTERRUPT_EXTERNAL
-	beq	call_linux_handler
-	cmpwi	r12, BOOK3S_INTERRUPT_DECREMENTER
-	beq	call_linux_handler
-
-	/* Back to EE=1 */
-	mtmsr	r6
-	b	kvm_return_point
-
-call_linux_handler:
-
-	/*
-	 * If we land here we need to jump back to the handler we
-	 * came from.
-	 *
-	 * We have a page that we can access from real mode, so let's
-	 * jump back to that and use it as a trampoline to get back into the
-	 * interrupt handler!
-	 *
-	 * R3 still contains the exit code,
-	 * R5 VCPU_HOST_RETIP and
-	 * R6 VCPU_HOST_MSR
-	 */
-
-	/* Restore host IP -> SRR0 */
-	ld	r5, VCPU_HOST_RETIP(r7)
-
-	/* XXX Better move to a safe function?
-	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
-
-	mtlr	r12
-
-	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
-	mtsrr0	r4
-	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
-	mtsrr1	r3
-
-	RFI
-
-.global kvm_return_point
-kvm_return_point:
-
-	/* Jump back to lightweight entry if we're supposed to */
-	/* go back into the guest */
-
-	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
-	mr	r5, r12
-
-	/* Restore r3 (kvm_run) and r4 (vcpu) */
-	REST_2GPRS(3, r1)
-	bl	KVMPPC_HANDLE_EXIT
-
-	/* If RESUME_GUEST, get back in the loop */
-	cmpwi	r3, RESUME_GUEST
-	beq	kvm_loop_lightweight
-
-	cmpwi	r3, RESUME_GUEST_NV
-	beq	kvm_loop_heavyweight
-
-kvm_exit_loop:
-
-	ld	r4, _LINK(r1)
-	mtlr	r4
-
-	/* Restore non-volatile host registers (r14 - r31) */
-	REST_NVGPRS(r1)
-
-	addi    r1, r1, SWITCH_FRAME_SIZE
-	blr
-
-kvm_loop_heavyweight:
-
-	ld	r4, _LINK(r1)
-	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
-
-	/* Load vcpu and cpu_run */
-	REST_2GPRS(3, r1)
-
-	/* Load non-volatile guest state from the vcpu */
-	VCPU_LOAD_NVGPRS(r4)
-
-	/* Jump back into the beginning of this function */
-	b	kvm_start_lightweight
-
-kvm_loop_lightweight:
-
-	/* We'll need the vcpu pointer */
-	REST_GPR(4, r1)
-
-	/* Jump back into the beginning of this function */
-	b	kvm_start_lightweight
-
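
(Aside on the DISABLE_INTERRUPTS macro in the file above: the rldicl/rotldi
pair clears MSR[EE] without needing a register to hold a mask. Rotating left
by 48 moves the EE bit (0x8000) into the most significant position, where the
mask-begin operand of 1 clears it, and rotating by a further 16 bits completes
the full 64-bit rotation, restoring the original layout. A minimal userspace
sketch of the same bit manipulation, assuming nothing beyond standard C:)

	/* sketch: the rldicl/rotldi EE-clearing trick from above, in C */
	#include <stdint.h>
	#include <stdio.h>

	static uint64_t rotl64(uint64_t x, unsigned int n)
	{
		return (x << n) | (x >> (64 - n));	/* n is 16 or 48 here */
	}

	int main(void)
	{
		uint64_t msr = 0x9032;			/* EE (0x8000) is set */
		uint64_t t;

		t = rotl64(msr, 48) & ~(1ULL << 63);	/* rldicl t,msr,48,1 */
		t = rotl64(t, 16);			/* rotldi t,t,16 */
		printf("0x%llx\n", (unsigned long long)t); /* prints 0x1032 */
		return 0;
	}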
diff --git a/arch/powerpc/kvm/book3s_64_rmhandlers.S b/arch/powerpc/kvm/book3s_64_rmhandlers.S
deleted file mode 100644
index bd08535..0000000
--- a/arch/powerpc/kvm/book3s_64_rmhandlers.S
+++ /dev/null
@@ -1,195 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
- */
-
-#include <asm/ppc_asm.h>
-#include <asm/kvm_asm.h>
-#include <asm/reg.h>
-#include <asm/page.h>
-#include <asm/asm-offsets.h>
-#include <asm/exception-64s.h>
-
-/*****************************************************************************
- *                                                                           *
- *        Real Mode handlers that need to be in low physical memory          *
- *                                                                           *
- ****************************************************************************/
-
-
-.macro INTERRUPT_TRAMPOLINE intno
-
-.global kvmppc_trampoline_\intno
-kvmppc_trampoline_\intno:
-
-	mtspr	SPRN_SPRG_SCRATCH0, r13		/* Save r13 */
-
-	/*
-	 * First thing to do is to find out if we're coming
-	 * from a KVM guest or a Linux process.
-	 *
-	 * To distinguish, we check a magic byte in the PACA
-	 */
-	mfspr	r13, SPRN_SPRG_PACA		/* r13 = PACA */
-	std	r12, PACA_KVM_SCRATCH0(r13)
-	mfcr	r12
-	stw	r12, PACA_KVM_SCRATCH1(r13)
-	lbz	r12, PACA_KVM_IN_GUEST(r13)
-	cmpwi	r12, KVM_GUEST_MODE_NONE
-	bne	..kvmppc_handler_hasmagic_\intno
-	/* No KVM guest? Then jump back to the Linux handler! */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
-	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
-	mfspr	r13, SPRN_SPRG_SCRATCH0		/* r13 = original r13 */
-	b	kvmppc_resume_\intno		/* Get back original handler */
-
-	/* Now we know we're handling a KVM guest */
-..kvmppc_handler_hasmagic_\intno:
-
-	/* Should we just skip the faulting instruction? */
-	cmpwi	r12, KVM_GUEST_MODE_SKIP
-	beq	kvmppc_handler_skip_ins
-
-	/* Let's store which interrupt we're handling */
-	li	r12, \intno
-
-	/* Jump into the SLB exit code that goes to the highmem handler */
-	b	kvmppc_handler_trampoline_exit
-
-.endm
-
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSTEM_RESET
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_MACHINE_CHECK
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_EXTERNAL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALIGNMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PROGRAM
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_FP_UNAVAIL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DECREMENTER
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSCALL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_TRACE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PERFMON
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALTIVEC
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
-
-/*
- * Bring us back to the faulting code, but skip the
- * faulting instruction.
- *
- * This is a generic exit path from the interrupt
- * trampolines above.
- *
- * Input Registers:
- *
- * R12               = free
- * R13               = PACA
- * PACA.KVM.SCRATCH0 = guest R12
- * PACA.KVM.SCRATCH1 = guest CR
- * SPRG_SCRATCH0     = guest R13
- *
- */
-kvmppc_handler_skip_ins:
-
-	/* Patch the IP to the next instruction */
-	mfsrr0	r12
-	addi	r12, r12, 4
-	mtsrr0	r12
-
-	/* Clean up all state */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
-	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
-	mfspr	r13, SPRN_SPRG_SCRATCH0
-
-	/* And get back into the code */
-	RFI
-
-/*
- * This trampoline brings us back to a real mode handler
- *
- * Input Registers:
- *
- * R5 = SRR0
- * R6 = SRR1
- * LR = real-mode IP
- *
- */
-.global kvmppc_handler_lowmem_trampoline
-kvmppc_handler_lowmem_trampoline:
-
-	mtsrr0	r5
-	mtsrr1	r6
-	blr
-kvmppc_handler_lowmem_trampoline_end:
-
-/*
- * Call a function in real mode
- *
- * Input Registers:
- *
- * R3 = function
- * R4 = MSR
- * R5 = CTR
- *
- */
-_GLOBAL(kvmppc_rmcall)
-	mtmsr	r4		/* Disable relocation, so mtsrr
-				   doesn't get interrupted */
-	mtctr	r5
-	mtsrr0	r3
-	mtsrr1	r4
-	RFI
-
-/*
- * Activate current's external feature (FPU/Altivec/VSX)
- */
-#define define_load_up(what) 				\
-							\
-_GLOBAL(kvmppc_load_up_ ## what);			\
-	stdu	r1, -INT_FRAME_SIZE(r1);		\
-	mflr	r3;					\
-	std	r3, _LINK(r1);				\
-							\
-	bl	.load_up_ ## what;			\
-							\
-	ld	r3, _LINK(r1);				\
-	mtlr	r3;					\
-	addi	r1, r1, INT_FRAME_SIZE;			\
-	blr
-
-define_load_up(fpu)
-#ifdef CONFIG_ALTIVEC
-define_load_up(altivec)
-#endif
-#ifdef CONFIG_VSX
-define_load_up(vsx)
-#endif
-
-.global kvmppc_trampoline_lowmem
-kvmppc_trampoline_lowmem:
-	.long kvmppc_handler_lowmem_trampoline - _stext
-
-.global kvmppc_trampoline_enter
-kvmppc_trampoline_enter:
-	.long kvmppc_handler_trampoline_enter - _stext
-
-#include "book3s_64_slb.S"
-
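
(One more note on the file above: kvmppc_trampoline_lowmem and
kvmppc_trampoline_enter hold the handler addresses as offsets from _stext
rather than as absolute symbols -- presumably because these entry points are
branched to with the MMU off, where a physical address is what's needed, and
an offset from the start of the kernel image serves as one when the kernel
runs at physical zero.)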
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
new file mode 100644
index 0000000..8f50776
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -0,0 +1,566 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
+ */
+
+#include <asm/kvm_ppc.h>
+#include <asm/disassemble.h>
+#include <asm/kvm_book3s.h>
+#include <asm/reg.h>
+
+#define OP_19_XOP_RFID		18
+#define OP_19_XOP_RFI		50
+
+#define OP_31_XOP_MFMSR		83
+#define OP_31_XOP_MTMSR		146
+#define OP_31_XOP_MTMSRD	178
+#define OP_31_XOP_MTSR		210
+#define OP_31_XOP_MTSRIN	242
+#define OP_31_XOP_TLBIEL	274
+#define OP_31_XOP_TLBIE		306
+#define OP_31_XOP_SLBMTE	402
+#define OP_31_XOP_SLBIE		434
+#define OP_31_XOP_SLBIA		498
+#define OP_31_XOP_MFSR		595
+#define OP_31_XOP_MFSRIN	659
+#define OP_31_XOP_DCBA		758
+#define OP_31_XOP_SLBMFEV	851
+#define OP_31_XOP_EIOIO		854
+#define OP_31_XOP_SLBMFEE	915
+
+/* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
+#define OP_31_XOP_DCBZ		1010
+
+#define OP_LFS			48
+#define OP_LFD			50
+#define OP_STFS			52
+#define OP_STFD			54
+
+#define SPRN_GQR0		912
+#define SPRN_GQR1		913
+#define SPRN_GQR2		914
+#define SPRN_GQR3		915
+#define SPRN_GQR4		916
+#define SPRN_GQR5		917
+#define SPRN_GQR6		918
+#define SPRN_GQR7		919
+
+int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
+                           unsigned int inst, int *advance)
+{
+	int emulated = EMULATE_DONE;
+
+	switch (get_op(inst)) {
+	case 19:
+		switch (get_xop(inst)) {
+		case OP_19_XOP_RFID:
+		case OP_19_XOP_RFI:
+			vcpu->arch.pc = vcpu->arch.srr0;
+			kvmppc_set_msr(vcpu, vcpu->arch.srr1);
+			*advance = 0;
+			break;
+
+		default:
+			emulated = EMULATE_FAIL;
+			break;
+		}
+		break;
+	case 31:
+		switch (get_xop(inst)) {
+		case OP_31_XOP_MFMSR:
+			kvmppc_set_gpr(vcpu, get_rt(inst), vcpu->arch.msr);
+			break;
+		case OP_31_XOP_MTMSRD:
+		{
+			ulong rs = kvmppc_get_gpr(vcpu, get_rs(inst));
+			if (inst & 0x10000) {
+				vcpu->arch.msr &= ~(MSR_RI | MSR_EE);
+				vcpu->arch.msr |= rs & (MSR_RI | MSR_EE);
+			} else
+				kvmppc_set_msr(vcpu, rs);
+			break;
+		}
+		case OP_31_XOP_MTMSR:
+			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_MFSR:
+		{
+			int srnum;
+
+			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
+		case OP_31_XOP_MFSRIN:
+		{
+			int srnum;
+
+			srnum = (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf;
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
+		case OP_31_XOP_MTSR:
+			vcpu->arch.mmu.mtsrin(vcpu,
+				(inst >> 16) & 0xf,
+				kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_MTSRIN:
+			vcpu->arch.mmu.mtsrin(vcpu,
+				(kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
+				kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_TLBIE:
+		case OP_31_XOP_TLBIEL:
+		{
+			bool large = (inst & 0x00200000) ? true : false;
+			ulong addr = kvmppc_get_gpr(vcpu, get_rb(inst));
+			vcpu->arch.mmu.tlbie(vcpu, addr, large);
+			break;
+		}
+		case OP_31_XOP_EIOIO:
+			break;
+		case OP_31_XOP_SLBMTE:
+			if (!vcpu->arch.mmu.slbmte)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbmte(vcpu,
+					kvmppc_get_gpr(vcpu, get_rs(inst)),
+					kvmppc_get_gpr(vcpu, get_rb(inst)));
+			break;
+		case OP_31_XOP_SLBIE:
+			if (!vcpu->arch.mmu.slbie)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbie(vcpu,
+					kvmppc_get_gpr(vcpu, get_rb(inst)));
+			break;
+		case OP_31_XOP_SLBIA:
+			if (!vcpu->arch.mmu.slbia)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbia(vcpu);
+			break;
+		case OP_31_XOP_SLBMFEE:
+			if (!vcpu->arch.mmu.slbmfee) {
+				emulated = EMULATE_FAIL;
+			} else {
+				ulong t, rb;
+
+				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+				t = vcpu->arch.mmu.slbmfee(vcpu, rb);
+				kvmppc_set_gpr(vcpu, get_rt(inst), t);
+			}
+			break;
+		case OP_31_XOP_SLBMFEV:
+			if (!vcpu->arch.mmu.slbmfev) {
+				emulated = EMULATE_FAIL;
+			} else {
+				ulong t, rb;
+
+				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+				t = vcpu->arch.mmu.slbmfev(vcpu, rb);
+				kvmppc_set_gpr(vcpu, get_rt(inst), t);
+			}
+			break;
+		case OP_31_XOP_DCBA:
+			/* Gets treated as NOP */
+			break;
+		case OP_31_XOP_DCBZ:
+		{
+			ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+			ulong ra = 0;
+			ulong addr, vaddr;
+			u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+			u32 dsisr;
+			int r;
+
+			if (get_ra(inst))
+				ra = kvmppc_get_gpr(vcpu, get_ra(inst));
+
+			addr = (ra + rb) & ~31ULL;
+			if (!(vcpu->arch.msr & MSR_SF))
+				addr &= 0xffffffff;
+			vaddr = addr;
+
+			r = kvmppc_st(vcpu, &addr, 32, zeros, true);
+			if ((r == -ENOENT) || (r == -EPERM)) {
+				*advance = 0;
+				vcpu->arch.dear = vaddr;
+				vcpu->arch.fault_dear = vaddr;
+
+				dsisr = DSISR_ISSTORE;
+				if (r == -ENOENT)
+					dsisr |= DSISR_NOHPTE;
+				else if (r == -EPERM)
+					dsisr |= DSISR_PROTFAULT;
+
+				to_book3s(vcpu)->dsisr = dsisr;
+				vcpu->arch.fault_dsisr = dsisr;
+
+				kvmppc_book3s_queue_irqprio(vcpu,
+					BOOK3S_INTERRUPT_DATA_STORAGE);
+			}
+
+			break;
+		}
+		default:
+			emulated = EMULATE_FAIL;
+		}
+		break;
+	default:
+		emulated = EMULATE_FAIL;
+	}
+
+	if (emulated == EMULATE_FAIL)
+		emulated = kvmppc_emulate_paired_single(run, vcpu);
+
+	return emulated;
+}
+
+void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
+                    u32 val)
+{
+	if (upper) {
+		/* Upper BAT */
+		u32 bl = (val >> 2) & 0x7ff;
+		bat->bepi_mask = (~bl << 17);
+		bat->bepi = val & 0xfffe0000;
+		bat->vs = (val & 2) ? 1 : 0;
+		bat->vp = (val & 1) ? 1 : 0;
+		bat->raw = (bat->raw & 0xffffffff00000000ULL) | val;
+	} else {
+		/* Lower BAT */
+		bat->brpn = val & 0xfffe0000;
+		bat->wimg = (val >> 3) & 0xf;
+		bat->pp = val & 3;
+		bat->raw = (bat->raw & 0x00000000ffffffffULL) | ((u64)val << 32);
+	}
+}
+
+static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	if (sprn % 2)
+		return bat->raw >> 32;
+	else
+		return bat->raw;
+}
+
+static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
+}
+
+int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
+{
+	int emulated = EMULATE_DONE;
+	ulong spr_val = kvmppc_get_gpr(vcpu, rs);
+
+	switch (sprn) {
+	case SPRN_SDR1:
+		to_book3s(vcpu)->sdr1 = spr_val;
+		break;
+	case SPRN_DSISR:
+		to_book3s(vcpu)->dsisr = spr_val;
+		break;
+	case SPRN_DAR:
+		vcpu->arch.dear = spr_val;
+		break;
+	case SPRN_HIOR:
+		to_book3s(vcpu)->hior = spr_val;
+		break;
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
+		/* BAT writes happen so rarely that we're ok to flush
+		 * everything here */
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
+		break;
+	case SPRN_HID0:
+		to_book3s(vcpu)->hid[0] = spr_val;
+		break;
+	case SPRN_HID1:
+		to_book3s(vcpu)->hid[1] = spr_val;
+		break;
+	case SPRN_HID2:
+		to_book3s(vcpu)->hid[2] = spr_val;
+		break;
+	case SPRN_HID2_GEKKO:
+		to_book3s(vcpu)->hid[2] = spr_val;
+		/* HID2.PSE controls paired single on gekko */
+		switch (vcpu->arch.pvr) {
+		case 0x00080200:	/* lonestar 2.0 */
+		case 0x00088202:	/* lonestar 2.2 */
+		case 0x70000100:	/* gekko 1.0 */
+		case 0x00080100:	/* gekko 2.0 */
+		case 0x00083203:	/* gekko 2.3a */
+		case 0x00083213:	/* gekko 2.3b */
+		case 0x00083204:	/* gekko 2.4 */
+		case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
+			if (spr_val & (1 << 29)) { /* HID2.PSE */
+				vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
+				kvmppc_giveup_ext(vcpu, MSR_FP);
+			} else {
+				vcpu->arch.hflags &= ~BOOK3S_HFLAG_PAIRED_SINGLE;
+			}
+			break;
+		}
+		break;
+	case SPRN_HID4:
+	case SPRN_HID4_GEKKO:
+		to_book3s(vcpu)->hid[4] = spr_val;
+		break;
+	case SPRN_HID5:
+		to_book3s(vcpu)->hid[5] = spr_val;
+		/* guest HID5 set can change is_dcbz32 */
+		if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
+		    (mfmsr() & MSR_HV))
+			vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
+		break;
+	case SPRN_GQR0:
+	case SPRN_GQR1:
+	case SPRN_GQR2:
+	case SPRN_GQR3:
+	case SPRN_GQR4:
+	case SPRN_GQR5:
+	case SPRN_GQR6:
+	case SPRN_GQR7:
+		to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
+		break;
+	case SPRN_ICTC:
+	case SPRN_THRM1:
+	case SPRN_THRM2:
+	case SPRN_THRM3:
+	case SPRN_CTRLF:
+	case SPRN_CTRLT:
+	case SPRN_L2CR:
+	case SPRN_MMCR0_GEKKO:
+	case SPRN_MMCR1_GEKKO:
+	case SPRN_PMC1_GEKKO:
+	case SPRN_PMC2_GEKKO:
+	case SPRN_PMC3_GEKKO:
+	case SPRN_PMC4_GEKKO:
+	case SPRN_WPAR_GEKKO:
+		break;
+	default:
+		printk(KERN_INFO "KVM: invalid SPR write: %d\n", sprn);
+#ifndef DEBUG_SPR
+		emulated = EMULATE_FAIL;
+#endif
+		break;
+	}
+
+	return emulated;
+}
+
+int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
+{
+	int emulated = EMULATE_DONE;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+		break;
+	case SPRN_SDR1:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
+		break;
+	case SPRN_DSISR:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->dsisr);
+		break;
+	case SPRN_DAR:
+		kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear);
+		break;
+	case SPRN_HIOR:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hior);
+		break;
+	case SPRN_HID0:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[0]);
+		break;
+	case SPRN_HID1:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
+		break;
+	case SPRN_HID2:
+	case SPRN_HID2_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
+		break;
+	case SPRN_HID4:
+	case SPRN_HID4_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
+		break;
+	case SPRN_HID5:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
+		break;
+	case SPRN_GQR0:
+	case SPRN_GQR1:
+	case SPRN_GQR2:
+	case SPRN_GQR3:
+	case SPRN_GQR4:
+	case SPRN_GQR5:
+	case SPRN_GQR6:
+	case SPRN_GQR7:
+		kvmppc_set_gpr(vcpu, rt,
+			       to_book3s(vcpu)->gqr[sprn - SPRN_GQR0]);
+		break;
+	case SPRN_THRM1:
+	case SPRN_THRM2:
+	case SPRN_THRM3:
+	case SPRN_CTRLF:
+	case SPRN_CTRLT:
+	case SPRN_L2CR:
+	case SPRN_MMCR0_GEKKO:
+	case SPRN_MMCR1_GEKKO:
+	case SPRN_PMC1_GEKKO:
+	case SPRN_PMC2_GEKKO:
+	case SPRN_PMC3_GEKKO:
+	case SPRN_PMC4_GEKKO:
+	case SPRN_WPAR_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, 0);
+		break;
+	default:
+		printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn);
+#ifndef DEBUG_SPR
+		emulated = EMULATE_FAIL;
+#endif
+		break;
+	}
+
+	return emulated;
+}
+
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+	case OP_LFD:
+	case OP_STFD:
+	case OP_STFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
+
+ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	ulong dar = 0;
+	ulong ra;
+
+	switch (get_op(inst)) {
+	case OP_LFS:
+	case OP_LFD:
+	case OP_STFD:
+	case OP_STFS:
+		ra = get_ra(inst);
+		if (ra)
+			dar = kvmppc_get_gpr(vcpu, ra);
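+		/* D-form: the displacement is the sign-extended low 16 bits of the instruction */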
+		dar += (s32)((s16)inst);
+		break;
+	case 31:
+		ra = get_ra(inst);
+		if (ra)
+			dar = kvmppc_get_gpr(vcpu, ra);
+		dar += kvmppc_get_gpr(vcpu, get_rb(inst));
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	return dar;
+}
diff --git a/arch/powerpc/kvm/book3s_exports.c b/arch/powerpc/kvm/book3s_exports.c
new file mode 100644
index 0000000..1dd5a1d
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_exports.c
@@ -0,0 +1,32 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>

+ */
+
+#include <linux/module.h>
+#include <asm/kvm_book3s.h>
+
+EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
+EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
+EXPORT_SYMBOL_GPL(kvmppc_rmcall);
+EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
+#ifdef CONFIG_ALTIVEC
+EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
+#endif
+#ifdef CONFIG_VSX
+EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
+#endif
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
new file mode 100644
index 0000000..faca876
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -0,0 +1,318 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/kvm_asm.h>
+#include <asm/reg.h>
+#include <asm/page.h>
+#include <asm/asm-offsets.h>
+#include <asm/exception-64s.h>
+
+#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
+#define ULONG_SIZE 8
+#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
+
+.macro DISABLE_INTERRUPTS
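+       /* Rotate MSR[EE] into bit 0, clear it, rotate back; mtmsrd with L=1 updates EE/RI only */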
+       mfmsr   r0
+       rldicl  r0,r0,48,1
+       rotldi  r0,r0,16
+       mtmsrd  r0,1
+.endm
+
+#define VCPU_LOAD_NVGPRS(vcpu) \
+	ld	r14, VCPU_GPR(r14)(vcpu); \
+	ld	r15, VCPU_GPR(r15)(vcpu); \
+	ld	r16, VCPU_GPR(r16)(vcpu); \
+	ld	r17, VCPU_GPR(r17)(vcpu); \
+	ld	r18, VCPU_GPR(r18)(vcpu); \
+	ld	r19, VCPU_GPR(r19)(vcpu); \
+	ld	r20, VCPU_GPR(r20)(vcpu); \
+	ld	r21, VCPU_GPR(r21)(vcpu); \
+	ld	r22, VCPU_GPR(r22)(vcpu); \
+	ld	r23, VCPU_GPR(r23)(vcpu); \
+	ld	r24, VCPU_GPR(r24)(vcpu); \
+	ld	r25, VCPU_GPR(r25)(vcpu); \
+	ld	r26, VCPU_GPR(r26)(vcpu); \
+	ld	r27, VCPU_GPR(r27)(vcpu); \
+	ld	r28, VCPU_GPR(r28)(vcpu); \
+	ld	r29, VCPU_GPR(r29)(vcpu); \
+	ld	r30, VCPU_GPR(r30)(vcpu); \
+	ld	r31, VCPU_GPR(r31)(vcpu); \
+
+/*****************************************************************************
+ *                                                                           *
+ *     Guest entry / exit code that is in kernel module memory (highmem)     *
+ *                                                                           *
+ ****************************************************************************/
+
+/* Registers:
+ *  r3: kvm_run pointer
+ *  r4: vcpu pointer
+ */
+_GLOBAL(__kvmppc_vcpu_entry)
+
+kvm_start_entry:
+	/* Write correct stack frame */
+	mflr    r0
+	std     r0,16(r1)
+
+	/* Save host state to the stack */
+	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+
+	/* Save r3 (kvm_run) and r4 (vcpu) */
+	SAVE_2GPRS(3, r1)
+
+	/* Save non-volatile registers (r14 - r31) */
+	SAVE_NVGPRS(r1)
+
+	/* Save LR */
+	std	r0, _LINK(r1)
+
+	/* Load non-volatile guest state from the vcpu */
+	VCPU_LOAD_NVGPRS(r4)
+
+	/* Save R1/R2 in the PACA */
+	std	r1, PACA_KVM_HOST_R1(r13)
+	std	r2, PACA_KVM_HOST_R2(r13)
+
+	/* XXX swap in/out on load? */
+	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
+	std	r3, PACA_KVM_VMHANDLER(r13)
+
+kvm_start_lightweight:
+
+	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
+	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
+
+	/* Load some guest state in the respective registers */
+	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
+					/* will be swapped in by rmcall */
+
+	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
+	mtlr	r3			/* LR = r3 */
+
+	DISABLE_INTERRUPTS
+
+	/* Some guests may need to have dcbz set to 32 byte length.
+	 *
+	 * Usually we ensure that by patching the guest's instructions
+	 * to trap on dcbz and emulate it in the hypervisor.
+	 *
+	 * If we can, we should tell the CPU to use 32 byte dcbz though,
+	 * because that's a lot faster.
+	 */
+
+	ld	r3, VCPU_HFLAGS(r4)
+	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
+	beq	no_dcbz32_on
+
+	mfspr   r3,SPRN_HID5
+	ori     r3, r3, 0x80		/* XXX HID5_dcbz32 = 0x80 */
+	mtspr   SPRN_HID5,r3
+
+no_dcbz32_on:
+
+	ld	r6, VCPU_RMCALL(r4)
+	mtctr	r6
+
+	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
+	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
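+	/* MSR with IR/DR cleared: the trampoline runs with translation off (real mode) */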
+
+	/* Jump to SLB patching handler and into our guest */
+	bctr
+
+/*
+ * This is the handler in module memory. It gets jumped at from the
+ * lowmem trampoline code, so it's basically the guest exit code.
+ *
+ */
+
+.global kvmppc_handler_highmem
+kvmppc_handler_highmem:
+
+	/*
+	 * Register usage at this point:
+	 *
+	 * R0         = guest last inst
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R3         = guest PC
+	 * R4         = guest MSR
+	 * R5         = guest DAR
+	 * R6         = guest DSISR
+	 * R13        = PACA
+	 * PACA.KVM.* = guest *
+	 *
+	 */
+
+	/* R7 = vcpu */
+	ld	r7, GPR4(r1)
+
+	/* Now save the guest state */
+
+	stw	r0, VCPU_LAST_INST(r7)
+
+	std	r3, VCPU_PC(r7)
+	std	r4, VCPU_SHADOW_SRR1(r7)
+	std	r5, VCPU_FAULT_DEAR(r7)
+	stw	r6, VCPU_FAULT_DSISR(r7)
+
+	ld	r5, VCPU_HFLAGS(r7)
+	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
+	beq	no_dcbz32_off
+
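+	/* Turn 32-byte dcbz back off: clear HID5 bits 56:57 (dcbz32 was set via 0x80 on entry) */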
+	li	r4, 0
+	mfspr   r5,SPRN_HID5
+	rldimi  r5,r4,6,56
+	mtspr   SPRN_HID5,r5
+
+no_dcbz32_off:
+
+	std	r14, VCPU_GPR(r14)(r7)
+	std	r15, VCPU_GPR(r15)(r7)
+	std	r16, VCPU_GPR(r16)(r7)
+	std	r17, VCPU_GPR(r17)(r7)
+	std	r18, VCPU_GPR(r18)(r7)
+	std	r19, VCPU_GPR(r19)(r7)
+	std	r20, VCPU_GPR(r20)(r7)
+	std	r21, VCPU_GPR(r21)(r7)
+	std	r22, VCPU_GPR(r22)(r7)
+	std	r23, VCPU_GPR(r23)(r7)
+	std	r24, VCPU_GPR(r24)(r7)
+	std	r25, VCPU_GPR(r25)(r7)
+	std	r26, VCPU_GPR(r26)(r7)
+	std	r27, VCPU_GPR(r27)(r7)
+	std	r28, VCPU_GPR(r28)(r7)
+	std	r29, VCPU_GPR(r29)(r7)
+	std	r30, VCPU_GPR(r30)(r7)
+	std	r31, VCPU_GPR(r31)(r7)
+
+	/* Save guest CTR */
+	mfctr	r5
+	std	r5, VCPU_CTR(r7)
+
+	/* Save guest LR */
+	mflr	r5
+	std	r5, VCPU_LR(r7)
+
+	/* Restore host msr -> SRR1 */
+	ld	r6, VCPU_HOST_MSR(r7)
+
+	/*
+	 * For some interrupts, we need to call the real Linux
+	 * handler, so it can do work for us. This has to happen
+	 * as if the interrupt arrived from the kernel though,
+	 * so let's fake it here where most state is restored.
+	 *
+	 * Call Linux for hardware interrupts/decrementer
+	 * r3 = address of interrupt handler (exit reason)
+	 */
+
+	cmpwi	r12, BOOK3S_INTERRUPT_EXTERNAL
+	beq	call_linux_handler
+	cmpwi	r12, BOOK3S_INTERRUPT_DECREMENTER
+	beq	call_linux_handler
+
+	/* Back to EE=1 */
+	mtmsr	r6
+	b	kvm_return_point
+
+call_linux_handler:
+
+	/*
+	 * If we land here we need to jump back to the handler we
+	 * came from.
+	 *
+	 * We have a page that we can access from real mode, so let's
+	 * jump back to that and use it as a trampoline to get back into the
+	 * interrupt handler!
+	 *
+	 * R3 still contains the exit code,
+	 * R5 VCPU_HOST_RETIP and
+	 * R6 VCPU_HOST_MSR
+	 */
+
+	/* Restore host IP -> SRR0 */
+	ld	r5, VCPU_HOST_RETIP(r7)
+
+	/* XXX Better move to a safe function?
+	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
+
+	mtlr	r12
+
+	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
+	mtsrr0	r4
+	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
+	mtsrr1	r3
+
+	RFI
+
+.global kvm_return_point
+kvm_return_point:
+
+	/* Jump back to lightweight entry if we're supposed to */
+	/* go back into the guest */
+
+	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
+	mr	r5, r12
+
+	/* Restore r3 (kvm_run) and r4 (vcpu) */
+	REST_2GPRS(3, r1)
+	bl	KVMPPC_HANDLE_EXIT
+
+	/* If RESUME_GUEST, get back in the loop */
+	cmpwi	r3, RESUME_GUEST
+	beq	kvm_loop_lightweight
+
+	cmpwi	r3, RESUME_GUEST_NV
+	beq	kvm_loop_heavyweight
+
+kvm_exit_loop:
+
+	ld	r4, _LINK(r1)
+	mtlr	r4
+
+	/* Restore non-volatile host registers (r14 - r31) */
+	REST_NVGPRS(r1)
+
+	addi    r1, r1, SWITCH_FRAME_SIZE
+	blr
+
+kvm_loop_heavyweight:
+
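+	/* Put the saved LR back into the LR save slot of the frame above ours */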
+	ld	r4, _LINK(r1)
+	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
+
+	/* Load vcpu and cpu_run */
+	REST_2GPRS(3, r1)
+
+	/* Load non-volatile guest state from the vcpu */
+	VCPU_LOAD_NVGPRS(r4)
+
+	/* Jump back into the beginning of this function */
+	b	kvm_start_lightweight
+
+kvm_loop_lightweight:
+
+	/* We'll need the vcpu pointer */
+	REST_GPR(4, r1)
+
+	/* Jump back into the beginning of this function */
+	b	kvm_start_lightweight
+
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
new file mode 100644
index 0000000..bd08535
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -0,0 +1,195 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/kvm_asm.h>
+#include <asm/reg.h>
+#include <asm/page.h>
+#include <asm/asm-offsets.h>
+#include <asm/exception-64s.h>
+
+/*****************************************************************************
+ *                                                                           *
+ *        Real Mode handlers that need to be in low physical memory          *
+ *                                                                           *
+ ****************************************************************************/
+
+
+.macro INTERRUPT_TRAMPOLINE intno
+
+.global kvmppc_trampoline_\intno
+kvmppc_trampoline_\intno:
+
+	mtspr	SPRN_SPRG_SCRATCH0, r13		/* Save r13 */
+
+	/*
+	 * First thing to do is to find out if we're coming
+	 * from a KVM guest or a Linux process.
+	 *
+	 * To distinguish, we check a magic byte in the PACA
+	 */
+	mfspr	r13, SPRN_SPRG_PACA		/* r13 = PACA */
+	std	r12, PACA_KVM_SCRATCH0(r13)
+	mfcr	r12
+	stw	r12, PACA_KVM_SCRATCH1(r13)
+	lbz	r12, PACA_KVM_IN_GUEST(r13)
+	cmpwi	r12, KVM_GUEST_MODE_NONE
+	bne	..kvmppc_handler_hasmagic_\intno
+	/* No KVM guest? Then jump back to the Linux handler! */
+	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	mtcr	r12
+	ld	r12, PACA_KVM_SCRATCH0(r13)
+	mfspr	r13, SPRN_SPRG_SCRATCH0		/* r13 = original r13 */
+	b	kvmppc_resume_\intno		/* Get back original handler */
+
+	/* Now we know we're handling a KVM guest */
+..kvmppc_handler_hasmagic_\intno:
+
+	/* Should we just skip the faulting instruction? */
+	cmpwi	r12, KVM_GUEST_MODE_SKIP
+	beq	kvmppc_handler_skip_ins
+
+	/* Let's store which interrupt we're handling */
+	li	r12, \intno
+
+	/* Jump into the SLB exit code that goes to the highmem handler */
+	b	kvmppc_handler_trampoline_exit
+
+.endm
+
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSTEM_RESET
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_MACHINE_CHECK
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_STORAGE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_STORAGE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_EXTERNAL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALIGNMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PROGRAM
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_FP_UNAVAIL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DECREMENTER
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSCALL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_TRACE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PERFMON
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALTIVEC
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
+
+/*
+ * Bring us back to the faulting code, but skip the
+ * faulting instruction.
+ *
+ * This is a generic exit path from the interrupt
+ * trampolines above.
+ *
+ * Input Registers:
+ *
+ * R12               = free
+ * R13               = PACA
+ * PACA.KVM.SCRATCH0 = guest R12
+ * PACA.KVM.SCRATCH1 = guest CR
+ * SPRG_SCRATCH0     = guest R13
+ *
+ */
+kvmppc_handler_skip_ins:
+
+	/* Patch the IP to the next instruction */
+	mfsrr0	r12
+	addi	r12, r12, 4
+	mtsrr0	r12
+
+	/* Clean up all state */
+	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	mtcr	r12
+	ld	r12, PACA_KVM_SCRATCH0(r13)
+	mfspr	r13, SPRN_SPRG_SCRATCH0
+
+	/* And get back into the code */
+	RFI
+
+/*
+ * This trampoline brings us back to a real mode handler
+ *
+ * Input Registers:
+ *
+ * R5 = SRR0
+ * R6 = SRR1
+ * LR = real-mode IP
+ *
+ */
+.global kvmppc_handler_lowmem_trampoline
+kvmppc_handler_lowmem_trampoline:
+
+	mtsrr0	r5
+	mtsrr1	r6
+	blr
+kvmppc_handler_lowmem_trampoline_end:
+
+/*
+ * Call a function in real mode
+ *
+ * Input Registers:
+ *
+ * R3 = function
+ * R4 = MSR
+ * R5 = CTR
+ *
+ */
+_GLOBAL(kvmppc_rmcall)
+	mtmsr	r4		/* Disable relocation, so mtsrr
+				   doesn't get interrupted */
+	mtctr	r5
+	mtsrr0	r3
+	mtsrr1	r4
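+	/* RFI atomically loads PC from SRR0 and MSR from SRR1 */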
+	RFI
+
+/*
+ * Activate current's external feature (FPU/Altivec/VSX)
+ */
+#define define_load_up(what) 				\
+							\
+_GLOBAL(kvmppc_load_up_ ## what);			\
+	stdu	r1, -INT_FRAME_SIZE(r1);		\
+	mflr	r3;					\
+	std	r3, _LINK(r1);				\
+							\
+	bl	.load_up_ ## what;			\
+							\
+	ld	r3, _LINK(r1);				\
+	mtlr	r3;					\
+	addi	r1, r1, INT_FRAME_SIZE;			\
+	blr
+
+define_load_up(fpu)
+#ifdef CONFIG_ALTIVEC
+define_load_up(altivec)
+#endif
+#ifdef CONFIG_VSX
+define_load_up(vsx)
+#endif
+
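+/* Exported as offsets from _stext so the KVM module can locate these
+ * statically linked real-mode handlers */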
+.global kvmppc_trampoline_lowmem
+kvmppc_trampoline_lowmem:
+	.long kvmppc_handler_lowmem_trampoline - _stext
+
+.global kvmppc_trampoline_enter
+kvmppc_trampoline_enter:
+	.long kvmppc_handler_trampoline_enter - _stext
+
+#include "book3s_64_slb.S"
+
-- 
1.6.0.2


* [PATCH 01/27] KVM: PPC: Name generic 64-bit code generic
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We have quite a lot of code that can be used by Book3S_32 and Book3S_64
alike, so let's call it "Book3S" instead of "Book3S_64", so that we can
later use it from the 32-bit port too.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h        |    2 +-
 arch/powerpc/include/asm/kvm_book3s_64_asm.h |   76 ----
 arch/powerpc/include/asm/kvm_book3s_asm.h    |   76 ++++
 arch/powerpc/include/asm/paca.h              |    2 +-
 arch/powerpc/kernel/head_64.S                |    4 +-
 arch/powerpc/kvm/Makefile                    |    6 +-
 arch/powerpc/kvm/book3s_64_emulate.c         |  566 --------------------------
 arch/powerpc/kvm/book3s_64_exports.c         |   32 --
 arch/powerpc/kvm/book3s_64_interrupts.S      |  318 ---------------
 arch/powerpc/kvm/book3s_64_rmhandlers.S      |  195 ---------
 arch/powerpc/kvm/book3s_emulate.c            |  566 ++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_exports.c            |   32 ++
 arch/powerpc/kvm/book3s_interrupts.S         |  318 +++++++++++++++
 arch/powerpc/kvm/book3s_rmhandlers.S         |  195 +++++++++
 14 files changed, 1194 insertions(+), 1194 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/kvm_book3s_64_asm.h
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_asm.h
 delete mode 100644 arch/powerpc/kvm/book3s_64_emulate.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_exports.c
 delete mode 100644 arch/powerpc/kvm/book3s_64_interrupts.S
 delete mode 100644 arch/powerpc/kvm/book3s_64_rmhandlers.S
 create mode 100644 arch/powerpc/kvm/book3s_emulate.c
 create mode 100644 arch/powerpc/kvm/book3s_exports.c
 create mode 100644 arch/powerpc/kvm/book3s_interrupts.S
 create mode 100644 arch/powerpc/kvm/book3s_rmhandlers.S

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ee79921..7670e2a 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -22,7 +22,7 @@
 
 #include <linux/types.h>
 #include <linux/kvm_host.h>
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 
 struct kvmppc_slb {
 	u64 esid;
diff --git a/arch/powerpc/include/asm/kvm_book3s_64_asm.h b/arch/powerpc/include/asm/kvm_book3s_64_asm.h
deleted file mode 100644
index 183461b..0000000
--- a/arch/powerpc/include/asm/kvm_book3s_64_asm.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf@suse.de>
- */
-
-#ifndef __ASM_KVM_BOOK3S_ASM_H__
-#define __ASM_KVM_BOOK3S_ASM_H__
-
-#ifdef __ASSEMBLY__
-
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-
-#include <asm/kvm_asm.h>
-
-.macro DO_KVM intno
-	.if (\intno == BOOK3S_INTERRUPT_SYSTEM_RESET) || \
-	    (\intno == BOOK3S_INTERRUPT_MACHINE_CHECK) || \
-	    (\intno == BOOK3S_INTERRUPT_DATA_STORAGE) || \
-	    (\intno == BOOK3S_INTERRUPT_INST_STORAGE) || \
-	    (\intno == BOOK3S_INTERRUPT_DATA_SEGMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_INST_SEGMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_EXTERNAL) || \
-	    (\intno == BOOK3S_INTERRUPT_ALIGNMENT) || \
-	    (\intno == BOOK3S_INTERRUPT_PROGRAM) || \
-	    (\intno == BOOK3S_INTERRUPT_FP_UNAVAIL) || \
-	    (\intno == BOOK3S_INTERRUPT_DECREMENTER) || \
-	    (\intno == BOOK3S_INTERRUPT_SYSCALL) || \
-	    (\intno == BOOK3S_INTERRUPT_TRACE) || \
-	    (\intno == BOOK3S_INTERRUPT_PERFMON) || \
-	    (\intno == BOOK3S_INTERRUPT_ALTIVEC) || \
-	    (\intno == BOOK3S_INTERRUPT_VSX)
-
-	b	kvmppc_trampoline_\intno
-kvmppc_resume_\intno:
-
-	.endif
-.endm
-
-#else
-
-.macro DO_KVM intno
-.endm
-
-#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
-
-#else  /*__ASSEMBLY__ */
-
-struct kvmppc_book3s_shadow_vcpu {
-	ulong gpr[14];
-	u32 cr;
-	u32 xer;
-	ulong host_r1;
-	ulong host_r2;
-	ulong handler;
-	ulong scratch0;
-	ulong scratch1;
-	ulong vmhandler;
-};
-
-#endif /*__ASSEMBLY__ */
-
-#endif /* __ASM_KVM_BOOK3S_ASM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
new file mode 100644
index 0000000..183461b
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -0,0 +1,76 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_ASM_H__
+#define __ASM_KVM_BOOK3S_ASM_H__
+
+#ifdef __ASSEMBLY__
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+
+#include <asm/kvm_asm.h>
+
+.macro DO_KVM intno
+	.if (\intno == BOOK3S_INTERRUPT_SYSTEM_RESET) || \
+	    (\intno == BOOK3S_INTERRUPT_MACHINE_CHECK) || \
+	    (\intno == BOOK3S_INTERRUPT_DATA_STORAGE) || \
+	    (\intno == BOOK3S_INTERRUPT_INST_STORAGE) || \
+	    (\intno == BOOK3S_INTERRUPT_DATA_SEGMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_INST_SEGMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_EXTERNAL) || \
+	    (\intno == BOOK3S_INTERRUPT_ALIGNMENT) || \
+	    (\intno == BOOK3S_INTERRUPT_PROGRAM) || \
+	    (\intno == BOOK3S_INTERRUPT_FP_UNAVAIL) || \
+	    (\intno == BOOK3S_INTERRUPT_DECREMENTER) || \
+	    (\intno == BOOK3S_INTERRUPT_SYSCALL) || \
+	    (\intno == BOOK3S_INTERRUPT_TRACE) || \
+	    (\intno == BOOK3S_INTERRUPT_PERFMON) || \
+	    (\intno == BOOK3S_INTERRUPT_ALTIVEC) || \
+	    (\intno == BOOK3S_INTERRUPT_VSX)
+
+	b	kvmppc_trampoline_\intno
+kvmppc_resume_\intno:
+
+	.endif
+.endm
+
+#else
+
+.macro DO_KVM intno
+.endm
+
+#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
+
+#else  /*__ASSEMBLY__ */
+
+struct kvmppc_book3s_shadow_vcpu {
+	ulong gpr[14];
+	u32 cr;
+	u32 xer;
+	ulong host_r1;
+	ulong host_r2;
+	ulong handler;
+	ulong scratch0;
+	ulong scratch1;
+	ulong vmhandler;
+};
+
+#endif /*__ASSEMBLY__ */
+
+#endif /* __ASM_KVM_BOOK3S_ASM_H__ */
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index a011603..dc3ccdf 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -23,7 +23,7 @@
 #include <asm/page.h>
 #include <asm/exception-64e.h>
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 #endif
 
 register struct paca_struct *local_paca asm("r13");
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index bed9a29..844a44b 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -37,7 +37,7 @@
 #include <asm/firmware.h>
 #include <asm/page_64.h>
 #include <asm/irqflags.h>
-#include <asm/kvm_book3s_64_asm.h>
+#include <asm/kvm_book3s_asm.h>
 
 /* The physical memory is laid out such that the secondary processor
  * spin code sits at 0x0000...0x00ff. On server, the vectors follow
@@ -169,7 +169,7 @@ exception_marker:
 /* KVM trampoline code needs to be close to the interrupt handlers */
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#include "../kvm/book3s_64_rmhandlers.S"
+#include "../kvm/book3s_rmhandlers.S"
 #endif
 
 _GLOBAL(generic_secondary_thread_init)
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index eba721e..0a67310 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -14,7 +14,7 @@ CFLAGS_emulate.o  := -I.
 
 common-objs-y += powerpc.o emulate.o
 obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
-obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_64_exports.o
+obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_exports.o
 
 AFLAGS_booke_interrupts.o := -I$(obj)
 
@@ -43,8 +43,8 @@ kvm-book3s_64-objs := \
 	fpu.o \
 	book3s_paired_singles.o \
 	book3s.o \
-	book3s_64_emulate.o \
-	book3s_64_interrupts.o \
+	book3s_emulate.o \
+	book3s_interrupts.o \
 	book3s_64_mmu_host.o \
 	book3s_64_mmu.o \
 	book3s_32_mmu.o
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
deleted file mode 100644
index 8f50776..0000000
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ /dev/null
@@ -1,566 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf@suse.de>
- */
-
-#include <asm/kvm_ppc.h>
-#include <asm/disassemble.h>
-#include <asm/kvm_book3s.h>
-#include <asm/reg.h>
-
-#define OP_19_XOP_RFID		18
-#define OP_19_XOP_RFI		50
-
-#define OP_31_XOP_MFMSR		83
-#define OP_31_XOP_MTMSR		146
-#define OP_31_XOP_MTMSRD	178
-#define OP_31_XOP_MTSR		210
-#define OP_31_XOP_MTSRIN	242
-#define OP_31_XOP_TLBIEL	274
-#define OP_31_XOP_TLBIE		306
-#define OP_31_XOP_SLBMTE	402
-#define OP_31_XOP_SLBIE		434
-#define OP_31_XOP_SLBIA		498
-#define OP_31_XOP_MFSR		595
-#define OP_31_XOP_MFSRIN	659
-#define OP_31_XOP_DCBA		758
-#define OP_31_XOP_SLBMFEV	851
-#define OP_31_XOP_EIOIO		854
-#define OP_31_XOP_SLBMFEE	915
-
-/* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
-#define OP_31_XOP_DCBZ		1010
-
-#define OP_LFS			48
-#define OP_LFD			50
-#define OP_STFS			52
-#define OP_STFD			54
-
-#define SPRN_GQR0		912
-#define SPRN_GQR1		913
-#define SPRN_GQR2		914
-#define SPRN_GQR3		915
-#define SPRN_GQR4		916
-#define SPRN_GQR5		917
-#define SPRN_GQR6		918
-#define SPRN_GQR7		919
-
-int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
-                           unsigned int inst, int *advance)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (get_op(inst)) {
-	case 19:
-		switch (get_xop(inst)) {
-		case OP_19_XOP_RFID:
-		case OP_19_XOP_RFI:
-			vcpu->arch.pc = vcpu->arch.srr0;
-			kvmppc_set_msr(vcpu, vcpu->arch.srr1);
-			*advance = 0;
-			break;
-
-		default:
-			emulated = EMULATE_FAIL;
-			break;
-		}
-		break;
-	case 31:
-		switch (get_xop(inst)) {
-		case OP_31_XOP_MFMSR:
-			kvmppc_set_gpr(vcpu, get_rt(inst), vcpu->arch.msr);
-			break;
-		case OP_31_XOP_MTMSRD:
-		{
-			ulong rs = kvmppc_get_gpr(vcpu, get_rs(inst));
-			if (inst & 0x10000) {
-				vcpu->arch.msr &= ~(MSR_RI | MSR_EE);
-				vcpu->arch.msr |= rs & (MSR_RI | MSR_EE);
-			} else
-				kvmppc_set_msr(vcpu, rs);
-			break;
-		}
-		case OP_31_XOP_MTMSR:
-			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_MFSR:
-		{
-			int srnum;
-
-			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
-			if (vcpu->arch.mmu.mfsrin) {
-				u32 sr;
-				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
-				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
-			}
-			break;
-		}
-		case OP_31_XOP_MFSRIN:
-		{
-			int srnum;
-
-			srnum = (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf;
-			if (vcpu->arch.mmu.mfsrin) {
-				u32 sr;
-				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
-				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
-			}
-			break;
-		}
-		case OP_31_XOP_MTSR:
-			vcpu->arch.mmu.mtsrin(vcpu,
-				(inst >> 16) & 0xf,
-				kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_MTSRIN:
-			vcpu->arch.mmu.mtsrin(vcpu,
-				(kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
-				kvmppc_get_gpr(vcpu, get_rs(inst)));
-			break;
-		case OP_31_XOP_TLBIE:
-		case OP_31_XOP_TLBIEL:
-		{
-			bool large = (inst & 0x00200000) ? true : false;
-			ulong addr = kvmppc_get_gpr(vcpu, get_rb(inst));
-			vcpu->arch.mmu.tlbie(vcpu, addr, large);
-			break;
-		}
-		case OP_31_XOP_EIOIO:
-			break;
-		case OP_31_XOP_SLBMTE:
-			if (!vcpu->arch.mmu.slbmte)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbmte(vcpu,
-					kvmppc_get_gpr(vcpu, get_rs(inst)),
-					kvmppc_get_gpr(vcpu, get_rb(inst)));
-			break;
-		case OP_31_XOP_SLBIE:
-			if (!vcpu->arch.mmu.slbie)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbie(vcpu,
-					kvmppc_get_gpr(vcpu, get_rb(inst)));
-			break;
-		case OP_31_XOP_SLBIA:
-			if (!vcpu->arch.mmu.slbia)
-				return EMULATE_FAIL;
-
-			vcpu->arch.mmu.slbia(vcpu);
-			break;
-		case OP_31_XOP_SLBMFEE:
-			if (!vcpu->arch.mmu.slbmfee) {
-				emulated = EMULATE_FAIL;
-			} else {
-				ulong t, rb;
-
-				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-				t = vcpu->arch.mmu.slbmfee(vcpu, rb);
-				kvmppc_set_gpr(vcpu, get_rt(inst), t);
-			}
-			break;
-		case OP_31_XOP_SLBMFEV:
-			if (!vcpu->arch.mmu.slbmfev) {
-				emulated = EMULATE_FAIL;
-			} else {
-				ulong t, rb;
-
-				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-				t = vcpu->arch.mmu.slbmfev(vcpu, rb);
-				kvmppc_set_gpr(vcpu, get_rt(inst), t);
-			}
-			break;
-		case OP_31_XOP_DCBA:
-			/* Gets treated as NOP */
-			break;
-		case OP_31_XOP_DCBZ:
-		{
-			ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
-			ulong ra = 0;
-			ulong addr, vaddr;
-			u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
-			u32 dsisr;
-			int r;
-
-			if (get_ra(inst))
-				ra = kvmppc_get_gpr(vcpu, get_ra(inst));
-
-			addr = (ra + rb) & ~31ULL;
-			if (!(vcpu->arch.msr & MSR_SF))
-				addr &= 0xffffffff;
-			vaddr = addr;
-
-			r = kvmppc_st(vcpu, &addr, 32, zeros, true);
-			if ((r == -ENOENT) || (r == -EPERM)) {
-				*advance = 0;
-				vcpu->arch.dear = vaddr;
-				vcpu->arch.fault_dear = vaddr;
-
-				dsisr = DSISR_ISSTORE;
-				if (r == -ENOENT)
-					dsisr |= DSISR_NOHPTE;
-				else if (r == -EPERM)
-					dsisr |= DSISR_PROTFAULT;
-
-				to_book3s(vcpu)->dsisr = dsisr;
-				vcpu->arch.fault_dsisr = dsisr;
-
-				kvmppc_book3s_queue_irqprio(vcpu,
-					BOOK3S_INTERRUPT_DATA_STORAGE);
-			}
-
-			break;
-		}
-		default:
-			emulated = EMULATE_FAIL;
-		}
-		break;
-	default:
-		emulated = EMULATE_FAIL;
-	}
-
-	if (emulated == EMULATE_FAIL)
-		emulated = kvmppc_emulate_paired_single(run, vcpu);
-
-	return emulated;
-}
-
-void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
-                    u32 val)
-{
-	if (upper) {
-		/* Upper BAT */
-		u32 bl = (val >> 2) & 0x7ff;
-		bat->bepi_mask = (~bl << 17);
-		bat->bepi = val & 0xfffe0000;
-		bat->vs = (val & 2) ? 1 : 0;
-		bat->vp = (val & 1) ? 1 : 0;
-		bat->raw = (bat->raw & 0xffffffff00000000ULL) | val;
-	} else {
-		/* Lower BAT */
-		bat->brpn = val & 0xfffe0000;
-		bat->wimg = (val >> 3) & 0xf;
-		bat->pp = val & 3;
-		bat->raw = (bat->raw & 0x00000000ffffffffULL) | ((u64)val << 32);
-	}
-}
-
-static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	if (sprn % 2)
-		return bat->raw >> 32;
-	else
-		return bat->raw;
-}
-
-static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
-}
-
-int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
-{
-	int emulated = EMULATE_DONE;
-	ulong spr_val = kvmppc_get_gpr(vcpu, rs);
-
-	switch (sprn) {
-	case SPRN_SDR1:
-		to_book3s(vcpu)->sdr1 = spr_val;
-		break;
-	case SPRN_DSISR:
-		to_book3s(vcpu)->dsisr = spr_val;
-		break;
-	case SPRN_DAR:
-		vcpu->arch.dear = spr_val;
-		break;
-	case SPRN_HIOR:
-		to_book3s(vcpu)->hior = spr_val;
-		break;
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
-		/* BAT writes happen so rarely that we're ok to flush
-		 * everything here */
-		kvmppc_mmu_pte_flush(vcpu, 0, 0);
-		kvmppc_mmu_flush_segments(vcpu);
-		break;
-	case SPRN_HID0:
-		to_book3s(vcpu)->hid[0] = spr_val;
-		break;
-	case SPRN_HID1:
-		to_book3s(vcpu)->hid[1] = spr_val;
-		break;
-	case SPRN_HID2:
-		to_book3s(vcpu)->hid[2] = spr_val;
-		break;
-	case SPRN_HID2_GEKKO:
-		to_book3s(vcpu)->hid[2] = spr_val;
-		/* HID2.PSE controls paired single on gekko */
-		switch (vcpu->arch.pvr) {
-		case 0x00080200:	/* lonestar 2.0 */
-		case 0x00088202:	/* lonestar 2.2 */
-		case 0x70000100:	/* gekko 1.0 */
-		case 0x00080100:	/* gekko 2.0 */
-		case 0x00083203:	/* gekko 2.3a */
-		case 0x00083213:	/* gekko 2.3b */
-		case 0x00083204:	/* gekko 2.4 */
-		case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
-			if (spr_val & (1 << 29)) { /* HID2.PSE */
-				vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
-				kvmppc_giveup_ext(vcpu, MSR_FP);
-			} else {
-				vcpu->arch.hflags &= ~BOOK3S_HFLAG_PAIRED_SINGLE;
-			}
-			break;
-		}
-		break;
-	case SPRN_HID4:
-	case SPRN_HID4_GEKKO:
-		to_book3s(vcpu)->hid[4] = spr_val;
-		break;
-	case SPRN_HID5:
-		to_book3s(vcpu)->hid[5] = spr_val;
-		/* guest HID5 set can change is_dcbz32 */
-		if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
-		    (mfmsr() & MSR_HV))
-			vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
-		break;
-	case SPRN_GQR0:
-	case SPRN_GQR1:
-	case SPRN_GQR2:
-	case SPRN_GQR3:
-	case SPRN_GQR4:
-	case SPRN_GQR5:
-	case SPRN_GQR6:
-	case SPRN_GQR7:
-		to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
-		break;
-	case SPRN_ICTC:
-	case SPRN_THRM1:
-	case SPRN_THRM2:
-	case SPRN_THRM3:
-	case SPRN_CTRLF:
-	case SPRN_CTRLT:
-	case SPRN_L2CR:
-	case SPRN_MMCR0_GEKKO:
-	case SPRN_MMCR1_GEKKO:
-	case SPRN_PMC1_GEKKO:
-	case SPRN_PMC2_GEKKO:
-	case SPRN_PMC3_GEKKO:
-	case SPRN_PMC4_GEKKO:
-	case SPRN_WPAR_GEKKO:
-		break;
-	default:
-		printk(KERN_INFO "KVM: invalid SPR write: %d\n", sprn);
-#ifndef DEBUG_SPR
-		emulated = EMULATE_FAIL;
-#endif
-		break;
-	}
-
-	return emulated;
-}
-
-int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
-{
-	int emulated = EMULATE_DONE;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
-		break;
-	case SPRN_SDR1:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
-		break;
-	case SPRN_DSISR:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->dsisr);
-		break;
-	case SPRN_DAR:
-		kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear);
-		break;
-	case SPRN_HIOR:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hior);
-		break;
-	case SPRN_HID0:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[0]);
-		break;
-	case SPRN_HID1:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
-		break;
-	case SPRN_HID2:
-	case SPRN_HID2_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
-		break;
-	case SPRN_HID4:
-	case SPRN_HID4_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
-		break;
-	case SPRN_HID5:
-		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
-		break;
-	case SPRN_GQR0:
-	case SPRN_GQR1:
-	case SPRN_GQR2:
-	case SPRN_GQR3:
-	case SPRN_GQR4:
-	case SPRN_GQR5:
-	case SPRN_GQR6:
-	case SPRN_GQR7:
-		kvmppc_set_gpr(vcpu, rt,
-			       to_book3s(vcpu)->gqr[sprn - SPRN_GQR0]);
-		break;
-	case SPRN_THRM1:
-	case SPRN_THRM2:
-	case SPRN_THRM3:
-	case SPRN_CTRLF:
-	case SPRN_CTRLT:
-	case SPRN_L2CR:
-	case SPRN_MMCR0_GEKKO:
-	case SPRN_MMCR1_GEKKO:
-	case SPRN_PMC1_GEKKO:
-	case SPRN_PMC2_GEKKO:
-	case SPRN_PMC3_GEKKO:
-	case SPRN_PMC4_GEKKO:
-	case SPRN_WPAR_GEKKO:
-		kvmppc_set_gpr(vcpu, rt, 0);
-		break;
-	default:
-		printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn);
-#ifndef DEBUG_SPR
-		emulated = EMULATE_FAIL;
-#endif
-		break;
-	}
-
-	return emulated;
-}
-
-u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
-{
-	u32 dsisr = 0;
-
-	/*
-	 * This is what the spec says about DSISR bits (not mentioned = 0):
-	 *
-	 * 12:13		[DS]	Set to bits 30:31
-	 * 15:16		[X]	Set to bits 29:30
-	 * 17			[X]	Set to bit 25
-	 *			[D/DS]	Set to bit 5
-	 * 18:21		[X]	Set to bits 21:24
-	 *			[D/DS]	Set to bits 1:4
-	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
-	 * 27:31			Set to bits 11:15 (RA)
-	 */
-
-	switch (get_op(inst)) {
-	/* D-form */
-	case OP_LFS:
-	case OP_LFD:
-	case OP_STFD:
-	case OP_STFS:
-		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
-		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
-		break;
-	/* X-form */
-	case 31:
-		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
-		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
-		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
-		break;
-	default:
-		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
-		break;
-	}
-
-	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
-
-	return dsisr;
-}
-
-ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst)
-{
-	ulong dar = 0;
-	ulong ra;
-
-	switch (get_op(inst)) {
-	case OP_LFS:
-	case OP_LFD:
-	case OP_STFD:
-	case OP_STFS:
-		ra = get_ra(inst);
-		if (ra)
-			dar = kvmppc_get_gpr(vcpu, ra);
-		dar += (s32)((s16)inst);
-		break;
-	case 31:
-		ra = get_ra(inst);
-		if (ra)
-			dar = kvmppc_get_gpr(vcpu, ra);
-		dar += kvmppc_get_gpr(vcpu, get_rb(inst));
-		break;
-	default:
-		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
-		break;
-	}
-
-	return dar;
-}
diff --git a/arch/powerpc/kvm/book3s_64_exports.c b/arch/powerpc/kvm/book3s_64_exports.c
deleted file mode 100644
index 1dd5a1d..0000000
--- a/arch/powerpc/kvm/book3s_64_exports.c
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf@suse.de>
- */
-
-#include <linux/module.h>
-#include <asm/kvm_book3s.h>
-
-EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
-EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
-EXPORT_SYMBOL_GPL(kvmppc_rmcall);
-EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
-#ifdef CONFIG_ALTIVEC
-EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
-#endif
-#ifdef CONFIG_VSX
-EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
-#endif
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
deleted file mode 100644
index faca876..0000000
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ /dev/null
@@ -1,318 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf@suse.de>
- */
-
-#include <asm/ppc_asm.h>
-#include <asm/kvm_asm.h>
-#include <asm/reg.h>
-#include <asm/page.h>
-#include <asm/asm-offsets.h>
-#include <asm/exception-64s.h>
-
-#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
-#define ULONG_SIZE 8
-#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
-
-.macro DISABLE_INTERRUPTS
-       mfmsr   r0
-       rldicl  r0,r0,48,1
-       rotldi  r0,r0,16
-       mtmsrd  r0,1
-.endm
-
-#define VCPU_LOAD_NVGPRS(vcpu) \
-	ld	r14, VCPU_GPR(r14)(vcpu); \
-	ld	r15, VCPU_GPR(r15)(vcpu); \
-	ld	r16, VCPU_GPR(r16)(vcpu); \
-	ld	r17, VCPU_GPR(r17)(vcpu); \
-	ld	r18, VCPU_GPR(r18)(vcpu); \
-	ld	r19, VCPU_GPR(r19)(vcpu); \
-	ld	r20, VCPU_GPR(r20)(vcpu); \
-	ld	r21, VCPU_GPR(r21)(vcpu); \
-	ld	r22, VCPU_GPR(r22)(vcpu); \
-	ld	r23, VCPU_GPR(r23)(vcpu); \
-	ld	r24, VCPU_GPR(r24)(vcpu); \
-	ld	r25, VCPU_GPR(r25)(vcpu); \
-	ld	r26, VCPU_GPR(r26)(vcpu); \
-	ld	r27, VCPU_GPR(r27)(vcpu); \
-	ld	r28, VCPU_GPR(r28)(vcpu); \
-	ld	r29, VCPU_GPR(r29)(vcpu); \
-	ld	r30, VCPU_GPR(r30)(vcpu); \
-	ld	r31, VCPU_GPR(r31)(vcpu); \
-
-/*****************************************************************************
- *                                                                           *
- *     Guest entry / exit code that is in kernel module memory (highmem)     *
- *                                                                           *
- ****************************************************************************/
-
-/* Registers:
- *  r3: kvm_run pointer
- *  r4: vcpu pointer
- */
-_GLOBAL(__kvmppc_vcpu_entry)
-
-kvm_start_entry:
-	/* Write correct stack frame */
-	mflr    r0
-	std     r0,16(r1)
-
-	/* Save host state to the stack */
-	stdu	r1, -SWITCH_FRAME_SIZE(r1)
-
-	/* Save r3 (kvm_run) and r4 (vcpu) */
-	SAVE_2GPRS(3, r1)
-
-	/* Save non-volatile registers (r14 - r31) */
-	SAVE_NVGPRS(r1)
-
-	/* Save LR */
-	std	r0, _LINK(r1)
-
-	/* Load non-volatile guest state from the vcpu */
-	VCPU_LOAD_NVGPRS(r4)
-
-	/* Save R1/R2 in the PACA */
-	std	r1, PACA_KVM_HOST_R1(r13)
-	std	r2, PACA_KVM_HOST_R2(r13)
-
-	/* XXX swap in/out on load? */
-	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
-	std	r3, PACA_KVM_VMHANDLER(r13)
-
-kvm_start_lightweight:
-
-	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
-	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
-
-	/* Load some guest state in the respective registers */
-	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
-					/* will be swapped in by rmcall */
-
-	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
-	mtlr	r3			/* LR = r3 */
-
-	DISABLE_INTERRUPTS
-
-	/* Some guests may need to have dcbz set to 32 byte length.
-	 *
-	 * Usually we ensure that by patching the guest's instructions
-	 * to trap on dcbz and emulate it in the hypervisor.
-	 *
-	 * If we can, we should tell the CPU to use 32 byte dcbz though,
-	 * because that's a lot faster.
-	 */
-
-	ld	r3, VCPU_HFLAGS(r4)
-	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
-	beq	no_dcbz32_on
-
-	mfspr   r3,SPRN_HID5
-	ori     r3, r3, 0x80		/* XXX HID5_dcbz32 = 0x80 */
-	mtspr   SPRN_HID5,r3
-
-no_dcbz32_on:
-
-	ld	r6, VCPU_RMCALL(r4)
-	mtctr	r6
-
-	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
-	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
-
-	/* Jump to SLB patching handler and into our guest */
-	bctr
-
-/*
- * This is the handler in module memory. It gets jumped at from the
- * lowmem trampoline code, so it's basically the guest exit code.
- *
- */
-
-.global kvmppc_handler_highmem
-kvmppc_handler_highmem:
-
-	/*
-	 * Register usage at this point:
-	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
-	 *
-	 */
-
-	/* R7 = vcpu */
-	ld	r7, GPR4(r1)
-
-	/* Now save the guest state */
-
-	stw	r0, VCPU_LAST_INST(r7)
-
-	std	r3, VCPU_PC(r7)
-	std	r4, VCPU_SHADOW_SRR1(r7)
-	std	r5, VCPU_FAULT_DEAR(r7)
-	stw	r6, VCPU_FAULT_DSISR(r7)
-
-	ld	r5, VCPU_HFLAGS(r7)
-	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
-	beq	no_dcbz32_off
-
-	li	r4, 0
-	mfspr   r5,SPRN_HID5
-	rldimi  r5,r4,6,56
-	mtspr   SPRN_HID5,r5
-
-no_dcbz32_off:
-
-	std	r14, VCPU_GPR(r14)(r7)
-	std	r15, VCPU_GPR(r15)(r7)
-	std	r16, VCPU_GPR(r16)(r7)
-	std	r17, VCPU_GPR(r17)(r7)
-	std	r18, VCPU_GPR(r18)(r7)
-	std	r19, VCPU_GPR(r19)(r7)
-	std	r20, VCPU_GPR(r20)(r7)
-	std	r21, VCPU_GPR(r21)(r7)
-	std	r22, VCPU_GPR(r22)(r7)
-	std	r23, VCPU_GPR(r23)(r7)
-	std	r24, VCPU_GPR(r24)(r7)
-	std	r25, VCPU_GPR(r25)(r7)
-	std	r26, VCPU_GPR(r26)(r7)
-	std	r27, VCPU_GPR(r27)(r7)
-	std	r28, VCPU_GPR(r28)(r7)
-	std	r29, VCPU_GPR(r29)(r7)
-	std	r30, VCPU_GPR(r30)(r7)
-	std	r31, VCPU_GPR(r31)(r7)
-
-	/* Save guest CTR */
-	mfctr	r5
-	std	r5, VCPU_CTR(r7)
-
-	/* Save guest LR */
-	mflr	r5
-	std	r5, VCPU_LR(r7)
-
-	/* Restore host msr -> SRR1 */
-	ld	r6, VCPU_HOST_MSR(r7)
-
-	/*
-	 * For some interrupts, we need to call the real Linux
-	 * handler, so it can do work for us. This has to happen
-	 * as if the interrupt arrived from the kernel though,
-	 * so let's fake it here where most state is restored.
-	 *
-	 * Call Linux for hardware interrupts/decrementer
-	 * r3 = address of interrupt handler (exit reason)
-	 */
-
-	cmpwi	r12, BOOK3S_INTERRUPT_EXTERNAL
-	beq	call_linux_handler
-	cmpwi	r12, BOOK3S_INTERRUPT_DECREMENTER
-	beq	call_linux_handler
-
-	/* Back to EE=1 */
-	mtmsr	r6
-	b	kvm_return_point
-
-call_linux_handler:
-
-	/*
-	 * If we land here we need to jump back to the handler we
-	 * came from.
-	 *
-	 * We have a page that we can access from real mode, so let's
-	 * jump back to that and use it as a trampoline to get back into the
-	 * interrupt handler!
-	 *
-	 * R3 still contains the exit code,
-	 * R5 VCPU_HOST_RETIP and
-	 * R6 VCPU_HOST_MSR
-	 */
-
-	/* Restore host IP -> SRR0 */
-	ld	r5, VCPU_HOST_RETIP(r7)
-
-	/* XXX Better move to a safe function?
-	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
-
-	mtlr	r12
-
-	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
-	mtsrr0	r4
-	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
-	mtsrr1	r3
-
-	RFI
-
-.global kvm_return_point
-kvm_return_point:
-
-	/* Jump back to lightweight entry if we're supposed to */
-	/* go back into the guest */
-
-	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
-	mr	r5, r12
-
-	/* Restore r3 (kvm_run) and r4 (vcpu) */
-	REST_2GPRS(3, r1)
-	bl	KVMPPC_HANDLE_EXIT
-
-	/* If RESUME_GUEST, get back in the loop */
-	cmpwi	r3, RESUME_GUEST
-	beq	kvm_loop_lightweight
-
-	cmpwi	r3, RESUME_GUEST_NV
-	beq	kvm_loop_heavyweight
-
-kvm_exit_loop:
-
-	ld	r4, _LINK(r1)
-	mtlr	r4
-
-	/* Restore non-volatile host registers (r14 - r31) */
-	REST_NVGPRS(r1)
-
-	addi    r1, r1, SWITCH_FRAME_SIZE
-	blr
-
-kvm_loop_heavyweight:
-
-	ld	r4, _LINK(r1)
-	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
-
-	/* Load vcpu and cpu_run */
-	REST_2GPRS(3, r1)
-
-	/* Load non-volatile guest state from the vcpu */
-	VCPU_LOAD_NVGPRS(r4)
-
-	/* Jump back into the beginning of this function */
-	b	kvm_start_lightweight
-
-kvm_loop_lightweight:
-
-	/* We'll need the vcpu pointer */
-	REST_GPR(4, r1)
-
-	/* Jump back into the beginning of this function */
-	b	kvm_start_lightweight
-
diff --git a/arch/powerpc/kvm/book3s_64_rmhandlers.S b/arch/powerpc/kvm/book3s_64_rmhandlers.S
deleted file mode 100644
index bd08535..0000000
--- a/arch/powerpc/kvm/book3s_64_rmhandlers.S
+++ /dev/null
@@ -1,195 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
- *
- * Copyright SUSE Linux Products GmbH 2009
- *
- * Authors: Alexander Graf <agraf@suse.de>
- */
-
-#include <asm/ppc_asm.h>
-#include <asm/kvm_asm.h>
-#include <asm/reg.h>
-#include <asm/page.h>
-#include <asm/asm-offsets.h>
-#include <asm/exception-64s.h>
-
-/*****************************************************************************
- *                                                                           *
- *        Real Mode handlers that need to be in low physical memory          *
- *                                                                           *
- ****************************************************************************/
-
-
-.macro INTERRUPT_TRAMPOLINE intno
-
-.global kvmppc_trampoline_\intno
-kvmppc_trampoline_\intno:
-
-	mtspr	SPRN_SPRG_SCRATCH0, r13		/* Save r13 */
-
-	/*
-	 * First thing to do is to find out if we're coming
-	 * from a KVM guest or a Linux process.
-	 *
-	 * To distinguish, we check a magic byte in the PACA
-	 */
-	mfspr	r13, SPRN_SPRG_PACA		/* r13 = PACA */
-	std	r12, PACA_KVM_SCRATCH0(r13)
-	mfcr	r12
-	stw	r12, PACA_KVM_SCRATCH1(r13)
-	lbz	r12, PACA_KVM_IN_GUEST(r13)
-	cmpwi	r12, KVM_GUEST_MODE_NONE
-	bne	..kvmppc_handler_hasmagic_\intno
-	/* No KVM guest? Then jump back to the Linux handler! */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
-	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
-	mfspr	r13, SPRN_SPRG_SCRATCH0		/* r13 = original r13 */
-	b	kvmppc_resume_\intno		/* Get back original handler */
-
-	/* Now we know we're handling a KVM guest */
-..kvmppc_handler_hasmagic_\intno:
-
-	/* Should we just skip the faulting instruction? */
-	cmpwi	r12, KVM_GUEST_MODE_SKIP
-	beq	kvmppc_handler_skip_ins
-
-	/* Let's store which interrupt we're handling */
-	li	r12, \intno
-
-	/* Jump into the SLB exit code that goes to the highmem handler */
-	b	kvmppc_handler_trampoline_exit
-
-.endm
-
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSTEM_RESET
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_MACHINE_CHECK
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_EXTERNAL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALIGNMENT
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PROGRAM
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_FP_UNAVAIL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DECREMENTER
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSCALL
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_TRACE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PERFMON
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALTIVEC
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
-
-/*
- * Bring us back to the faulting code, but skip the
- * faulting instruction.
- *
- * This is a generic exit path from the interrupt
- * trampolines above.
- *
- * Input Registers:
- *
- * R12               = free
- * R13               = PACA
- * PACA.KVM.SCRATCH0 = guest R12
- * PACA.KVM.SCRATCH1 = guest CR
- * SPRG_SCRATCH0     = guest R13
- *
- */
-kvmppc_handler_skip_ins:
-
-	/* Patch the IP to the next instruction */
-	mfsrr0	r12
-	addi	r12, r12, 4
-	mtsrr0	r12
-
-	/* Clean up all state */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
-	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
-	mfspr	r13, SPRN_SPRG_SCRATCH0
-
-	/* And get back into the code */
-	RFI
-
-/*
- * This trampoline brings us back to a real mode handler
- *
- * Input Registers:
- *
- * R5 = SRR0
- * R6 = SRR1
- * LR = real-mode IP
- *
- */
-.global kvmppc_handler_lowmem_trampoline
-kvmppc_handler_lowmem_trampoline:
-
-	mtsrr0	r5
-	mtsrr1	r6
-	blr
-kvmppc_handler_lowmem_trampoline_end:
-
-/*
- * Call a function in real mode
- *
- * Input Registers:
- *
- * R3 = function
- * R4 = MSR
- * R5 = CTR
- *
- */
-_GLOBAL(kvmppc_rmcall)
-	mtmsr	r4		/* Disable relocation, so mtsrr
-				   doesn't get interrupted */
-	mtctr	r5
-	mtsrr0	r3
-	mtsrr1	r4
-	RFI
-
-/*
- * Activate current's external feature (FPU/Altivec/VSX)
- */
-#define define_load_up(what) 				\
-							\
-_GLOBAL(kvmppc_load_up_ ## what);			\
-	stdu	r1, -INT_FRAME_SIZE(r1);		\
-	mflr	r3;					\
-	std	r3, _LINK(r1);				\
-							\
-	bl	.load_up_ ## what;			\
-							\
-	ld	r3, _LINK(r1);				\
-	mtlr	r3;					\
-	addi	r1, r1, INT_FRAME_SIZE;			\
-	blr
-
-define_load_up(fpu)
-#ifdef CONFIG_ALTIVEC
-define_load_up(altivec)
-#endif
-#ifdef CONFIG_VSX
-define_load_up(vsx)
-#endif
-
-.global kvmppc_trampoline_lowmem
-kvmppc_trampoline_lowmem:
-	.long kvmppc_handler_lowmem_trampoline - _stext
-
-.global kvmppc_trampoline_enter
-kvmppc_trampoline_enter:
-	.long kvmppc_handler_trampoline_enter - _stext
-
-#include "book3s_64_slb.S"
-
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
new file mode 100644
index 0000000..8f50776
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -0,0 +1,566 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <asm/kvm_ppc.h>
+#include <asm/disassemble.h>
+#include <asm/kvm_book3s.h>
+#include <asm/reg.h>
+
+#define OP_19_XOP_RFID		18
+#define OP_19_XOP_RFI		50
+
+#define OP_31_XOP_MFMSR		83
+#define OP_31_XOP_MTMSR		146
+#define OP_31_XOP_MTMSRD	178
+#define OP_31_XOP_MTSR		210
+#define OP_31_XOP_MTSRIN	242
+#define OP_31_XOP_TLBIEL	274
+#define OP_31_XOP_TLBIE		306
+#define OP_31_XOP_SLBMTE	402
+#define OP_31_XOP_SLBIE		434
+#define OP_31_XOP_SLBIA		498
+#define OP_31_XOP_MFSR		595
+#define OP_31_XOP_MFSRIN	659
+#define OP_31_XOP_DCBA		758
+#define OP_31_XOP_SLBMFEV	851
+#define OP_31_XOP_EIOIO		854
+#define OP_31_XOP_SLBMFEE	915
+
+/* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
+#define OP_31_XOP_DCBZ		1010
+
+#define OP_LFS			48
+#define OP_LFD			50
+#define OP_STFS			52
+#define OP_STFD			54
+
+#define SPRN_GQR0		912
+#define SPRN_GQR1		913
+#define SPRN_GQR2		914
+#define SPRN_GQR3		915
+#define SPRN_GQR4		916
+#define SPRN_GQR5		917
+#define SPRN_GQR6		918
+#define SPRN_GQR7		919
+
+int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
+                           unsigned int inst, int *advance)
+{
+	int emulated = EMULATE_DONE;
+
+	switch (get_op(inst)) {
+	case 19:
+		switch (get_xop(inst)) {
+		case OP_19_XOP_RFID:
+		case OP_19_XOP_RFI:
+			vcpu->arch.pc = vcpu->arch.srr0;
+			kvmppc_set_msr(vcpu, vcpu->arch.srr1);
+			*advance = 0;
+			break;
+
+		default:
+			emulated = EMULATE_FAIL;
+			break;
+		}
+		break;
+	case 31:
+		switch (get_xop(inst)) {
+		case OP_31_XOP_MFMSR:
+			kvmppc_set_gpr(vcpu, get_rt(inst), vcpu->arch.msr);
+			break;
+		case OP_31_XOP_MTMSRD:
+		{
+			ulong rs = kvmppc_get_gpr(vcpu, get_rs(inst));
+			if (inst & 0x10000) {
+				vcpu->arch.msr &= ~(MSR_RI | MSR_EE);
+				vcpu->arch.msr |= rs & (MSR_RI | MSR_EE);
+			} else
+				kvmppc_set_msr(vcpu, rs);
+			break;
+		}
+		case OP_31_XOP_MTMSR:
+			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_MFSR:
+		{
+			int srnum;
+
+			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
+		case OP_31_XOP_MFSRIN:
+		{
+			int srnum;
+
+			srnum = (kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf;
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
+		case OP_31_XOP_MTSR:
+			vcpu->arch.mmu.mtsrin(vcpu,
+				(inst >> 16) & 0xf,
+				kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_MTSRIN:
+			vcpu->arch.mmu.mtsrin(vcpu,
+				(kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
+				kvmppc_get_gpr(vcpu, get_rs(inst)));
+			break;
+		case OP_31_XOP_TLBIE:
+		case OP_31_XOP_TLBIEL:
+		{
+			bool large = (inst & 0x00200000) ? true : false;
+			ulong addr = kvmppc_get_gpr(vcpu, get_rb(inst));
+			vcpu->arch.mmu.tlbie(vcpu, addr, large);
+			break;
+		}
+		case OP_31_XOP_EIOIO:
+			break;
+		case OP_31_XOP_SLBMTE:
+			if (!vcpu->arch.mmu.slbmte)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbmte(vcpu,
+					kvmppc_get_gpr(vcpu, get_rs(inst)),
+					kvmppc_get_gpr(vcpu, get_rb(inst)));
+			break;
+		case OP_31_XOP_SLBIE:
+			if (!vcpu->arch.mmu.slbie)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbie(vcpu,
+					kvmppc_get_gpr(vcpu, get_rb(inst)));
+			break;
+		case OP_31_XOP_SLBIA:
+			if (!vcpu->arch.mmu.slbia)
+				return EMULATE_FAIL;
+
+			vcpu->arch.mmu.slbia(vcpu);
+			break;
+		case OP_31_XOP_SLBMFEE:
+			if (!vcpu->arch.mmu.slbmfee) {
+				emulated = EMULATE_FAIL;
+			} else {
+				ulong t, rb;
+
+				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+				t = vcpu->arch.mmu.slbmfee(vcpu, rb);
+				kvmppc_set_gpr(vcpu, get_rt(inst), t);
+			}
+			break;
+		case OP_31_XOP_SLBMFEV:
+			if (!vcpu->arch.mmu.slbmfev) {
+				emulated = EMULATE_FAIL;
+			} else {
+				ulong t, rb;
+
+				rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+				t = vcpu->arch.mmu.slbmfev(vcpu, rb);
+				kvmppc_set_gpr(vcpu, get_rt(inst), t);
+			}
+			break;
+		case OP_31_XOP_DCBA:
+			/* Gets treated as NOP */
+			break;
+		case OP_31_XOP_DCBZ:
+		{
+			ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
+			ulong ra = 0;
+			ulong addr, vaddr;
+			u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+			u32 dsisr;
+			int r;
+
+			if (get_ra(inst))
+				ra = kvmppc_get_gpr(vcpu, get_ra(inst));
+
+			addr = (ra + rb) & ~31ULL;
+			if (!(vcpu->arch.msr & MSR_SF))
+				addr &= 0xffffffff;
+			vaddr = addr;
+
+			r = kvmppc_st(vcpu, &addr, 32, zeros, true);
+			if ((r == -ENOENT) || (r == -EPERM)) {
+				*advance = 0;
+				vcpu->arch.dear = vaddr;
+				vcpu->arch.fault_dear = vaddr;
+
+				dsisr = DSISR_ISSTORE;
+				if (r == -ENOENT)
+					dsisr |= DSISR_NOHPTE;
+				else if (r == -EPERM)
+					dsisr |= DSISR_PROTFAULT;
+
+				to_book3s(vcpu)->dsisr = dsisr;
+				vcpu->arch.fault_dsisr = dsisr;
+
+				kvmppc_book3s_queue_irqprio(vcpu,
+					BOOK3S_INTERRUPT_DATA_STORAGE);
+			}
+
+			break;
+		}
+		default:
+			emulated = EMULATE_FAIL;
+		}
+		break;
+	default:
+		emulated = EMULATE_FAIL;
+	}
+
+	if (emulated == EMULATE_FAIL)
+		emulated = kvmppc_emulate_paired_single(run, vcpu);
+
+	return emulated;
+}
+
+void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
+                    u32 val)
+{
+	if (upper) {
+		/* Upper BAT */
+		u32 bl = (val >> 2) & 0x7ff;
+		bat->bepi_mask = (~bl << 17);
+		bat->bepi = val & 0xfffe0000;
+		bat->vs = (val & 2) ? 1 : 0;
+		bat->vp = (val & 1) ? 1 : 0;
+		bat->raw = (bat->raw & 0xffffffff00000000ULL) | val;
+	} else {
+		/* Lower BAT */
+		bat->brpn = val & 0xfffe0000;
+		bat->wimg = (val >> 3) & 0xf;
+		bat->pp = val & 3;
+		bat->raw = (bat->raw & 0x00000000ffffffffULL) | ((u64)val << 32);
+	}
+}
+
+static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	if (sprn % 2)
+		return bat->raw >> 32;
+	else
+		return bat->raw;
+}
+
+static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
+}
+
+int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
+{
+	int emulated = EMULATE_DONE;
+	ulong spr_val = kvmppc_get_gpr(vcpu, rs);
+
+	switch (sprn) {
+	case SPRN_SDR1:
+		to_book3s(vcpu)->sdr1 = spr_val;
+		break;
+	case SPRN_DSISR:
+		to_book3s(vcpu)->dsisr = spr_val;
+		break;
+	case SPRN_DAR:
+		vcpu->arch.dear = spr_val;
+		break;
+	case SPRN_HIOR:
+		to_book3s(vcpu)->hior = spr_val;
+		break;
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
+		/* BAT writes happen so rarely that we're ok to flush
+		 * everything here */
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
+		break;
+	case SPRN_HID0:
+		to_book3s(vcpu)->hid[0] = spr_val;
+		break;
+	case SPRN_HID1:
+		to_book3s(vcpu)->hid[1] = spr_val;
+		break;
+	case SPRN_HID2:
+		to_book3s(vcpu)->hid[2] = spr_val;
+		break;
+	case SPRN_HID2_GEKKO:
+		to_book3s(vcpu)->hid[2] = spr_val;
+		/* HID2.PSE controls paired single on gekko */
+		switch (vcpu->arch.pvr) {
+		case 0x00080200:	/* lonestar 2.0 */
+		case 0x00088202:	/* lonestar 2.2 */
+		case 0x70000100:	/* gekko 1.0 */
+		case 0x00080100:	/* gekko 2.0 */
+		case 0x00083203:	/* gekko 2.3a */
+		case 0x00083213:	/* gekko 2.3b */
+		case 0x00083204:	/* gekko 2.4 */
+		case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
+			if (spr_val & (1 << 29)) { /* HID2.PSE */
+				vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
+				kvmppc_giveup_ext(vcpu, MSR_FP);
+			} else {
+				vcpu->arch.hflags &= ~BOOK3S_HFLAG_PAIRED_SINGLE;
+			}
+			break;
+		}
+		break;
+	case SPRN_HID4:
+	case SPRN_HID4_GEKKO:
+		to_book3s(vcpu)->hid[4] = spr_val;
+		break;
+	case SPRN_HID5:
+		to_book3s(vcpu)->hid[5] = spr_val;
+		/* guest HID5 set can change is_dcbz32 */
+		if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
+		    (mfmsr() & MSR_HV))
+			vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
+		break;
+	case SPRN_GQR0:
+	case SPRN_GQR1:
+	case SPRN_GQR2:
+	case SPRN_GQR3:
+	case SPRN_GQR4:
+	case SPRN_GQR5:
+	case SPRN_GQR6:
+	case SPRN_GQR7:
+		to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
+		break;
+	case SPRN_ICTC:
+	case SPRN_THRM1:
+	case SPRN_THRM2:
+	case SPRN_THRM3:
+	case SPRN_CTRLF:
+	case SPRN_CTRLT:
+	case SPRN_L2CR:
+	case SPRN_MMCR0_GEKKO:
+	case SPRN_MMCR1_GEKKO:
+	case SPRN_PMC1_GEKKO:
+	case SPRN_PMC2_GEKKO:
+	case SPRN_PMC3_GEKKO:
+	case SPRN_PMC4_GEKKO:
+	case SPRN_WPAR_GEKKO:
+		break;
+	default:
+		printk(KERN_INFO "KVM: invalid SPR write: %d\n", sprn);
+#ifndef DEBUG_SPR
+		emulated = EMULATE_FAIL;
+#endif
+		break;
+	}
+
+	return emulated;
+}
+
+int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
+{
+	int emulated = EMULATE_DONE;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+		break;
+	case SPRN_SDR1:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
+		break;
+	case SPRN_DSISR:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->dsisr);
+		break;
+	case SPRN_DAR:
+		kvmppc_set_gpr(vcpu, rt, vcpu->arch.dear);
+		break;
+	case SPRN_HIOR:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hior);
+		break;
+	case SPRN_HID0:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[0]);
+		break;
+	case SPRN_HID1:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
+		break;
+	case SPRN_HID2:
+	case SPRN_HID2_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
+		break;
+	case SPRN_HID4:
+	case SPRN_HID4_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
+		break;
+	case SPRN_HID5:
+		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
+		break;
+	case SPRN_GQR0:
+	case SPRN_GQR1:
+	case SPRN_GQR2:
+	case SPRN_GQR3:
+	case SPRN_GQR4:
+	case SPRN_GQR5:
+	case SPRN_GQR6:
+	case SPRN_GQR7:
+		kvmppc_set_gpr(vcpu, rt,
+			       to_book3s(vcpu)->gqr[sprn - SPRN_GQR0]);
+		break;
+	case SPRN_THRM1:
+	case SPRN_THRM2:
+	case SPRN_THRM3:
+	case SPRN_CTRLF:
+	case SPRN_CTRLT:
+	case SPRN_L2CR:
+	case SPRN_MMCR0_GEKKO:
+	case SPRN_MMCR1_GEKKO:
+	case SPRN_PMC1_GEKKO:
+	case SPRN_PMC2_GEKKO:
+	case SPRN_PMC3_GEKKO:
+	case SPRN_PMC4_GEKKO:
+	case SPRN_WPAR_GEKKO:
+		kvmppc_set_gpr(vcpu, rt, 0);
+		break;
+	default:
+		printk(KERN_INFO "KVM: invalid SPR read: %d\n", sprn);
+#ifndef DEBUG_SPR
+		emulated = EMULATE_FAIL;
+#endif
+		break;
+	}
+
+	return emulated;
+}
+
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+	case OP_LFD:
+	case OP_STFD:
+	case OP_STFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
+
+ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	ulong dar = 0;
+	ulong ra;
+
+	switch (get_op(inst)) {
+	case OP_LFS:
+	case OP_LFD:
+	case OP_STFD:
+	case OP_STFS:
+		ra = get_ra(inst);
+		if (ra)
+			dar = kvmppc_get_gpr(vcpu, ra);
+		dar += (s32)((s16)inst);
+		break;
+	case 31:
+		ra = get_ra(inst);
+		if (ra)
+			dar = kvmppc_get_gpr(vcpu, ra);
+		dar += kvmppc_get_gpr(vcpu, get_rb(inst));
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	return dar;
+}
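
For a concrete feel of the D-form case just above, here is a small standalone
sketch (instruction value made up; only the sign-extension arithmetic mirrors
kvmppc_alignment_dar(), and the helper name is illustrative):

	#include <stdint.h>

	/* The low 16 bits of a D-form instruction are a signed displacement
	 * added to (RA); (int16_t) truncates, (int32_t) sign-extends, just
	 * like the (s32)((s16)inst) cast in the patch. */
	static uint64_t dform_dar(uint64_t ra_val, uint32_t inst)
	{
		return ra_val + (int32_t)(int16_t)inst;
	}

	/* e.g. dform_dar(0x1000, 0xfff8) == 0xff8, i.e. RA - 8 */
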
diff --git a/arch/powerpc/kvm/book3s_exports.c b/arch/powerpc/kvm/book3s_exports.c
new file mode 100644
index 0000000..1dd5a1d
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_exports.c
@@ -0,0 +1,32 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <linux/module.h>
+#include <asm/kvm_book3s.h>
+
+EXPORT_SYMBOL_GPL(kvmppc_trampoline_enter);
+EXPORT_SYMBOL_GPL(kvmppc_trampoline_lowmem);
+EXPORT_SYMBOL_GPL(kvmppc_rmcall);
+EXPORT_SYMBOL_GPL(kvmppc_load_up_fpu);
+#ifdef CONFIG_ALTIVEC
+EXPORT_SYMBOL_GPL(kvmppc_load_up_altivec);
+#endif
+#ifdef CONFIG_VSX
+EXPORT_SYMBOL_GPL(kvmppc_load_up_vsx);
+#endif
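
These exports make the low-memory symbols visible to the KVM module code; a
consumer simply declares them extern. A minimal hedged sketch (the real
declarations live in the KVM headers; the prototype is assumed):

	extern u32 kvmppc_trampoline_enter;	/* .long offsets emitted in */
	extern u32 kvmppc_trampoline_lowmem;	/* book3s_rmhandlers.S */
	extern void kvmppc_load_up_fpu(void);	/* assumed prototype */
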
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
new file mode 100644
index 0000000..faca876
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -0,0 +1,318 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/kvm_asm.h>
+#include <asm/reg.h>
+#include <asm/page.h>
+#include <asm/asm-offsets.h>
+#include <asm/exception-64s.h>
+
+#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
+#define ULONG_SIZE 8
+#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
+
+.macro DISABLE_INTERRUPTS
+       mfmsr   r0
+       rldicl  r0,r0,48,1
+       rotldi  r0,r0,16
+       mtmsrd  r0,1
+.endm
+
+#define VCPU_LOAD_NVGPRS(vcpu) \
+	ld	r14, VCPU_GPR(r14)(vcpu); \
+	ld	r15, VCPU_GPR(r15)(vcpu); \
+	ld	r16, VCPU_GPR(r16)(vcpu); \
+	ld	r17, VCPU_GPR(r17)(vcpu); \
+	ld	r18, VCPU_GPR(r18)(vcpu); \
+	ld	r19, VCPU_GPR(r19)(vcpu); \
+	ld	r20, VCPU_GPR(r20)(vcpu); \
+	ld	r21, VCPU_GPR(r21)(vcpu); \
+	ld	r22, VCPU_GPR(r22)(vcpu); \
+	ld	r23, VCPU_GPR(r23)(vcpu); \
+	ld	r24, VCPU_GPR(r24)(vcpu); \
+	ld	r25, VCPU_GPR(r25)(vcpu); \
+	ld	r26, VCPU_GPR(r26)(vcpu); \
+	ld	r27, VCPU_GPR(r27)(vcpu); \
+	ld	r28, VCPU_GPR(r28)(vcpu); \
+	ld	r29, VCPU_GPR(r29)(vcpu); \
+	ld	r30, VCPU_GPR(r30)(vcpu); \
+	ld	r31, VCPU_GPR(r31)(vcpu); \
+
+/*****************************************************************************
+ *                                                                           *
+ *     Guest entry / exit code that is in kernel module memory (highmem)     *
+ *                                                                           *
+ ****************************************************************************/
+
+/* Registers:
+ *  r3: kvm_run pointer
+ *  r4: vcpu pointer
+ */
+_GLOBAL(__kvmppc_vcpu_entry)
+
+kvm_start_entry:
+	/* Write correct stack frame */
+	mflr    r0
+	std     r0,16(r1)
+
+	/* Save host state to the stack */
+	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+
+	/* Save r3 (kvm_run) and r4 (vcpu) */
+	SAVE_2GPRS(3, r1)
+
+	/* Save non-volatile registers (r14 - r31) */
+	SAVE_NVGPRS(r1)
+
+	/* Save LR */
+	std	r0, _LINK(r1)
+
+	/* Load non-volatile guest state from the vcpu */
+	VCPU_LOAD_NVGPRS(r4)
+
+	/* Save R1/R2 in the PACA */
+	std	r1, PACA_KVM_HOST_R1(r13)
+	std	r2, PACA_KVM_HOST_R2(r13)
+
+	/* XXX swap in/out on load? */
+	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
+	std	r3, PACA_KVM_VMHANDLER(r13)
+
+kvm_start_lightweight:
+
+	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
+	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
+
+	/* Load some guest state in the respective registers */
+	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
+					/* will be swapped in by rmcall */
+
+	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
+	mtlr	r3			/* LR = r3 */
+
+	DISABLE_INTERRUPTS
+
+	/* Some guests may need to have dcbz set to 32 byte length.
+	 *
+	 * Usually we ensure that by patching the guest's instructions
+	 * to trap on dcbz and emulate it in the hypervisor.
+	 *
+	 * If we can, we should tell the CPU to use 32 byte dcbz though,
+	 * because that's a lot faster.
+	 */
+
+	ld	r3, VCPU_HFLAGS(r4)
+	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
+	beq	no_dcbz32_on
+
+	mfspr   r3,SPRN_HID5
+	ori     r3, r3, 0x80		/* XXX HID5_dcbz32 = 0x80 */
+	mtspr   SPRN_HID5,r3
+
+no_dcbz32_on:
+
+	ld	r6, VCPU_RMCALL(r4)
+	mtctr	r6
+
+	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
+	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
+
+	/* Jump to SLB patching handler and into our guest */
+	bctr
+
+/*
+ * This is the handler in module memory. It gets jumped at from the
+ * lowmem trampoline code, so it's basically the guest exit code.
+ *
+ */
+
+.global kvmppc_handler_highmem
+kvmppc_handler_highmem:
+
+	/*
+	 * Register usage at this point:
+	 *
+	 * R0         = guest last inst
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R3         = guest PC
+	 * R4         = guest MSR
+	 * R5         = guest DAR
+	 * R6         = guest DSISR
+	 * R13        = PACA
+	 * PACA.KVM.* = guest *
+	 *
+	 */
+
+	/* R7 = vcpu */
+	ld	r7, GPR4(r1)
+
+	/* Now save the guest state */
+
+	stw	r0, VCPU_LAST_INST(r7)
+
+	std	r3, VCPU_PC(r7)
+	std	r4, VCPU_SHADOW_SRR1(r7)
+	std	r5, VCPU_FAULT_DEAR(r7)
+	stw	r6, VCPU_FAULT_DSISR(r7)
+
+	ld	r5, VCPU_HFLAGS(r7)
+	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
+	beq	no_dcbz32_off
+
+	li	r4, 0
+	mfspr   r5,SPRN_HID5
+	rldimi  r5,r4,6,56
+	mtspr   SPRN_HID5,r5
+
+no_dcbz32_off:
+
+	std	r14, VCPU_GPR(r14)(r7)
+	std	r15, VCPU_GPR(r15)(r7)
+	std	r16, VCPU_GPR(r16)(r7)
+	std	r17, VCPU_GPR(r17)(r7)
+	std	r18, VCPU_GPR(r18)(r7)
+	std	r19, VCPU_GPR(r19)(r7)
+	std	r20, VCPU_GPR(r20)(r7)
+	std	r21, VCPU_GPR(r21)(r7)
+	std	r22, VCPU_GPR(r22)(r7)
+	std	r23, VCPU_GPR(r23)(r7)
+	std	r24, VCPU_GPR(r24)(r7)
+	std	r25, VCPU_GPR(r25)(r7)
+	std	r26, VCPU_GPR(r26)(r7)
+	std	r27, VCPU_GPR(r27)(r7)
+	std	r28, VCPU_GPR(r28)(r7)
+	std	r29, VCPU_GPR(r29)(r7)
+	std	r30, VCPU_GPR(r30)(r7)
+	std	r31, VCPU_GPR(r31)(r7)
+
+	/* Save guest CTR */
+	mfctr	r5
+	std	r5, VCPU_CTR(r7)
+
+	/* Save guest LR */
+	mflr	r5
+	std	r5, VCPU_LR(r7)
+
+	/* Restore host msr -> SRR1 */
+	ld	r6, VCPU_HOST_MSR(r7)
+
+	/*
+	 * For some interrupts, we need to call the real Linux
+	 * handler, so it can do work for us. This has to happen
+	 * as if the interrupt arrived from the kernel though,
+	 * so let's fake it here where most state is restored.
+	 *
+	 * Call Linux for hardware interrupts/decrementer
+	 * r3 = address of interrupt handler (exit reason)
+	 */
+
+	cmpwi	r12, BOOK3S_INTERRUPT_EXTERNAL
+	beq	call_linux_handler
+	cmpwi	r12, BOOK3S_INTERRUPT_DECREMENTER
+	beq	call_linux_handler
+
+	/* Back to EE=1 */
+	mtmsr	r6
+	b	kvm_return_point
+
+call_linux_handler:
+
+	/*
+	 * If we land here we need to jump back to the handler we
+	 * came from.
+	 *
+	 * We have a page that we can access from real mode, so let's
+	 * jump back to that and use it as a trampoline to get back into the
+	 * interrupt handler!
+	 *
+	 * R3 still contains the exit code,
+	 * R5 VCPU_HOST_RETIP and
+	 * R6 VCPU_HOST_MSR
+	 */
+
+	/* Restore host IP -> SRR0 */
+	ld	r5, VCPU_HOST_RETIP(r7)
+
+	/* XXX Better move to a safe function?
+	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
+
+	mtlr	r12
+
+	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
+	mtsrr0	r4
+	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
+	mtsrr1	r3
+
+	RFI
+
+.global kvm_return_point
+kvm_return_point:
+
+	/* Jump back to lightweight entry if we're supposed to */
+	/* go back into the guest */
+
+	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
+	mr	r5, r12
+
+	/* Restore r3 (kvm_run) and r4 (vcpu) */
+	REST_2GPRS(3, r1)
+	bl	KVMPPC_HANDLE_EXIT
+
+	/* If RESUME_GUEST, get back in the loop */
+	cmpwi	r3, RESUME_GUEST
+	beq	kvm_loop_lightweight
+
+	cmpwi	r3, RESUME_GUEST_NV
+	beq	kvm_loop_heavyweight
+
+kvm_exit_loop:
+
+	ld	r4, _LINK(r1)
+	mtlr	r4
+
+	/* Restore non-volatile host registers (r14 - r31) */
+	REST_NVGPRS(r1)
+
+	addi    r1, r1, SWITCH_FRAME_SIZE
+	blr
+
+kvm_loop_heavyweight:
+
+	ld	r4, _LINK(r1)
+	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
+
+	/* Load vcpu and cpu_run */
+	REST_2GPRS(3, r1)
+
+	/* Load non-volatile guest state from the vcpu */
+	VCPU_LOAD_NVGPRS(r4)
+
+	/* Jump back into the beginning of this function */
+	b	kvm_start_lightweight
+
+kvm_loop_lightweight:
+
+	/* We'll need the vcpu pointer */
+	REST_GPR(4, r1)
+
+	/* Jump back into the beginning of this function */
+	b	kvm_start_lightweight
+
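
Read as C-like pseudocode (a sketch for orientation only, not code from this
patch), the exit path above amounts to:

	/* kvmppc_handle_exit(), RESUME_GUEST and RESUME_GUEST_NV are the
	 * kernel's; the loop itself is an illustrative restatement. */
	int r;
	do {
		r = kvmppc_handle_exit(run, vcpu, exit_nr);
		/* RESUME_GUEST:    re-enter, non-volatile GPRs still loaded
		 * RESUME_GUEST_NV: reload non-volatile guest GPRs first */
	} while (r == RESUME_GUEST || r == RESUME_GUEST_NV);
	/* anything else: restore host NV GPRs, pop the frame, return */
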
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
new file mode 100644
index 0000000..bd08535
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -0,0 +1,195 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/kvm_asm.h>
+#include <asm/reg.h>
+#include <asm/page.h>
+#include <asm/asm-offsets.h>
+#include <asm/exception-64s.h>
+
+/*****************************************************************************
+ *                                                                           *
+ *        Real Mode handlers that need to be in low physical memory          *
+ *                                                                           *
+ ****************************************************************************/
+
+
+.macro INTERRUPT_TRAMPOLINE intno
+
+.global kvmppc_trampoline_\intno
+kvmppc_trampoline_\intno:
+
+	mtspr	SPRN_SPRG_SCRATCH0, r13		/* Save r13 */
+
+	/*
+	 * First thing to do is to find out if we're coming
+	 * from a KVM guest or a Linux process.
+	 *
+	 * To distinguish, we check a magic byte in the PACA
+	 */
+	mfspr	r13, SPRN_SPRG_PACA		/* r13 = PACA */
+	std	r12, PACA_KVM_SCRATCH0(r13)
+	mfcr	r12
+	stw	r12, PACA_KVM_SCRATCH1(r13)
+	lbz	r12, PACA_KVM_IN_GUEST(r13)
+	cmpwi	r12, KVM_GUEST_MODE_NONE
+	bne	..kvmppc_handler_hasmagic_\intno
+	/* No KVM guest? Then jump back to the Linux handler! */
+	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	mtcr	r12
+	ld	r12, PACA_KVM_SCRATCH0(r13)
+	mfspr	r13, SPRN_SPRG_SCRATCH0		/* r13 = original r13 */
+	b	kvmppc_resume_\intno		/* Get back original handler */
+
+	/* Now we know we're handling a KVM guest */
+..kvmppc_handler_hasmagic_\intno:
+
+	/* Should we just skip the faulting instruction? */
+	cmpwi	r12, KVM_GUEST_MODE_SKIP
+	beq	kvmppc_handler_skip_ins
+
+	/* Let's store which interrupt we're handling */
+	li	r12, \intno
+
+	/* Jump into the SLB exit code that goes to the highmem handler */
+	b	kvmppc_handler_trampoline_exit
+
+.endm
+
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSTEM_RESET
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_MACHINE_CHECK
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_STORAGE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_STORAGE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_EXTERNAL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALIGNMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PROGRAM
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_FP_UNAVAIL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DECREMENTER
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSCALL
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_TRACE
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PERFMON
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALTIVEC
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
+
+/*
+ * Bring us back to the faulting code, but skip the
+ * faulting instruction.
+ *
+ * This is a generic exit path from the interrupt
+ * trampolines above.
+ *
+ * Input Registers:
+ *
+ * R12               = free
+ * R13               = PACA
+ * PACA.KVM.SCRATCH0 = guest R12
+ * PACA.KVM.SCRATCH1 = guest CR
+ * SPRG_SCRATCH0     = guest R13
+ *
+ */
+kvmppc_handler_skip_ins:
+
+	/* Patch the IP to the next instruction */
+	mfsrr0	r12
+	addi	r12, r12, 4
+	mtsrr0	r12
+
+	/* Clean up all state */
+	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	mtcr	r12
+	ld	r12, PACA_KVM_SCRATCH0(r13)
+	mfspr	r13, SPRN_SPRG_SCRATCH0
+
+	/* And get back into the code */
+	RFI
+
+/*
+ * This trampoline brings us back to a real mode handler
+ *
+ * Input Registers:
+ *
+ * R5 = SRR0
+ * R6 = SRR1
+ * LR = real-mode IP
+ *
+ */
+.global kvmppc_handler_lowmem_trampoline
+kvmppc_handler_lowmem_trampoline:
+
+	mtsrr0	r5
+	mtsrr1	r6
+	blr
+kvmppc_handler_lowmem_trampoline_end:
+
+/*
+ * Call a function in real mode
+ *
+ * Input Registers:
+ *
+ * R3 = function
+ * R4 = MSR
+ * R5 = CTR
+ *
+ */
+_GLOBAL(kvmppc_rmcall)
+	mtmsr	r4		/* Disable relocation, so mtsrr
+				   doesn't get interrupted */
+	mtctr	r5
+	mtsrr0	r3
+	mtsrr1	r4
+	RFI
+
+/*
+ * Activate current's external feature (FPU/Altivec/VSX)
+ */
+#define define_load_up(what) 				\
+							\
+_GLOBAL(kvmppc_load_up_ ## what);			\
+	stdu	r1, -INT_FRAME_SIZE(r1);		\
+	mflr	r3;					\
+	std	r3, _LINK(r1);				\
+							\
+	bl	.load_up_ ## what;			\
+							\
+	ld	r3, _LINK(r1);				\
+	mtlr	r3;					\
+	addi	r1, r1, INT_FRAME_SIZE;			\
+	blr
+
+define_load_up(fpu)
+#ifdef CONFIG_ALTIVEC
+define_load_up(altivec)
+#endif
+#ifdef CONFIG_VSX
+define_load_up(vsx)
+#endif
+
+.global kvmppc_trampoline_lowmem
+kvmppc_trampoline_lowmem:
+	.long kvmppc_handler_lowmem_trampoline - _stext
+
+.global kvmppc_trampoline_enter
+kvmppc_trampoline_enter:
+	.long kvmppc_handler_trampoline_enter - _stext
+
+#include "book3s_64_slb.S"
+
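
The two .long words above publish the trampolines as offsets from _stext
rather than as addresses; with relocation off and the kernel text loaded at
physical address 0, as this code appears to assume, the offset doubles as a
real-mode entry point. A hedged sketch of the consumer side (mirroring the
VCPU_TRAMPOLINE_LOWMEM slot the exit path reads before its mtsrr0):

	extern u32 kvmppc_trampoline_lowmem;	/* the .long offset above */

	vcpu->arch.trampoline_lowmem = kvmppc_trampoline_lowmem;
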
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 02/27] KVM: PPC: Add host MMU Support
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc@vger.kernel.org; +Cc: kvm@vger.kernel.org

In order to support 32 bit Book3S, we need to add code to enable our
shadow MMU to actually add shadow PTEs. This is the module enabling
that support.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_32_mmu_host.c |  480 +++++++++++++++++++++++++++++++++
 1 files changed, 480 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_32_mmu_host.c

diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
new file mode 100644
index 0000000..ce1bfb1
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -0,0 +1,480 @@
+/*
+ * Copyright (C) 2010 SUSE Linux Products GmbH. All rights reserved.
+ *
+ * Authors:
+ *     Alexander Graf <agraf@suse.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s.h>
+#include <asm/mmu-hash32.h>
+#include <asm/machdep.h>
+#include <asm/mmu_context.h>
+#include <asm/hw_irq.h>
+
+/* #define DEBUG_MMU */
+/* #define DEBUG_SR */
+
+#ifdef DEBUG_MMU
+#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
+#else
+#define dprintk_mmu(a, ...) do { } while(0)
+#endif
+
+#ifdef DEBUG_SR
+#define dprintk_sr(a, ...) printk(KERN_INFO a, __VA_ARGS__)
+#else
+#define dprintk_sr(a, ...) do { } while(0)
+#endif
+
+#if PAGE_SHIFT != 12
+#error Unknown page size
+#endif
+
+#ifdef CONFIG_SMP
+#error XXX need to grab mmu_hash_lock
+#endif
+
+#ifdef CONFIG_PTE_64BIT
+#error Only 32 bit pages are supported for now
+#endif
+
+static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
+{
+	volatile u32 *pteg;
+
+	dprintk_mmu("KVM: Flushing SPTE: 0x%llx (0x%llx) -> 0x%llx\n",
+		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
+
+	pteg = (u32*)pte->slot;
+
+	pteg[0] = 0;
+	asm volatile ("sync");
+	asm volatile ("tlbie %0" : : "r" (pte->pte.eaddr) : "memory");
+	asm volatile ("sync");
+	asm volatile ("tlbsync");
+
+	pte->host_va = 0;
+
+	if (pte->pte.may_write)
+		kvm_release_pfn_dirty(pte->pfn);
+	else
+		kvm_release_pfn_clean(pte->pfn);
+}
+
+void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 _guest_ea, u64 _ea_mask)
+{
+	int i;
+	u32 guest_ea = _guest_ea;
+	u32 ea_mask = _ea_mask;
+
+	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%x & 0x%x\n",
+		    vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	guest_ea &= ea_mask;
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.eaddr & ea_mask) == guest_ea) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+
+	/* Doing a complete flush -> start from scratch */
+	if (!ea_mask)
+		vcpu->arch.hpte_cache_offset = 0;
+}
+
+void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
+{
+	int i;
+
+	dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
+		    vcpu->arch.hpte_cache_offset, guest_vp, vp_mask);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	guest_vp &= vp_mask;
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.vpage & vp_mask) == guest_vp) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+}
+
+void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end)
+{
+	int i;
+
+	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
+		    vcpu->arch.hpte_cache_offset, pa_start, pa_end);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.raddr >= pa_start) &&
+		    (pte->pte.raddr < pa_end)) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+}
+
+struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data)
+{
+	int i;
+	u64 guest_vp;
+
+	guest_vp = vcpu->arch.mmu.ea_to_vp(vcpu, ea, false);
+	for (i=0; i<vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if (pte->pte.vpage == guest_vp)
+			return &pte->pte;
+	}
+
+	return NULL;
+}
+
+static int kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.hpte_cache_offset == HPTEG_CACHE_NUM)
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+
+	return vcpu->arch.hpte_cache_offset++;
+}
+
+/* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using
+ * a hash, so we don't waste cycles on looping */
+static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
+}
+
+
+static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	struct kvmppc_sid_map *map;
+	u16 sid_map_mask;
+
+	if (vcpu->arch.msr & MSR_PR)
+		gvsid |= VSID_PR;
+
+	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
+	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
+	if (map->guest_vsid == gvsid) {
+		dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
+			    gvsid, map->host_vsid);
+		return map;
+	}
+
+	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
+	if (map->guest_vsid == gvsid) {
+		dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
+			    gvsid, map->host_vsid);
+		return map;
+	}
+
+	dprintk_sr("SR: Searching 0x%llx -> not found\n", gvsid);
+	return NULL;
+}
+
+extern struct hash_pte *Hash;
+extern unsigned long _SDR1;
+
+static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
+				bool primary)
+{
+	u32 page, hash, htabmask;
+	ulong pteg = (ulong)Hash;
+
+	page = (eaddr & ~ESID_MASK) >> 12;
+
+	hash = ((vsid ^ page) << 6);
+	if (!primary)
+		hash = ~hash;
+
+	htabmask = ((_SDR1 & 0x1FF) << 16) | 0xFFC0;
+	hash &= htabmask;
+
+	pteg |= hash;
+
+	dprintk_mmu("htab: %p | hash: %x | htabmask: %x | pteg: %lx\n",
+		Hash, hash, htabmask, pteg);
+
+	return (u32*)pteg;
+}
+
+extern char etext[];
+
+int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
+{
+	pfn_t hpaddr;
+	u64 va;
+	u64 vsid;
+	struct kvmppc_sid_map *map;
+	volatile u32 *pteg;
+	u32 eaddr = orig_pte->eaddr;
+	u32 pteg0, pteg1;
+	register int rr = 0;
+	bool primary = false;
+	bool evict = false;
+	int hpte_id;
+	struct hpte_cache *pte;
+
+	/* Get host physical address for gpa */
+	hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
+	if (kvm_is_error_hva(hpaddr)) {
+		printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n",
+				 orig_pte->eaddr);
+		return -EINVAL;
+	}
+	hpaddr <<= PAGE_SHIFT;
+
+	/* and write the mapping ea -> hpa into the pt */
+	vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
+	map = find_sid_vsid(vcpu, vsid);
+	if (!map) {
+		kvmppc_mmu_map_segment(vcpu, eaddr);
+		map = find_sid_vsid(vcpu, vsid);
+	}
+	BUG_ON(!map);
+
+	vsid = map->host_vsid;
+	va = (vsid << SID_SHIFT) | (eaddr & ~ESID_MASK);
+
+next_pteg:
+	if (rr == 16) {
+		primary = !primary;
+		evict = true;
+		rr = 0;
+	}
+
+	pteg = kvmppc_mmu_get_pteg(vcpu, vsid, eaddr, primary);
+
+	/* not evicting yet */
+	if (!evict && (pteg[rr] & PTE_V)) {
+		rr += 2;
+		goto next_pteg;
+	}
+
+	dprintk_mmu("KVM: old PTEG: %p (%d)\n", pteg, rr);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[0], pteg[1]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[2], pteg[3]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[4], pteg[5]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[6], pteg[7]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[8], pteg[9]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[10], pteg[11]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[12], pteg[13]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[14], pteg[15]);
+
+	pteg0 = ((eaddr & 0x0fffffff) >> 22) | (vsid << 7) | PTE_V |
+		(primary ? 0 : PTE_SEC);
+	pteg1 = hpaddr | PTE_M | PTE_R | PTE_C;
+
+	if (orig_pte->may_write) {
+		pteg1 |= PP_RWRW;
+		mark_page_dirty(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
+	} else {
+		pteg1 |= PP_RWRX;
+	}
+
+	local_irq_disable();
+
+	if (pteg[rr]) {
+		pteg[rr] = 0;
+		asm volatile ("sync");
+	}
+	pteg[rr + 1] = pteg1;
+	pteg[rr] = pteg0;
+	asm volatile ("sync");
+
+	local_irq_enable();
+
+	dprintk_mmu("KVM: new PTEG: %p\n", pteg);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[0], pteg[1]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[2], pteg[3]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[4], pteg[5]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[6], pteg[7]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[8], pteg[9]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[10], pteg[11]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[12], pteg[13]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[14], pteg[15]);
+
+
+	/* Now tell our Shadow PTE code about the new page */
+
+	hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
+	pte = &vcpu->arch.hpte_cache[hpte_id];
+
+	dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%llx (0x%llx) -> %lx\n",
+		    orig_pte->may_write ? 'w' : '-',
+		    orig_pte->may_execute ? 'x' : '-',
+		    orig_pte->eaddr, (ulong)pteg, va,
+		    orig_pte->vpage, hpaddr);
+
+	pte->slot = (ulong)&pteg[rr];
+	pte->host_va = va;
+	pte->pte = *orig_pte;
+	pte->pfn = hpaddr >> PAGE_SHIFT;
+
+	return 0;
+}
+
+static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	struct kvmppc_sid_map *map;
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	u16 sid_map_mask;
+	static int backwards_map = 0;
+
+	if (vcpu->arch.msr & MSR_PR)
+		gvsid |= VSID_PR;
+
+	/* We might get collisions that trap in preceding order, so let's
+	   map them differently */
+
+	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
+	if (backwards_map)
+		sid_map_mask = SID_MAP_MASK - sid_map_mask;
+
+	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
+
+	/* Make sure we're taking the other map next time */
+	backwards_map = !backwards_map;
+
+	/* Uh-oh ... out of mappings. Let's flush! */
+	if (vcpu_book3s->vsid_next >= vcpu_book3s->vsid_max) {
+		vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
+		memset(vcpu_book3s->sid_map, 0,
+		       sizeof(struct kvmppc_sid_map) * SID_MAP_NUM);
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
+	}
+	map->host_vsid = vcpu_book3s->vsid_next;
+
+	/* Would have to be 111 to be completely aligned with the rest of
+	   Linux, but that is just way too little space! */
+	vcpu_book3s->vsid_next+=1;
+
+	map->guest_vsid = gvsid;
+	map->valid = true;
+
+	return map;
+}
+
+int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
+{
+	u32 esid = eaddr >> SID_SHIFT;
+	u64 gvsid;
+	u32 sr;
+	struct kvmppc_sid_map *map;
+	struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
+
+	if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
+		/* Invalidate an entry */
+		svcpu->sr[esid] = SR_INVALID;
+		return -ENOENT;
+	}
+
+	map = find_sid_vsid(vcpu, gvsid);
+	if (!map)
+		map = create_sid_map(vcpu, gvsid);
+
+	map->guest_esid = esid;
+	sr = map->host_vsid | SR_KP;
+	svcpu->sr[esid] = sr;
+
+	dprintk_sr("MMU: mtsr %d, 0x%x\n", esid, sr);
+
+	return 0;
+}
+
+void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
+
+	dprintk_sr("MMU: flushing all segments (%d)\n", ARRAY_SIZE(svcpu->sr));
+	for (i = 0; i < ARRAY_SIZE(svcpu->sr); i++)
+		svcpu->sr[i] = SR_INVALID;
+}
+
+void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
+{
+	kvmppc_mmu_pte_flush(vcpu, 0, 0);
+	preempt_disable();
+	__destroy_context(to_book3s(vcpu)->context_id);
+	preempt_enable();
+}
+
+/* From mm/mmu_context_hash32.c */
+#define CTX_TO_VSID(ctx) (((ctx) * (897 * 16)) & 0xffffff)
+
+int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
+{
+	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
+	int err;
+
+	err = __init_new_context();
+	if (err < 0)
+		return -1;
+	vcpu3s->context_id = err;
+
+	vcpu3s->vsid_max = CTX_TO_VSID(vcpu3s->context_id + 1) - 1;
+	vcpu3s->vsid_first = CTX_TO_VSID(vcpu3s->context_id);
+
+#if 0 /* XXX still doesn't guarantee uniqueness */
+	/* We could collide with the Linux vsid space because the vsid
+	 * wraps around at 24 bits. We're safe if we do our own space
+	 * though, so let's always set the highest bit. */
+
+	vcpu3s->vsid_max |= 0x00800000;
+	vcpu3s->vsid_first |= 0x00800000;
+#endif
+	BUG_ON(vcpu3s->vsid_max < vcpu3s->vsid_first);
+
+	vcpu3s->vsid_next = vcpu3s->vsid_first;
+
+	return 0;
+}
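
To make the VSID window arithmetic above concrete, a small worked sketch
(the context number is made up):

	/* CTX_TO_VSID spaces contexts 897 * 16 == 0x3810 VSIDs apart,
	 * masked to 24 bits. */
	int ctx = 5;					/* illustrative context_id */
	u32 vsid_first = CTX_TO_VSID(ctx);		/* 5 * 0x3810 == 0x11850 */
	u32 vsid_max   = CTX_TO_VSID(ctx + 1) - 1;	/* 6 * 0x3810 - 1 == 0x1505f */
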
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 02/27] KVM: PPC: Add host MMU Support
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

In order to support 32 bit Book3S, we need to add code to enable our
shadow MMU to actually add shadow PTEs. This is the module enabling
that support.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_32_mmu_host.c |  480 +++++++++++++++++++++++++++++++++
 1 files changed, 480 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_32_mmu_host.c

diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
new file mode 100644
index 0000000..ce1bfb1
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -0,0 +1,480 @@
+/*
+ * Copyright (C) 2010 SUSE Linux Products GmbH. All rights reserved.
+ *
+ * Authors:
+ *     Alexander Graf <agraf@suse.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s.h>
+#include <asm/mmu-hash32.h>
+#include <asm/machdep.h>
+#include <asm/mmu_context.h>
+#include <asm/hw_irq.h>
+
+/* #define DEBUG_MMU */
+/* #define DEBUG_SR */
+
+#ifdef DEBUG_MMU
+#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
+#else
+#define dprintk_mmu(a, ...) do { } while(0)
+#endif
+
+#ifdef DEBUG_SR
+#define dprintk_sr(a, ...) printk(KERN_INFO a, __VA_ARGS__)
+#else
+#define dprintk_sr(a, ...) do { } while(0)
+#endif
+
+#if PAGE_SHIFT != 12
+#error Unknown page size
+#endif
+
+#ifdef CONFIG_SMP
+#error XXX need to grab mmu_hash_lock
+#endif
+
+#ifdef CONFIG_PTE_64BIT
+#error Only 32 bit pages are supported for now
+#endif
+
+static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
+{
+	volatile u32 *pteg;
+
+	dprintk_mmu("KVM: Flushing SPTE: 0x%llx (0x%llx) -> 0x%llx\n",
+		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
+
+	pteg = (u32*)pte->slot;
+
+	pteg[0] = 0;
+	asm volatile ("sync");
+	asm volatile ("tlbie %0" : : "r" (pte->pte.eaddr) : "memory");
+	asm volatile ("sync");
+	asm volatile ("tlbsync");
+
+	pte->host_va = 0;
+
+	if (pte->pte.may_write)
+		kvm_release_pfn_dirty(pte->pfn);
+	else
+		kvm_release_pfn_clean(pte->pfn);
+}
+
+void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 _guest_ea, u64 _ea_mask)
+{
+	int i;
+	u32 guest_ea = _guest_ea;
+	u32 ea_mask = _ea_mask;
+
+	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%x & 0x%x\n",
+		    vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	guest_ea &= ea_mask;
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.eaddr & ea_mask) = guest_ea) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+
+	/* Doing a complete flush -> start from scratch */
+	if (!ea_mask)
+		vcpu->arch.hpte_cache_offset = 0;
+}
+
+void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
+{
+	int i;
+
+	dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
+		    vcpu->arch.hpte_cache_offset, guest_vp, vp_mask);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	guest_vp &= vp_mask;
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.vpage & vp_mask) = guest_vp) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+}
+
+void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end)
+{
+	int i;
+
+	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
+		    vcpu->arch.hpte_cache_offset, pa_start, pa_end);
+	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
+
+	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if ((pte->pte.raddr >= pa_start) &&
+		    (pte->pte.raddr < pa_end)) {
+			invalidate_pte(vcpu, pte);
+		}
+	}
+}
+
+struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data)
+{
+	int i;
+	u64 guest_vp;
+
+	guest_vp = vcpu->arch.mmu.ea_to_vp(vcpu, ea, false);
+	for (i=0; i<vcpu->arch.hpte_cache_offset; i++) {
+		struct hpte_cache *pte;
+
+		pte = &vcpu->arch.hpte_cache[i];
+		if (!pte->host_va)
+			continue;
+
+		if (pte->pte.vpage = guest_vp)
+			return &pte->pte;
+	}
+
+	return NULL;
+}
+
+static int kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.hpte_cache_offset = HPTEG_CACHE_NUM)
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+
+	return vcpu->arch.hpte_cache_offset++;
+}
+
+/* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using
+ * a hash, so we don't waste cycles on looping */
+static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
+}
+
+
+static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	struct kvmppc_sid_map *map;
+	u16 sid_map_mask;
+
+	if (vcpu->arch.msr & MSR_PR)
+		gvsid |= VSID_PR;
+
+	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
+	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
+	if (map->guest_vsid = gvsid) {
+		dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
+			    gvsid, map->host_vsid);
+		return map;
+	}
+
+	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
+	if (map->guest_vsid = gvsid) {
+		dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
+			    gvsid, map->host_vsid);
+		return map;
+	}
+
+	dprintk_sr("SR: Searching 0x%llx -> not found\n", gvsid);
+	return NULL;
+}
+
+extern struct hash_pte *Hash;
+extern unsigned long _SDR1;
+
+static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
+				bool primary)
+{
+	u32 page, hash, htabmask;
+	ulong pteg = (ulong)Hash;
+
+	page = (eaddr & ~ESID_MASK) >> 12;
+
+	hash = ((vsid ^ page) << 6);
+	if (!primary)
+		hash = ~hash;
+
+	htabmask = ((_SDR1 & 0x1FF) << 16) | 0xFFC0;
+	hash &= htabmask;
+
+	pteg |= hash;
+
+	dprintk_mmu("htab: %p | hash: %x | htabmask: %x | pteg: %lx\n",
+		Hash, hash, htabmask, pteg);
+
+	return (u32*)pteg;
+}
+
+extern char etext[];
+
+int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
+{
+	pfn_t hpaddr;
+	u64 va;
+	u64 vsid;
+	struct kvmppc_sid_map *map;
+	volatile u32 *pteg;
+	u32 eaddr = orig_pte->eaddr;
+	u32 pteg0, pteg1;
+	register int rr = 0;
+	bool primary = false;
+	bool evict = false;
+	int hpte_id;
+	struct hpte_cache *pte;
+
+	/* Get host physical address for gpa */
+	hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
+	if (kvm_is_error_hva(hpaddr)) {
+		printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n",
+				 orig_pte->eaddr);
+		return -EINVAL;
+	}
+	hpaddr <<= PAGE_SHIFT;
+
+	/* and write the mapping ea -> hpa into the pt */
+	vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
+	map = find_sid_vsid(vcpu, vsid);
+	if (!map) {
+		kvmppc_mmu_map_segment(vcpu, eaddr);
+		map = find_sid_vsid(vcpu, vsid);
+	}
+	BUG_ON(!map);
+
+	vsid = map->host_vsid;
+	va = (vsid << SID_SHIFT) | (eaddr & ~ESID_MASK);
+
+next_pteg:
+	if (rr = 16) {
+		primary = !primary;
+		evict = true;
+		rr = 0;
+	}
+
+	pteg = kvmppc_mmu_get_pteg(vcpu, vsid, eaddr, primary);
+
+	/* not evicting yet */
+	if (!evict && (pteg[rr] & PTE_V)) {
+		rr += 2;
+		goto next_pteg;
+	}
+
+	dprintk_mmu("KVM: old PTEG: %p (%d)\n", pteg, rr);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[0], pteg[1]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[2], pteg[3]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[4], pteg[5]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[6], pteg[7]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[8], pteg[9]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[10], pteg[11]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[12], pteg[13]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[14], pteg[15]);
+
+	pteg0 = ((eaddr & 0x0fffffff) >> 22) | (vsid << 7) | PTE_V |
+		(primary ? 0 : PTE_SEC);
+	pteg1 = hpaddr | PTE_M | PTE_R | PTE_C;
+
+	if (orig_pte->may_write) {
+		pteg1 |= PP_RWRW;
+		mark_page_dirty(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
+	} else {
+		pteg1 |= PP_RWRX;
+	}
+
+	local_irq_disable();
+
+	if (pteg[rr]) {
+		pteg[rr] = 0;
+		asm volatile ("sync");
+	}
+	pteg[rr + 1] = pteg1;
+	pteg[rr] = pteg0;
+	asm volatile ("sync");
+
+	local_irq_enable();
+
+	dprintk_mmu("KVM: new PTEG: %p\n", pteg);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[0], pteg[1]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[2], pteg[3]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[4], pteg[5]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[6], pteg[7]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[8], pteg[9]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[10], pteg[11]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[12], pteg[13]);
+	dprintk_mmu("KVM:   %08x - %08x\n", pteg[14], pteg[15]);
+
+
+	/* Now tell our Shadow PTE code about the new page */
+
+	hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
+	pte = &vcpu->arch.hpte_cache[hpte_id];
+
+	dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%llx (0x%llx) -> %lx\n",
+		    orig_pte->may_write ? 'w' : '-',
+		    orig_pte->may_execute ? 'x' : '-',
+		    orig_pte->eaddr, (ulong)pteg, va,
+		    orig_pte->vpage, hpaddr);
+
+	pte->slot = (ulong)&pteg[rr];
+	pte->host_va = va;
+	pte->pte = *orig_pte;
+	pte->pfn = hpaddr >> PAGE_SHIFT;
+
+	return 0;
+}
+
+static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
+{
+	struct kvmppc_sid_map *map;
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	u16 sid_map_mask;
+	static int backwards_map = 0;
+
+	if (vcpu->arch.msr & MSR_PR)
+		gvsid |= VSID_PR;
+
+	/* We might get collisions that always hash to the same slot, so
+	   let's alternate the direction in which we map them */
+
+	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
+	if (backwards_map)
+		sid_map_mask = SID_MAP_MASK - sid_map_mask;
+
+	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
+
+	/* Make sure we're taking the other map next time */
+	backwards_map = !backwards_map;
+
+	/* Uh-oh ... out of mappings. Let's flush! */
+	if (vcpu_book3s->vsid_next >= vcpu_book3s->vsid_max) {
+		vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
+		memset(vcpu_book3s->sid_map, 0,
+		       sizeof(struct kvmppc_sid_map) * SID_MAP_NUM);
+		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
+	}
+	map->host_vsid = vcpu_book3s->vsid_next;
+
+	/* Incrementing by 0x111 would keep us aligned with how the rest
+	   of Linux spaces its VSIDs, but that would waste way too much
+	   of our limited VSID space! */
+	vcpu_book3s->vsid_next++;
+
+	map->guest_vsid = gvsid;
+	map->valid = true;
+
+	return map;
+}
+
+int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
+{
+	u32 esid = eaddr >> SID_SHIFT;
+	u64 gvsid;
+	u32 sr;
+	struct kvmppc_sid_map *map;
+	struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
+
+	if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
+		/* Invalidate an entry */
+		svcpu->sr[esid] = SR_INVALID;
+		return -ENOENT;
+	}
+
+	map = find_sid_vsid(vcpu, gvsid);
+	if (!map)
+		map = create_sid_map(vcpu, gvsid);
+
+	map->guest_esid = esid;
+	sr = map->host_vsid | SR_KP;
+	svcpu->sr[esid] = sr;
+
+	dprintk_sr("MMU: mtsr %d, 0x%x\n", esid, sr);
+
+	return 0;
+}
+
+void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
+
+	dprintk_sr("MMU: flushing all segments (%d)\n", ARRAY_SIZE(svcpu->sr));
+	for (i = 0; i < ARRAY_SIZE(svcpu->sr); i++)
+		svcpu->sr[i] = SR_INVALID;
+}
+
+void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
+{
+	kvmppc_mmu_pte_flush(vcpu, 0, 0);
+	preempt_disable();
+	__destroy_context(to_book3s(vcpu)->context_id);
+	preempt_enable();
+}
+
+/* From mm/mmu_context_hash32.c */
+#define CTX_TO_VSID(ctx) (((ctx) * (897 * 16)) & 0xffffff)
+
+int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
+{
+	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
+	int err;
+
+	err = __init_new_context();
+	if (err < 0)
+		return -1;
+	vcpu3s->context_id = err;
+
+	vcpu3s->vsid_max = CTX_TO_VSID(vcpu3s->context_id + 1) - 1;
+	vcpu3s->vsid_first = CTX_TO_VSID(vcpu3s->context_id);
+
+#if 0 /* XXX still doesn't guarantee uniqueness */
+	/* We could collide with the Linux vsid space because the vsid
+	 * wraps around at 24 bits. We're safe if we do our own space
+	 * though, so let's always set the highest bit. */
+
+	vcpu3s->vsid_max |= 0x00800000;
+	vcpu3s->vsid_first |= 0x00800000;
+#endif
+	BUG_ON(vcpu3s->vsid_max < vcpu3s->vsid_first);
+
+	vcpu3s->vsid_next = vcpu3s->vsid_first;
+
+	return 0;
+}
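
For reference, the PTEG lookup above can be modeled in plain userspace C.
This is only a sketch: the SDR1 value and the effective address below are
made-up example inputs, and the vsid^page step is the architected primary
hash function, so treat it as an illustration rather than the patch itself.

  #include <stdio.h>
  #include <stdint.h>

  #define ESID_MASK 0xf0000000

  /* Model of kvmppc_mmu_get_pteg(): offset of the PTEG within the
   * hash table for a given vsid/eaddr pair. Each PTEG is 64 bytes,
   * hence the << 6. */
  static uint32_t pteg_offset(uint32_t vsid, uint32_t eaddr,
                              int primary, uint32_t sdr1)
  {
          uint32_t page = (eaddr & ~ESID_MASK) >> 12;
          uint32_t hash = (vsid ^ page) << 6;
          uint32_t htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0;

          /* The secondary PTEG uses the complemented hash */
          if (!primary)
                  hash = ~hash;

          return hash & htabmask;
  }

  int main(void)
  {
          uint32_t sdr1  = 0x000001ff; /* hypothetical SDR1 low bits */
          uint32_t eaddr = 0x10002000; /* hypothetical guest ea */
          /* CTX_TO_VSID from above, for context id 5:
           * (5 * 897 * 16) & 0xffffff = 0x11850 */
          uint32_t vsid  = (5 * 897 * 16) & 0xffffff;

          printf("primary:   0x%08x\n", pteg_offset(vsid, eaddr, 1, sdr1));
          printf("secondary: 0x%08x\n", pteg_offset(vsid, eaddr, 0, sdr1));
          return 0;
  }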
-- 
1.6.0.2



* [PATCH 03/27] KVM: PPC: Add SR swapping code
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Later in this series we will move the current segment switch code to
generic code and make that call hooks for the specific sub-archs (32
vs. 64 bit). This is the hook for 32 bits.

It enables the entry and exit code to swap segment registers with
values from the shadow vcpu structure.
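
A rough C model of what the new entry macro does is shown below. This is
only a sketch: mtsr and the BAT writes are privileged instructions, so they
are stubbed out here, and the struct/function names are illustrative.

  /* Stubs so the model compiles as ordinary C; the real code is the
   * LOAD_GUEST_SEGMENTS macro in the patch below. */
  static void mtsr(int sr, unsigned int val) { (void)sr; (void)val; }
  static void clear_bat_pair(int n) { (void)n; }

  struct svcpu_model { unsigned int sr[16]; };

  static void load_guest_segments(struct svcpu_model *svcpu)
  {
          int i;

          /* Load all 16 guest segment registers from the shadow vcpu */
          for (i = 0; i < 16; i++)
                  mtsr(i, svcpu->sr[i]);

          /* Clear all four IBAT/DBAT pairs so no host BAT mappings
           * leak into the guest */
          for (i = 0; i < 4; i++)
                  clear_bat_pair(i);
  }

  int main(void)
  {
          struct svcpu_model s = { { 0 } };

          load_guest_segments(&s);
          return 0;
  }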

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_32_sr.S |  143 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 143 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_32_sr.S

diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
new file mode 100644
index 0000000..3608471
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_32_sr.S
@@ -0,0 +1,143 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
+ */
+
+/******************************************************************************
+ *                                                                            *
+ *                               Entry code                                   *
+ *                                                                            *
+ *****************************************************************************/
+
+.macro LOAD_GUEST_SEGMENTS
+
+	/* Required state:
+	 *
+	 * MSR = ~IR|DR
+	 * R1 = host R1
+	 * R2 = host R2
+	 * R3 = shadow vcpu
+	 * all other volatile GPRS = free
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
+	 */
+
+#define XCHG_SR(n)	lwz	r9, (SVCPU_SR+(n*4))(r3);  \
+			mtsr	n, r9
+
+	XCHG_SR(0)
+	XCHG_SR(1)
+	XCHG_SR(2)
+	XCHG_SR(3)
+	XCHG_SR(4)
+	XCHG_SR(5)
+	XCHG_SR(6)
+	XCHG_SR(7)
+	XCHG_SR(8)
+	XCHG_SR(9)
+	XCHG_SR(10)
+	XCHG_SR(11)
+	XCHG_SR(12)
+	XCHG_SR(13)
+	XCHG_SR(14)
+	XCHG_SR(15)
+
+	/* Clear BATs. */
+
+#define KVM_KILL_BAT(n, reg)		\
+        mtspr   SPRN_IBAT##n##U,reg;	\
+        mtspr   SPRN_IBAT##n##L,reg;	\
+        mtspr   SPRN_DBAT##n##U,reg;	\
+        mtspr   SPRN_DBAT##n##L,reg;	\
+
+        li	r9, 0
+	KVM_KILL_BAT(0, r9)
+	KVM_KILL_BAT(1, r9)
+	KVM_KILL_BAT(2, r9)
+	KVM_KILL_BAT(3, r9)
+
+.endm
+
+/******************************************************************************
+ *                                                                            *
+ *                               Exit code                                    *
+ *                                                                            *
+ *****************************************************************************/
+
+.macro LOAD_HOST_SEGMENTS
+
+	/* Register usage at this point:
+	 *
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R12        = exit handler id
+	 * R13        = shadow vcpu - SHADOW_VCPU_OFF
+	 * SVCPU.*    = guest *
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
+	 *
+	 */
+
+	/* Restore BATs */
+
+	/* The BATs were cleared on entry, so restore both the upper
+	   and the lower half of each BAT from the host values. */
+#define KVM_LOAD_BAT(n, reg, RA, RB)	\
+	lwz	RA,(n*16)+0(reg);	\
+	lwz	RB,(n*16)+4(reg);	\
+	mtspr	SPRN_IBAT##n##U,RA;	\
+	mtspr	SPRN_IBAT##n##L,RB;	\
+	lwz	RA,(n*16)+8(reg);	\
+	lwz	RB,(n*16)+12(reg);	\
+	mtspr	SPRN_DBAT##n##U,RA;	\
+	mtspr	SPRN_DBAT##n##L,RB;	\
+
+	lis     r9, BATS@ha
+	addi    r9, r9, BATS@l
+	tophys(r9, r9)
+	KVM_LOAD_BAT(0, r9, r10, r11)
+	KVM_LOAD_BAT(1, r9, r10, r11)
+	KVM_LOAD_BAT(2, r9, r10, r11)
+	KVM_LOAD_BAT(3, r9, r10, r11)
+
+	/* Restore Segment Registers */
+
+	/* 0xc - 0xf */
+
+        li      r0, 4
+        mtctr   r0
+	LOAD_REG_IMMEDIATE(r3, 0x20000000 | (0x111 * 0xc))
+        lis     r4, 0xc000
+3:      mtsrin  r3, r4
+        addi    r3, r3, 0x111     /* increment VSID */
+        addis   r4, r4, 0x1000    /* address of next segment */
+        bdnz    3b
+
+	/* 0x0 - 0xb */
+
+	/* 'current->mm' needs to be in r4 */
+	tophys(r4, r2)
+	lwz	r4, MM(r4)
+	tophys(r4, r4)
+	/* This only clobbers r0, r3, r4 and r5 */
+	bl	switch_mmu_context
+
+.endm
-- 
1.6.0.2


* [PATCH 03/27] KVM: PPC: Add SR swapping code
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Later in this series we will move the current segment switch code to
generic code and make that call hooks for the specific sub-archs (32
vs. 64 bit). This is the hook for 32 bits.

It enables the entry and exit code to swap segment registers with
values from the shadow vcpu structure.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_32_sr.S |  143 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 143 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_32_sr.S

diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
new file mode 100644
index 0000000..3608471
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_32_sr.S
@@ -0,0 +1,143 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2009
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+/******************************************************************************
+ *                                                                            *
+ *                               Entry code                                   *
+ *                                                                            *
+ *****************************************************************************/
+
+.macro LOAD_GUEST_SEGMENTS
+
+	/* Required state:
+	 *
+	 * MSR = ~IR|DR
+	 * R1 = host R1
+	 * R2 = host R2
+	 * R3 = shadow vcpu
+	 * all other volatile GPRS = free
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
+	 */
+
+#define XCHG_SR(n)	lwz	r9, (SVCPU_SR+(n*4))(r3);  \
+			mtsr	n, r9
+
+	XCHG_SR(0)
+	XCHG_SR(1)
+	XCHG_SR(2)
+	XCHG_SR(3)
+	XCHG_SR(4)
+	XCHG_SR(5)
+	XCHG_SR(6)
+	XCHG_SR(7)
+	XCHG_SR(8)
+	XCHG_SR(9)
+	XCHG_SR(10)
+	XCHG_SR(11)
+	XCHG_SR(12)
+	XCHG_SR(13)
+	XCHG_SR(14)
+	XCHG_SR(15)
+
+	/* Clear BATs. */
+
+#define KVM_KILL_BAT(n, reg)		\
+        mtspr   SPRN_IBAT##n##U,reg;	\
+        mtspr   SPRN_IBAT##n##L,reg;	\
+        mtspr   SPRN_DBAT##n##U,reg;	\
+        mtspr   SPRN_DBAT##n##L,reg;	\
+
+        li	r9, 0
+	KVM_KILL_BAT(0, r9)
+	KVM_KILL_BAT(1, r9)
+	KVM_KILL_BAT(2, r9)
+	KVM_KILL_BAT(3, r9)
+
+.endm
+
+/******************************************************************************
+ *                                                                            *
+ *                               Exit code                                    *
+ *                                                                            *
+ *****************************************************************************/
+
+.macro LOAD_HOST_SEGMENTS
+
+	/* Register usage at this point:
+	 *
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R12        = exit handler id
+	 * R13        = shadow vcpu - SHADOW_VCPU_OFF
+	 * SVCPU.*    = guest *
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
+	 *
+	 */
+
+	/* Restore BATs */
+
+	/* The BATs were cleared on entry, so restore both the upper
+	   and the lower half of each BAT from the host values. */
+#define KVM_LOAD_BAT(n, reg, RA, RB)	\
+	lwz	RA,(n*16)+0(reg);	\
+	lwz	RB,(n*16)+4(reg);	\
+	mtspr	SPRN_IBAT##n##U,RA;	\
+	mtspr	SPRN_IBAT##n##L,RB;	\
+	lwz	RA,(n*16)+8(reg);	\
+	lwz	RB,(n*16)+12(reg);	\
+	mtspr	SPRN_DBAT##n##U,RA;	\
+	mtspr	SPRN_DBAT##n##L,RB;	\
+
+	lis     r9, BATS@ha
+	addi    r9, r9, BATS@l
+	tophys(r9, r9)
+	KVM_LOAD_BAT(0, r9, r10, r11)
+	KVM_LOAD_BAT(1, r9, r10, r11)
+	KVM_LOAD_BAT(2, r9, r10, r11)
+	KVM_LOAD_BAT(3, r9, r10, r11)
+
+	/* Restore Segment Registers */
+
+	/* 0xc - 0xf */
+
+        li      r0, 4
+        mtctr   r0
+	LOAD_REG_IMMEDIATE(r3, 0x20000000 | (0x111 * 0xc))
+        lis     r4, 0xc000
+3:      mtsrin  r3, r4
+        addi    r3, r3, 0x111     /* increment VSID */
+        addis   r4, r4, 0x1000    /* address of next segment */
+        bdnz    3b
+
+	/* 0x0 - 0xb */
+
+	/* 'current->mm' needs to be in r4 */
+	tophys(r4, r2)
+	lwz	r4, MM(r4)
+	tophys(r4, r4)
+	/* This only clobbers r0, r3, r4 and r5 */
+	bl	switch_mmu_context
+
+.endm
-- 
1.6.0.2



* [PATCH 04/27] KVM: PPC: Add generic segment switching code
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

This is the code that will later be used instead of book3s_64_slb.S. It
does the last step of guest entry and the first generic steps of guest
exiting, once we have determined the interrupt is a KVM interrupt.

It also reads the last used instruction from the guest virtual address
space if necessary, to speed up that path.

The new thing about this file is that it makes use of generic long load
and store functions and calls a macro to fill in the actual segment
switching code. That still needs to be done differently for book3s_32 and
book3s_64.
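
The quick path for that last-instruction read is roughly the following C
sketch. The MSR helpers and the guest-mode flag are stubbed, and the
constants are placeholders, so this only illustrates the three-step dance
in the exit code, not the real KVM API.

  #define KVM_INST_FETCH_FAILED 0xffffffffu  /* placeholder value */
  #define MSR_DR 0x10ul                      /* data relocation bit */
  #define KVM_GUEST_MODE_SKIP 2              /* placeholder value */

  static unsigned long mfmsr(void) { return 0; }       /* stub */
  static void mtmsr(unsigned long msr) { (void)msr; }  /* stub */
  static void set_guest_mode(int mode) { (void)mode; } /* stub */

  /* Fetch the instruction we exited on. Works because the guest
   * virtual mapping of the PC is still in place at this point. */
  static unsigned int fetch_last_inst(const unsigned int *guest_pc)
  {
          unsigned long msr = mfmsr();
          unsigned int inst = KVM_INST_FETCH_FAILED; /* if the load faults */

          set_guest_mode(KVM_GUEST_MODE_SKIP); /* a fault skips the load */
          mtmsr(msr | MSR_DR);                 /* 1) enable data paging */
          inst = *guest_pc;                    /* 2) fetch the insn */
          mtmsr(msr);                          /* 3) disable paging again */

          return inst;
  }

  int main(void)
  {
          unsigned int insn = 0x60000000; /* nop */

          return fetch_last_inst(&insn) != insn;
  }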

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_segment.S |  258 +++++++++++++++++++++++++++++++++++++
 1 files changed, 258 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_segment.S

diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
new file mode 100644
index 0000000..778e3fc
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -0,0 +1,258 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
+ */
+
+/* Real mode helpers */
+
+#if defined(CONFIG_PPC_BOOK3S_64)
+
+#define GET_SHADOW_VCPU(reg)    \
+	addi    reg, r13, PACA_KVM_SVCPU
+
+#elif defined(CONFIG_PPC_BOOK3S_32)
+
+#define GET_SHADOW_VCPU(reg)    			\
+	tophys(reg, r2);       			\
+	lwz     reg, (THREAD + THREAD_KVM_SVCPU)(reg);	\
+	tophys(reg, reg)
+
+#endif
+
+/* Disable for nested KVM */
+#define USE_QUICK_LAST_INST
+
+
+/* Get helper functions for subarch specific functionality */
+
+#if defined(CONFIG_PPC_BOOK3S_64)
+#include "book3s_64_slb.S"
+#elif defined(CONFIG_PPC_BOOK3S_32)
+#include "book3s_32_sr.S"
+#endif
+
+/******************************************************************************
+ *                                                                            *
+ *                               Entry code                                   *
+ *                                                                            *
+ *****************************************************************************/
+
+.global kvmppc_handler_trampoline_enter
+kvmppc_handler_trampoline_enter:
+
+	/* Required state:
+	 *
+	 * MSR = ~IR|DR
+	 * R13 = PACA
+	 * R1 = host R1
+	 * R2 = host R2
+	 * R10 = guest MSR
+	 * all other volatile GPRS = free
+	 * SVCPU[CR] = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR] = guest LR
+	 */
+
+	/* r3 = shadow vcpu */
+	GET_SHADOW_VCPU(r3)
+
+	/* Move SRR0 and SRR1 into the respective regs */
+	PPC_LL  r9, SVCPU_PC(r3)
+	mtsrr0	r9
+	mtsrr1	r10
+
+	/* Activate guest mode, so faults get handled by KVM */
+	li	r11, KVM_GUEST_MODE_GUEST
+	stb	r11, SVCPU_IN_GUEST(r3)
+
+	/* Switch to guest segment. This is subarch specific. */
+	LOAD_GUEST_SEGMENTS
+
+	/* Enter guest */
+
+	PPC_LL	r4, (SVCPU_CTR)(r3)
+	PPC_LL	r5, (SVCPU_LR)(r3)
+	lwz	r6, (SVCPU_CR)(r3)
+	lwz	r7, (SVCPU_XER)(r3)
+
+	mtctr	r4
+	mtlr	r5
+	mtcr	r6
+	mtxer	r7
+
+	PPC_LL	r0, (SVCPU_R0)(r3)
+	PPC_LL	r1, (SVCPU_R1)(r3)
+	PPC_LL	r2, (SVCPU_R2)(r3)
+	PPC_LL	r4, (SVCPU_R4)(r3)
+	PPC_LL	r5, (SVCPU_R5)(r3)
+	PPC_LL	r6, (SVCPU_R6)(r3)
+	PPC_LL	r7, (SVCPU_R7)(r3)
+	PPC_LL	r8, (SVCPU_R8)(r3)
+	PPC_LL	r9, (SVCPU_R9)(r3)
+	PPC_LL	r10, (SVCPU_R10)(r3)
+	PPC_LL	r11, (SVCPU_R11)(r3)
+	PPC_LL	r12, (SVCPU_R12)(r3)
+	PPC_LL	r13, (SVCPU_R13)(r3)
+
+	PPC_LL	r3, (SVCPU_R3)(r3)
+
+	RFI
+kvmppc_handler_trampoline_enter_end:
+
+
+
+/******************************************************************************
+ *                                                                            *
+ *                               Exit code                                    *
+ *                                                                            *
+ *****************************************************************************/
+
+.global kvmppc_handler_trampoline_exit
+kvmppc_handler_trampoline_exit:
+
+	/* Register usage at this point:
+	 *
+	 * SPRG_SCRATCH0  = guest R13
+	 * R12            = exit handler id
+	 * R13            = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.SCRATCH0 = guest R12
+	 * SVCPU.SCRATCH1 = guest CR
+	 *
+	 */
+
+	/* Save registers */
+
+	PPC_STL	r0, (SHADOW_VCPU_OFF + SVCPU_R0)(r13)
+	PPC_STL	r1, (SHADOW_VCPU_OFF + SVCPU_R1)(r13)
+	PPC_STL	r2, (SHADOW_VCPU_OFF + SVCPU_R2)(r13)
+	PPC_STL	r3, (SHADOW_VCPU_OFF + SVCPU_R3)(r13)
+	PPC_STL	r4, (SHADOW_VCPU_OFF + SVCPU_R4)(r13)
+	PPC_STL	r5, (SHADOW_VCPU_OFF + SVCPU_R5)(r13)
+	PPC_STL	r6, (SHADOW_VCPU_OFF + SVCPU_R6)(r13)
+	PPC_STL	r7, (SHADOW_VCPU_OFF + SVCPU_R7)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_R8)(r13)
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_R9)(r13)
+	PPC_STL	r10, (SHADOW_VCPU_OFF + SVCPU_R10)(r13)
+	PPC_STL	r11, (SHADOW_VCPU_OFF + SVCPU_R11)(r13)
+
+	/* Restore R1/R2 so we can handle faults */
+	PPC_LL	r1, (SHADOW_VCPU_OFF + SVCPU_HOST_R1)(r13)
+	PPC_LL	r2, (SHADOW_VCPU_OFF + SVCPU_HOST_R2)(r13)
+
+	/* Save guest PC and MSR */
+	mfsrr0	r3
+	mfsrr1	r4
+
+	PPC_STL	r3, (SHADOW_VCPU_OFF + SVCPU_PC)(r13)
+	PPC_STL	r4, (SHADOW_VCPU_OFF + SVCPU_SHADOW_SRR1)(r13)
+
+	/* Get scratch'ed off registers */
+	mfspr	r9, SPRN_SPRG_SCRATCH0
+	PPC_LL	r8, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
+	lwz	r7, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
+
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_R13)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_R12)(r13)
+	stw	r7, (SHADOW_VCPU_OFF + SVCPU_CR)(r13)
+
+	/* Save more register state  */
+
+	mfxer	r5
+	mfdar	r6
+	mfdsisr	r7
+	mfctr	r8
+	mflr	r9
+
+	stw	r5, (SHADOW_VCPU_OFF + SVCPU_XER)(r13)
+	PPC_STL	r6, (SHADOW_VCPU_OFF + SVCPU_FAULT_DAR)(r13)
+	stw	r7, (SHADOW_VCPU_OFF + SVCPU_FAULT_DSISR)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_CTR)(r13)
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_LR)(r13)
+
+	/*
+	 * In order to easily fetch the instruction we took the
+	 * #vmexit at, we exploit the fact that the virtual address
+	 * layout is still the same here, so we can just load the
+	 * instruction word from the guest's PC address.
+	 */
+
+	/* We only load the last instruction when it's safe */
+	cmpwi	r12, BOOK3S_INTERRUPT_DATA_STORAGE
+	beq	ld_last_inst
+	cmpwi	r12, BOOK3S_INTERRUPT_PROGRAM
+	beq	ld_last_inst
+
+	b	no_ld_last_inst
+
+ld_last_inst:
+	/* Save off the guest instruction we're at */
+
+	/* In case lwz faults */
+	li	r0, KVM_INST_FETCH_FAILED
+
+#ifdef USE_QUICK_LAST_INST
+
+	/* Set guest mode to 'jump over instruction' so if lwz faults
+	 * we'll just continue at the next IP. */
+	li	r9, KVM_GUEST_MODE_SKIP
+	stb	r9, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
+
+	/*    1) enable paging for data */
+	mfmsr	r9
+	ori	r11, r9, MSR_DR			/* Enable paging for data */
+	mtmsr	r11
+	sync
+	/*    2) fetch the instruction */
+	lwz	r0, 0(r3)
+	/*    3) disable paging again */
+	mtmsr	r9
+	sync
+
+#endif
+	stw	r0, (SHADOW_VCPU_OFF + SVCPU_LAST_INST)(r13)
+
+no_ld_last_inst:
+
+	/* Unset guest mode */
+	li	r9, KVM_GUEST_MODE_NONE
+	stb	r9, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
+
+	/* Switch back to host MMU */
+	LOAD_HOST_SEGMENTS
+
+	/* Register usage at this point:
+	 *
+	 * R1       = host R1
+	 * R2       = host R2
+	 * R12      = exit handler id
+	 * R13      = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.*  = guest *
+	 *
+	 */
+
+	/* RFI into the highmem handler */
+	mfmsr	r7
+	ori	r7, r7, MSR_IR|MSR_DR|MSR_RI|MSR_ME	/* Enable paging */
+	mtsrr1	r7
+	/* Load highmem handler address */
+	PPC_LL	r8, (SHADOW_VCPU_OFF + SVCPU_VMHANDLER)(r13)
+	mtsrr0	r8
+
+	RFI
+kvmppc_handler_trampoline_exit_end:
+
-- 
1.6.0.2


* [PATCH 04/27] KVM: PPC: Add generic segment switching code
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

This is the code that will later be used instead of book3s_64_slb.S. It
does the last step of guest entry and the first generic steps of guest
exiting, once we have determined the interrupt is a KVM interrupt.

It also reads the last used instruction from the guest virtual address
space if necessary, to speed up that path.

The new thing about this file is that it makes use of generic long load
and store functions and calls a macro to fill in the actual segment
switching code. That still needs to be done differently for book3s_32 and
book3s_64.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_segment.S |  258 +++++++++++++++++++++++++++++++++++++
 1 files changed, 258 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/kvm/book3s_segment.S

diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
new file mode 100644
index 0000000..778e3fc
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -0,0 +1,258 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+/* Real mode helpers */
+
+#if defined(CONFIG_PPC_BOOK3S_64)
+
+#define GET_SHADOW_VCPU(reg)    \
+	addi    reg, r13, PACA_KVM_SVCPU
+
+#elif defined(CONFIG_PPC_BOOK3S_32)
+
+#define GET_SHADOW_VCPU(reg)    			\
+	tophys(reg, r2);       			\
+	lwz     reg, (THREAD + THREAD_KVM_SVCPU)(reg);	\
+	tophys(reg, reg)
+
+#endif
+
+/* Disable for nested KVM */
+#define USE_QUICK_LAST_INST
+
+
+/* Get helper functions for subarch specific functionality */
+
+#if defined(CONFIG_PPC_BOOK3S_64)
+#include "book3s_64_slb.S"
+#elif defined(CONFIG_PPC_BOOK3S_32)
+#include "book3s_32_sr.S"
+#endif
+
+/******************************************************************************
+ *                                                                            *
+ *                               Entry code                                   *
+ *                                                                            *
+ *****************************************************************************/
+
+.global kvmppc_handler_trampoline_enter
+kvmppc_handler_trampoline_enter:
+
+	/* Required state:
+	 *
+	 * MSR = ~IR|DR
+	 * R13 = PACA
+	 * R1 = host R1
+	 * R2 = host R2
+	 * R10 = guest MSR
+	 * all other volatile GPRS = free
+	 * SVCPU[CR] = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR] = guest LR
+	 */
+
+	/* r3 = shadow vcpu */
+	GET_SHADOW_VCPU(r3)
+
+	/* Move SRR0 and SRR1 into the respective regs */
+	PPC_LL  r9, SVCPU_PC(r3)
+	mtsrr0	r9
+	mtsrr1	r10
+
+	/* Activate guest mode, so faults get handled by KVM */
+	li	r11, KVM_GUEST_MODE_GUEST
+	stb	r11, SVCPU_IN_GUEST(r3)
+
+	/* Switch to guest segment. This is subarch specific. */
+	LOAD_GUEST_SEGMENTS
+
+	/* Enter guest */
+
+	PPC_LL	r4, (SVCPU_CTR)(r3)
+	PPC_LL	r5, (SVCPU_LR)(r3)
+	lwz	r6, (SVCPU_CR)(r3)
+	lwz	r7, (SVCPU_XER)(r3)
+
+	mtctr	r4
+	mtlr	r5
+	mtcr	r6
+	mtxer	r7
+
+	PPC_LL	r0, (SVCPU_R0)(r3)
+	PPC_LL	r1, (SVCPU_R1)(r3)
+	PPC_LL	r2, (SVCPU_R2)(r3)
+	PPC_LL	r4, (SVCPU_R4)(r3)
+	PPC_LL	r5, (SVCPU_R5)(r3)
+	PPC_LL	r6, (SVCPU_R6)(r3)
+	PPC_LL	r7, (SVCPU_R7)(r3)
+	PPC_LL	r8, (SVCPU_R8)(r3)
+	PPC_LL	r9, (SVCPU_R9)(r3)
+	PPC_LL	r10, (SVCPU_R10)(r3)
+	PPC_LL	r11, (SVCPU_R11)(r3)
+	PPC_LL	r12, (SVCPU_R12)(r3)
+	PPC_LL	r13, (SVCPU_R13)(r3)
+
+	PPC_LL	r3, (SVCPU_R3)(r3)
+
+	RFI
+kvmppc_handler_trampoline_enter_end:
+
+
+
+/******************************************************************************
+ *                                                                            *
+ *                               Exit code                                    *
+ *                                                                            *
+ *****************************************************************************/
+
+.global kvmppc_handler_trampoline_exit
+kvmppc_handler_trampoline_exit:
+
+	/* Register usage at this point:
+	 *
+	 * SPRG_SCRATCH0  = guest R13
+	 * R12            = exit handler id
+	 * R13            = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.SCRATCH0 = guest R12
+	 * SVCPU.SCRATCH1 = guest CR
+	 *
+	 */
+
+	/* Save registers */
+
+	PPC_STL	r0, (SHADOW_VCPU_OFF + SVCPU_R0)(r13)
+	PPC_STL	r1, (SHADOW_VCPU_OFF + SVCPU_R1)(r13)
+	PPC_STL	r2, (SHADOW_VCPU_OFF + SVCPU_R2)(r13)
+	PPC_STL	r3, (SHADOW_VCPU_OFF + SVCPU_R3)(r13)
+	PPC_STL	r4, (SHADOW_VCPU_OFF + SVCPU_R4)(r13)
+	PPC_STL	r5, (SHADOW_VCPU_OFF + SVCPU_R5)(r13)
+	PPC_STL	r6, (SHADOW_VCPU_OFF + SVCPU_R6)(r13)
+	PPC_STL	r7, (SHADOW_VCPU_OFF + SVCPU_R7)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_R8)(r13)
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_R9)(r13)
+	PPC_STL	r10, (SHADOW_VCPU_OFF + SVCPU_R10)(r13)
+	PPC_STL	r11, (SHADOW_VCPU_OFF + SVCPU_R11)(r13)
+
+	/* Restore R1/R2 so we can handle faults */
+	PPC_LL	r1, (SHADOW_VCPU_OFF + SVCPU_HOST_R1)(r13)
+	PPC_LL	r2, (SHADOW_VCPU_OFF + SVCPU_HOST_R2)(r13)
+
+	/* Save guest PC and MSR */
+	mfsrr0	r3
+	mfsrr1	r4
+
+	PPC_STL	r3, (SHADOW_VCPU_OFF + SVCPU_PC)(r13)
+	PPC_STL	r4, (SHADOW_VCPU_OFF + SVCPU_SHADOW_SRR1)(r13)
+
+	/* Get scratch'ed off registers */
+	mfspr	r9, SPRN_SPRG_SCRATCH0
+	PPC_LL	r8, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
+	lwz	r7, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
+
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_R13)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_R12)(r13)
+	stw	r7, (SHADOW_VCPU_OFF + SVCPU_CR)(r13)
+
+	/* Save more register state  */
+
+	mfxer	r5
+	mfdar	r6
+	mfdsisr	r7
+	mfctr	r8
+	mflr	r9
+
+	stw	r5, (SHADOW_VCPU_OFF + SVCPU_XER)(r13)
+	PPC_STL	r6, (SHADOW_VCPU_OFF + SVCPU_FAULT_DAR)(r13)
+	stw	r7, (SHADOW_VCPU_OFF + SVCPU_FAULT_DSISR)(r13)
+	PPC_STL	r8, (SHADOW_VCPU_OFF + SVCPU_CTR)(r13)
+	PPC_STL	r9, (SHADOW_VCPU_OFF + SVCPU_LR)(r13)
+
+	/*
+	 * In order to easily fetch the instruction we took the
+	 * #vmexit at, we exploit the fact that the virtual address
+	 * layout is still the same here, so we can just load the
+	 * instruction word from the guest's PC address.
+	 */
+
+	/* We only load the last instruction when it's safe */
+	cmpwi	r12, BOOK3S_INTERRUPT_DATA_STORAGE
+	beq	ld_last_inst
+	cmpwi	r12, BOOK3S_INTERRUPT_PROGRAM
+	beq	ld_last_inst
+
+	b	no_ld_last_inst
+
+ld_last_inst:
+	/* Save off the guest instruction we're at */
+
+	/* In case lwz faults */
+	li	r0, KVM_INST_FETCH_FAILED
+
+#ifdef USE_QUICK_LAST_INST
+
+	/* Set guest mode to 'jump over instruction' so if lwz faults
+	 * we'll just continue at the next IP. */
+	li	r9, KVM_GUEST_MODE_SKIP
+	stb	r9, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
+
+	/*    1) enable paging for data */
+	mfmsr	r9
+	ori	r11, r9, MSR_DR			/* Enable paging for data */
+	mtmsr	r11
+	sync
+	/*    2) fetch the instruction */
+	lwz	r0, 0(r3)
+	/*    3) disable paging again */
+	mtmsr	r9
+	sync
+
+#endif
+	stw	r0, (SHADOW_VCPU_OFF + SVCPU_LAST_INST)(r13)
+
+no_ld_last_inst:
+
+	/* Unset guest mode */
+	li	r9, KVM_GUEST_MODE_NONE
+	stb	r9, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
+
+	/* Switch back to host MMU */
+	LOAD_HOST_SEGMENTS
+
+	/* Register usage at this point:
+	 *
+	 * R1       = host R1
+	 * R2       = host R2
+	 * R12      = exit handler id
+	 * R13      = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.*  = guest *
+	 *
+	 */
+
+	/* RFI into the highmem handler */
+	mfmsr	r7
+	ori	r7, r7, MSR_IR|MSR_DR|MSR_RI|MSR_ME	/* Enable paging */
+	mtsrr1	r7
+	/* Load highmem handler address */
+	PPC_LL	r8, (SHADOW_VCPU_OFF + SVCPU_VMHANDLER)(r13)
+	mtsrr0	r8
+
+	RFI
+kvmppc_handler_trampoline_exit_end:
+
-- 
1.6.0.2



* [PATCH 05/27] PPC: Split context init/destroy functions
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

We need to be able to reserve an MMU context for KVM to make sure we
have our own segment space. While we already did that split for
Book3S_64, 32 bit is still outstanding.

So let's split it now.
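
The KVM side then uses the split like this (sketch only; the function
names here are illustrative, the real users are kvmppc_mmu_init and
kvmppc_mmu_destroy later in this series):

  static unsigned long kvm_ctx;

  static int kvm_grab_segment_space(void)
  {
          /* Reserve a context id of our own, without an mm_struct */
          kvm_ctx = __init_new_context();
          return 0;
  }

  static void kvm_release_segment_space(void)
  {
          preempt_disable();      /* required by __destroy_context() */
          __destroy_context(kvm_ctx);
          preempt_enable();
  }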

Signed-off-by: Alexander Graf <agraf@suse.de>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 arch/powerpc/include/asm/mmu_context.h |    2 ++
 arch/powerpc/mm/mmu_context_hash32.c   |   29 ++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 26383e0..81fb412 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -27,6 +27,8 @@ extern int __init_new_context(void);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
 #else
+extern unsigned long __init_new_context(void);
+extern void __destroy_context(unsigned long context_id);
 extern void mmu_context_init(void);
 #endif
 
diff --git a/arch/powerpc/mm/mmu_context_hash32.c b/arch/powerpc/mm/mmu_context_hash32.c
index 0dfba2b..d0ee554 100644
--- a/arch/powerpc/mm/mmu_context_hash32.c
+++ b/arch/powerpc/mm/mmu_context_hash32.c
@@ -60,11 +60,7 @@
 static unsigned long next_mmu_context;
 static unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];
 
-
-/*
- * Set up the context for a new address space.
- */
-int init_new_context(struct task_struct *t, struct mm_struct *mm)
+unsigned long __init_new_context(void)
 {
 	unsigned long ctx = next_mmu_context;
 
@@ -74,19 +70,38 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
 			ctx = 0;
 	}
 	next_mmu_context = (ctx + 1) & LAST_CONTEXT;
-	mm->context.id = ctx;
+
+	return ctx;
+}
+EXPORT_SYMBOL_GPL(__init_new_context);
+
+/*
+ * Set up the context for a new address space.
+ */
+int init_new_context(struct task_struct *t, struct mm_struct *mm)
+{
+	mm->context.id = __init_new_context();
 
 	return 0;
 }
 
 /*
+ * Free a context ID. Make sure to call this with preempt disabled!
+ */
+void __destroy_context(unsigned long ctx)
+{
+	clear_bit(ctx, context_map);
+}
+EXPORT_SYMBOL_GPL(__destroy_context);
+
+/*
  * We're finished using the context for an address space.
  */
 void destroy_context(struct mm_struct *mm)
 {
 	preempt_disable();
 	if (mm->context.id != NO_CONTEXT) {
-		clear_bit(mm->context.id, context_map);
+		__destroy_context(mm->context.id);
 		mm->context.id = NO_CONTEXT;
 	}
 	preempt_enable();
-- 
1.6.0.2



* [PATCH 05/27] PPC: Split context init/destroy functions
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

We need to be able to reserve an MMU context for KVM to make sure we
have our own segment space. While we already did that split for
Book3S_64, 32 bit is still outstanding.

So let's split it now.

Signed-off-by: Alexander Graf <agraf@suse.de>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 arch/powerpc/include/asm/mmu_context.h |    2 ++
 arch/powerpc/mm/mmu_context_hash32.c   |   29 ++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 26383e0..81fb412 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -27,6 +27,8 @@ extern int __init_new_context(void);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
 #else
+extern unsigned long __init_new_context(void);
+extern void __destroy_context(unsigned long context_id);
 extern void mmu_context_init(void);
 #endif
 
diff --git a/arch/powerpc/mm/mmu_context_hash32.c b/arch/powerpc/mm/mmu_context_hash32.c
index 0dfba2b..d0ee554 100644
--- a/arch/powerpc/mm/mmu_context_hash32.c
+++ b/arch/powerpc/mm/mmu_context_hash32.c
@@ -60,11 +60,7 @@
 static unsigned long next_mmu_context;
 static unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];
 
-
-/*
- * Set up the context for a new address space.
- */
-int init_new_context(struct task_struct *t, struct mm_struct *mm)
+unsigned long __init_new_context(void)
 {
 	unsigned long ctx = next_mmu_context;
 
@@ -74,19 +70,38 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
 			ctx = 0;
 	}
 	next_mmu_context = (ctx + 1) & LAST_CONTEXT;
-	mm->context.id = ctx;
+
+	return ctx;
+}
+EXPORT_SYMBOL_GPL(__init_new_context);
+
+/*
+ * Set up the context for a new address space.
+ */
+int init_new_context(struct task_struct *t, struct mm_struct *mm)
+{
+	mm->context.id = __init_new_context();
 
 	return 0;
 }
 
 /*
+ * Free a context ID. Make sure to call this with preempt disabled!
+ */
+void __destroy_context(unsigned long ctx)
+{
+	clear_bit(ctx, context_map);
+}
+EXPORT_SYMBOL_GPL(__destroy_context);
+
+/*
  * We're finished using the context for an address space.
  */
 void destroy_context(struct mm_struct *mm)
 {
 	preempt_disable();
 	if (mm->context.id != NO_CONTEXT) {
-		clear_bit(mm->context.id, context_map);
+		__destroy_context(mm->context.id);
 		mm->context.id = NO_CONTEXT;
 	}
 	preempt_enable();
-- 
1.6.0.2



* [PATCH 06/27] KVM: PPC: Add kvm_book3s_64.h
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

In the process of generalizing as much code as possible, I also
consolidated the shadow vcpu code into a generic book3s file. Unfortunately
the location of the shadow vcpu is different on 32 and 64 bit, so we
need a wrapper function to tell us where it is.

That sounded like a perfect fit for a subarch specific header file.
Here we can put anything that needs to be different between those two.
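
Generic book3s code can then, regardless of subarch, simply do something
like this sketch:

  struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);

  svcpu->in_guest = KVM_GUEST_MODE_NONE;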

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_64.h |   28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_64.h

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
new file mode 100644
index 0000000..4cadd61
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -0,0 +1,28 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_64_H__
+#define __ASM_KVM_BOOK3S_64_H__
+
+static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
+{
+	return &get_paca()->shadow_vcpu;
+}
+
+#endif /* __ASM_KVM_BOOK3S_64_H__ */
-- 
1.6.0.2



* [PATCH 06/27] KVM: PPC: Add kvm_book3s_64.h
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

In the process of generalizing as much code as possible, I also
consolidated the shadow vcpu code into a generic book3s file. Unfortunately
the location of the shadow vcpu is different on 32 and 64 bit, so we
need a wrapper function to tell us where it is.

That sounded like a perfect fit for a subarch specific header file.
Here we can put anything that needs to be different between those two.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_64.h |   28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_64.h

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
new file mode 100644
index 0000000..4cadd61
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -0,0 +1,28 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_64_H__
+#define __ASM_KVM_BOOK3S_64_H__
+
+static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
+{
+	return &get_paca()->shadow_vcpu;
+}
+
+#endif /* __ASM_KVM_BOOK3S_64_H__ */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 07/27] KVM: PPC: Add kvm_book3s_32.h
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

In analogy to the 64 bit specific header file, this is the 32 bit
counterpart. With this in place we can just always call to_svcpu and
be assured we get the right pointer anywhere.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_32.h |   42 ++++++++++++++++++++++++++++++
 1 files changed, 42 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_32.h

diff --git a/arch/powerpc/include/asm/kvm_book3s_32.h b/arch/powerpc/include/asm/kvm_book3s_32.h
new file mode 100644
index 0000000..de604db
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_32.h
@@ -0,0 +1,42 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_32_H__
+#define __ASM_KVM_BOOK3S_32_H__
+
+static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
+{
+	return to_book3s(vcpu)->shadow_vcpu;
+}
+
+#define PTE_SIZE	12
+#define VSID_ALL	0
+#define SR_INVALID	0x00000001	/* VSID 1 should always be unused */
+#define SR_KP		0x20000000
+#define PTE_V		0x80000000
+#define PTE_SEC		0x00000040
+#define PTE_M		0x00000010
+#define PTE_R		0x00000100
+#define PTE_C		0x00000080
+
+#define SID_SHIFT	28
+#define ESID_MASK	0xf0000000
+#define VSID_MASK	0x00fffffff0000000ULL
+
+#endif /* __ASM_KVM_BOOK3S_32_H__ */
-- 
1.6.0.2



* [PATCH 07/27] KVM: PPC: Add kvm_book3s_32.h
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

In analogy to the 64 bit specific header file, this is the 32 bit
counterpart. With this in place we can just always call to_svcpu and
be assured we get the right pointer anywhere.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_32.h |   42 ++++++++++++++++++++++++++++++
 1 files changed, 42 insertions(+), 0 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_32.h

diff --git a/arch/powerpc/include/asm/kvm_book3s_32.h b/arch/powerpc/include/asm/kvm_book3s_32.h
new file mode 100644
index 0000000..de604db
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_32.h
@@ -0,0 +1,42 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf@suse.de>
+ */
+
+#ifndef __ASM_KVM_BOOK3S_32_H__
+#define __ASM_KVM_BOOK3S_32_H__
+
+static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
+{
+	return to_book3s(vcpu)->shadow_vcpu;
+}
+
+#define PTE_SIZE	12
+#define VSID_ALL	0
+#define SR_INVALID	0x00000001	/* VSID 1 should always be unused */
+#define SR_KP		0x20000000
+#define PTE_V		0x80000000
+#define PTE_SEC		0x00000040
+#define PTE_M		0x00000010
+#define PTE_R		0x00000100
+#define PTE_C		0x00000080
+
+#define SID_SHIFT	28
+#define ESID_MASK	0xf0000000
+#define VSID_MASK	0x00fffffff0000000ULL
+
+#endif /* __ASM_KVM_BOOK3S_32_H__ */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 08/27] KVM: PPC: Add fields to shadow vcpu
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

After a lot of thought on how to make the entry / exit code easier,
I figured it'd be clever to put even more register state into the
shadow vcpu. That way we have more registers available to use, making
the code easier to read.

So this patch adds a few new fields to that shadow vcpu. Later on we
will remove the originals from the vcpu and paca.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_asm.h |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 183461b..e915e7d 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -63,12 +63,33 @@ struct kvmppc_book3s_shadow_vcpu {
 	ulong gpr[14];
 	u32 cr;
 	u32 xer;
+
+	u32 fault_dsisr;
+	u32 last_inst;
+	ulong ctr;
+	ulong lr;
+	ulong pc;
+	ulong shadow_srr1;
+	ulong fault_dar;
+
 	ulong host_r1;
 	ulong host_r2;
 	ulong handler;
 	ulong scratch0;
 	ulong scratch1;
 	ulong vmhandler;
+	u8 in_guest;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+	u32     sr[16];			/* Guest SRs */
+#endif
+#ifdef CONFIG_PPC_BOOK3S_64
+	u8 slb_max;			/* highest used guest slb entry */
+	struct  {
+		u64     esid;
+		u64     vsid;
+	} slb[64];			/* guest SLB */
+#endif
 };
 
 #endif /*__ASSEMBLY__ */
-- 
1.6.0.2



* [PATCH 08/27] KVM: PPC: Add fields to shadow vcpu
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

After a lot of thought on how to make the entry / exit code easier,
I figured it'd be clever to put even more register state into the
shadow vcpu. That way we have more registers available to use, making
the code easier to read.

So this patch adds a few new fields to that shadow vcpu. Later on we
will remove the originals from the vcpu and paca.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_asm.h |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index 183461b..e915e7d 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -63,12 +63,33 @@ struct kvmppc_book3s_shadow_vcpu {
 	ulong gpr[14];
 	u32 cr;
 	u32 xer;
+
+	u32 fault_dsisr;
+	u32 last_inst;
+	ulong ctr;
+	ulong lr;
+	ulong pc;
+	ulong shadow_srr1;
+	ulong fault_dar;
+
 	ulong host_r1;
 	ulong host_r2;
 	ulong handler;
 	ulong scratch0;
 	ulong scratch1;
 	ulong vmhandler;
+	u8 in_guest;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+	u32     sr[16];			/* Guest SRs */
+#endif
+#ifdef CONFIG_PPC_BOOK3S_64
+	u8 slb_max;			/* highest used guest slb entry */
+	struct  {
+		u64     esid;
+		u64     vsid;
+	} slb[64];			/* guest SLB */
+#endif
 };
 
 #endif /*__ASSEMBLY__ */
-- 
1.6.0.2



* [PATCH 09/27] KVM: PPC: Improve indirect svcpu accessors
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We already have some inline functions we use to access vcpu or svcpu structs,
depending on whether we're on booke or book3s. Since we just put a few more
registers into the svcpu, we also need to make sure the respective accessors
are available and get used.

So this patch moves direct use of the fields that now live in the svcpu
struct over to inline function calls. While at it, it also moves the
definition of those inline functions to respective header files for booke
and book3s, greatly improving readability.
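
For instance, emulation code that wants to advance the guest past the
current instruction can now uniformly do the following on both booke and
book3s (this mirrors what the emulation changes in this patch look like):

  kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) + 4);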

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h    |   98 +++++++++++++++++++++++-
 arch/powerpc/include/asm/kvm_booke.h     |   96 +++++++++++++++++++++++
 arch/powerpc/include/asm/kvm_ppc.h       |   79 +------------------
 arch/powerpc/kvm/book3s.c                |  125 +++++++++++++++++-------------
 arch/powerpc/kvm/book3s_64_mmu.c         |    2 +-
 arch/powerpc/kvm/book3s_64_mmu_host.c    |   26 +++---
 arch/powerpc/kvm/book3s_emulate.c        |    6 +-
 arch/powerpc/kvm/book3s_paired_singles.c |    2 +-
 arch/powerpc/kvm/emulate.c               |    7 +-
 arch/powerpc/kvm/powerpc.c               |    2 +-
 10 files changed, 290 insertions(+), 153 deletions(-)
 create mode 100644 arch/powerpc/include/asm/kvm_booke.h

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 7670e2a..9517b8d 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -71,7 +71,7 @@ struct kvmppc_sid_map {
 
 struct kvmppc_vcpu_book3s {
 	struct kvm_vcpu vcpu;
-	struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
+	struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
 	struct kvmppc_sid_map sid_map[SID_MAP_NUM];
 	struct kvmppc_slb slb[64];
 	struct {
@@ -147,6 +147,94 @@ static inline ulong dsisr(void)
 }
 
 extern void kvm_return_point(void);
+static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu);
+
+static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
+{
+	if ( num < 14 ) {
+		to_svcpu(vcpu)->gpr[num] = val;
+		to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
+	} else
+		vcpu->arch.gpr[num] = val;
+}
+
+static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
+{
+	if ( num < 14 )
+		return to_svcpu(vcpu)->gpr[num];
+	else
+		return vcpu->arch.gpr[num];
+}
+
+static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
+{
+	to_svcpu(vcpu)->cr = val;
+	to_book3s(vcpu)->shadow_vcpu->cr = val;
+}
+
+static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->cr;
+}
+
+static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
+{
+	to_svcpu(vcpu)->xer = val;
+	to_book3s(vcpu)->shadow_vcpu->xer = val;
+}
+
+static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->xer;
+}
+
+static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
+{
+	to_svcpu(vcpu)->ctr = val;
+}
+
+static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->ctr;
+}
+
+static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
+{
+	to_svcpu(vcpu)->lr = val;
+}
+
+static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->lr;
+}
+
+static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
+{
+	to_svcpu(vcpu)->pc = val;
+}
+
+static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->pc;
+}
+
+static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
+{
+	ulong pc = kvmppc_get_pc(vcpu);
+	struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
+
+	/* Load the instruction manually if it failed to do so in the
+	 * exit path */
+	if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
+		kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
+
+	return svcpu->last_inst;
+}
+
+static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
+{
+	return to_svcpu(vcpu)->fault_dar;
+}
 
 /* Magic register values loaded into r3 and r4 before the 'sc' assembly
  * instruction for the OSI hypercalls */
@@ -155,4 +243,12 @@ extern void kvm_return_point(void);
 
 #define INS_DCBZ			0x7c0007ec
 
+/* Also add subarch specific defines */
+
+#ifdef CONFIG_PPC_BOOK3S_32
+#include <asm/kvm_book3s_32.h>
+#else
+#include <asm/kvm_book3s_64.h>
+#endif
+
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
new file mode 100644
index 0000000..9c9ba3d
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -0,0 +1,96 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ *
+ * Copyright SUSE Linux Products GmbH 2010
+ *
+ * Authors: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
+ */
+
+#ifndef __ASM_KVM_BOOKE_H__
+#define __ASM_KVM_BOOKE_H__
+
+#include <linux/types.h>
+#include <linux/kvm_host.h>
+
+static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
+{
+	vcpu->arch.gpr[num] = val;
+}
+
+static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
+{
+	return vcpu->arch.gpr[num];
+}
+
+static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
+{
+	vcpu->arch.cr = val;
+}
+
+static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.cr;
+}
+
+static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
+{
+	vcpu->arch.xer = val;
+}
+
+static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.xer;
+}
+
+static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.last_inst;
+}
+
+static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
+{
+	vcpu->arch.ctr = val;
+}
+
+static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.ctr;
+}
+
+static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
+{
+	vcpu->arch.lr = val;
+}
+
+static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.lr;
+}
+
+static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
+{
+	vcpu->arch.pc = val;
+}
+
+static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.pc;
+}
+
+static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.fault_dear;
+}
+
+#endif /* __ASM_KVM_BOOKE_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 6a2464e..edade84 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -30,6 +30,8 @@
 #include <linux/kvm_host.h>
 #ifdef CONFIG_PPC_BOOK3S
 #include <asm/kvm_book3s.h>
+#else
+#include <asm/kvm_booke.h>
 #endif
 
 enum emulation_result {
@@ -138,81 +140,4 @@ static inline u32 kvmppc_set_field(u64 inst, int msb, int lsb, int value)
 	return r;
 }
 
-#ifdef CONFIG_PPC_BOOK3S
-
-/* We assume we're always acting on the current vcpu */
-
-static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
-{
-	if ( num < 14 ) {
-		get_paca()->shadow_vcpu.gpr[num] = val;
-		to_book3s(vcpu)->shadow_vcpu.gpr[num] = val;
-	} else
-		vcpu->arch.gpr[num] = val;
-}
-
-static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
-{
-	if ( num < 14 )
-		return get_paca()->shadow_vcpu.gpr[num];
-	else
-		return vcpu->arch.gpr[num];
-}
-
-static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
-{
-	get_paca()->shadow_vcpu.cr = val;
-	to_book3s(vcpu)->shadow_vcpu.cr = val;
-}
-
-static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
-{
-	return get_paca()->shadow_vcpu.cr;
-}
-
-static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
-{
-	get_paca()->shadow_vcpu.xer = val;
-	to_book3s(vcpu)->shadow_vcpu.xer = val;
-}
-
-static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
-{
-	return get_paca()->shadow_vcpu.xer;
-}
-
-#else
-
-static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
-{
-	vcpu->arch.gpr[num] = val;
-}
-
-static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
-{
-	return vcpu->arch.gpr[num];
-}
-
-static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
-{
-	vcpu->arch.cr = val;
-}
-
-static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.cr;
-}
-
-static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
-{
-	vcpu->arch.xer = val;
-}
-
-static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.xer;
-}
-
-#endif
-
 #endif /* __POWERPC_KVM_PPC_H__ */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 4ac7b15..6eb2da2 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -70,18 +70,26 @@ void kvmppc_core_load_guest_debugstate(struct kvm_vcpu *vcpu)
 
 void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	memcpy(get_paca()->kvm_slb, to_book3s(vcpu)->slb_shadow, sizeof(get_paca()->kvm_slb));
-	memcpy(&get_paca()->shadow_vcpu, &to_book3s(vcpu)->shadow_vcpu,
+#ifdef CONFIG_PPC_BOOK3S_64
+	memcpy(to_svcpu(vcpu)->slb, to_book3s(vcpu)->slb_shadow, sizeof(to_svcpu(vcpu)->slb));
+	memcpy(&get_paca()->shadow_vcpu, to_book3s(vcpu)->shadow_vcpu,
 	       sizeof(get_paca()->shadow_vcpu));
-	get_paca()->kvm_slb_max = to_book3s(vcpu)->slb_shadow_max;
+	to_svcpu(vcpu)->slb_max = to_book3s(vcpu)->slb_shadow_max;
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_32
+	current->thread.kvm_shadow_vcpu = to_book3s(vcpu)->shadow_vcpu;
+#endif
 }
 
 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	memcpy(to_book3s(vcpu)->slb_shadow, get_paca()->kvm_slb, sizeof(get_paca()->kvm_slb));
-	memcpy(&to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
+#ifdef CONFIG_PPC_BOOK3S_64
+	memcpy(to_book3s(vcpu)->slb_shadow, to_svcpu(vcpu)->slb, sizeof(to_svcpu(vcpu)->slb));
+	memcpy(to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
 	       sizeof(get_paca()->shadow_vcpu));
-	to_book3s(vcpu)->slb_shadow_max = get_paca()->kvm_slb_max;
+	to_book3s(vcpu)->slb_shadow_max = to_svcpu(vcpu)->slb_max;
+#endif
 
 	kvmppc_giveup_ext(vcpu, MSR_FP);
 	kvmppc_giveup_ext(vcpu, MSR_VEC);
@@ -143,7 +151,7 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 					      VSID_SPLIT_MASK);
 
 		kvmppc_mmu_flush_segments(vcpu);
-		kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
+		kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
 	}
 
 	/* Preload FPU if it's enabled */
@@ -153,9 +161,9 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 
 void kvmppc_inject_interrupt(struct kvm_vcpu *vcpu, int vec, u64 flags)
 {
-	vcpu->arch.srr0 = vcpu->arch.pc;
+	vcpu->arch.srr0 = kvmppc_get_pc(vcpu);
 	vcpu->arch.srr1 = vcpu->arch.msr | flags;
-	vcpu->arch.pc = to_book3s(vcpu)->hior + vec;
+	kvmppc_set_pc(vcpu, to_book3s(vcpu)->hior + vec);
 	vcpu->arch.mmu.reset_msr(vcpu);
 }
 
@@ -550,20 +558,20 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	if (page_found == -ENOENT) {
 		/* Page not found in guest PTE entries */
-		vcpu->arch.dear = vcpu->arch.fault_dear;
-		to_book3s(vcpu)->dsisr = vcpu->arch.fault_dsisr;
-		vcpu->arch.msr |= (vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL);
+		vcpu->arch.dear = kvmppc_get_fault_dar(vcpu);
+		to_book3s(vcpu)->dsisr = to_svcpu(vcpu)->fault_dsisr;
+		vcpu->arch.msr |= (to_svcpu(vcpu)->shadow_srr1 & 0x00000000f8000000ULL);
 		kvmppc_book3s_queue_irqprio(vcpu, vec);
 	} else if (page_found == -EPERM) {
 		/* Storage protection */
-		vcpu->arch.dear = vcpu->arch.fault_dear;
-		to_book3s(vcpu)->dsisr = vcpu->arch.fault_dsisr & ~DSISR_NOHPTE;
+		vcpu->arch.dear = kvmppc_get_fault_dar(vcpu);
+		to_book3s(vcpu)->dsisr = to_svcpu(vcpu)->fault_dsisr & ~DSISR_NOHPTE;
 		to_book3s(vcpu)->dsisr |= DSISR_PROTFAULT;
-		vcpu->arch.msr |= (vcpu->arch.shadow_srr1 & 0x00000000f8000000ULL);
+		vcpu->arch.msr |= (to_svcpu(vcpu)->shadow_srr1 & 0x00000000f8000000ULL);
 		kvmppc_book3s_queue_irqprio(vcpu, vec);
 	} else if (page_found == -EINVAL) {
 		/* Page not found in guest SLB */
-		vcpu->arch.dear = vcpu->arch.fault_dear;
+		vcpu->arch.dear = kvmppc_get_fault_dar(vcpu);
 		kvmppc_book3s_queue_irqprio(vcpu, vec + 0x80);
 	} else if (!is_mmio &&
 		   kvmppc_visible_gfn(vcpu, pte.raddr >> PAGE_SHIFT)) {
@@ -645,10 +653,11 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 
 static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 {
-	ulong srr0 = vcpu->arch.pc;
+	ulong srr0 = kvmppc_get_pc(vcpu);
+	u32 last_inst = kvmppc_get_last_inst(vcpu);
 	int ret;
 
-	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &vcpu->arch.last_inst, false);
+	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
 	if (ret == -ENOENT) {
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 33, 33, 1);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 34, 36, 0);
@@ -753,12 +762,12 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	run->ready_for_interrupt_injection = 1;
 #ifdef EXIT_DEBUG
 	printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | dec=0x%x | msr=0x%lx\n",
-		exit_nr, vcpu->arch.pc, vcpu->arch.fault_dear,
-		kvmppc_get_dec(vcpu), vcpu->arch.msr);
+		exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
+		kvmppc_get_dec(vcpu), to_svcpu(vcpu)->shadow_srr1);
 #elif defined (EXIT_DEBUG_SIMPLE)
 	if ((exit_nr != 0x900) && (exit_nr != 0x500))
 		printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | msr=0x%lx\n",
-			exit_nr, vcpu->arch.pc, vcpu->arch.fault_dear,
+			exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
 			vcpu->arch.msr);
 #endif
 	kvm_resched(vcpu);
@@ -766,8 +775,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	case BOOK3S_INTERRUPT_INST_STORAGE:
 		vcpu->stat.pf_instruc++;
 		/* only care about PTEG not found errors, but leave NX alone */
-		if (vcpu->arch.shadow_srr1 & 0x40000000) {
-			r = kvmppc_handle_pagefault(run, vcpu, vcpu->arch.pc, exit_nr);
+		if (to_svcpu(vcpu)->shadow_srr1 & 0x40000000) {
+			r = kvmppc_handle_pagefault(run, vcpu, kvmppc_get_pc(vcpu), exit_nr);
 			vcpu->stat.sp_instruc++;
 		} else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
 			  (!(vcpu->arch.hflags & BOOK3S_HFLAG_DCBZ32))) {
@@ -776,38 +785,41 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 *     so we can't use the NX bit inside the guest. Let's cross our fingers,
 			 *     that no guest that needs the dcbz hack does NX.
 			 */
-			kvmppc_mmu_pte_flush(vcpu, vcpu->arch.pc, ~0xFFFULL);
+			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFULL);
 			r = RESUME_GUEST;
 		} else {
-			vcpu->arch.msr |= vcpu->arch.shadow_srr1 & 0x58000000;
+			vcpu->arch.msr |= to_svcpu(vcpu)->shadow_srr1 & 0x58000000;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, vcpu->arch.pc, ~0xFFFULL);
+			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFULL);
 			r = RESUME_GUEST;
 		}
 		break;
 	case BOOK3S_INTERRUPT_DATA_STORAGE:
+	{
+		ulong dar = kvmppc_get_fault_dar(vcpu);
 		vcpu->stat.pf_storage++;
 		/* The only case we need to handle is missing shadow PTEs */
-		if (vcpu->arch.fault_dsisr & DSISR_NOHPTE) {
-			r = kvmppc_handle_pagefault(run, vcpu, vcpu->arch.fault_dear, exit_nr);
+		if (to_svcpu(vcpu)->fault_dsisr & DSISR_NOHPTE) {
+			r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr);
 		} else {
-			vcpu->arch.dear = vcpu->arch.fault_dear;
-			to_book3s(vcpu)->dsisr = vcpu->arch.fault_dsisr;
+			vcpu->arch.dear = dar;
+			to_book3s(vcpu)->dsisr = to_svcpu(vcpu)->fault_dsisr;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
 			kvmppc_mmu_pte_flush(vcpu, vcpu->arch.dear, ~0xFFFULL);
 			r = RESUME_GUEST;
 		}
 		break;
+	}
 	case BOOK3S_INTERRUPT_DATA_SEGMENT:
-		if (kvmppc_mmu_map_segment(vcpu, vcpu->arch.fault_dear) < 0) {
-			vcpu->arch.dear = vcpu->arch.fault_dear;
+		if (kvmppc_mmu_map_segment(vcpu, kvmppc_get_fault_dar(vcpu)) < 0) {
+			vcpu->arch.dear = kvmppc_get_fault_dar(vcpu);
 			kvmppc_book3s_queue_irqprio(vcpu,
 				BOOK3S_INTERRUPT_DATA_SEGMENT);
 		}
 		r = RESUME_GUEST;
 		break;
 	case BOOK3S_INTERRUPT_INST_SEGMENT:
-		if (kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc) < 0) {
+		if (kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu)) < 0) {
 			kvmppc_book3s_queue_irqprio(vcpu,
 				BOOK3S_INTERRUPT_INST_SEGMENT);
 		}
@@ -828,13 +840,13 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		ulong flags;
 
 program_interrupt:
-		flags = vcpu->arch.shadow_srr1 & 0x1f0000ull;
+		flags = to_svcpu(vcpu)->shadow_srr1 & 0x1f0000ull;
 
 		if (vcpu->arch.msr & MSR_PR) {
 #ifdef EXIT_DEBUG
-			printk(KERN_INFO "Userspace triggered 0x700 exception at 0x%lx (0x%x)\n", vcpu->arch.pc, vcpu->arch.last_inst);
+			printk(KERN_INFO "Userspace triggered 0x700 exception at 0x%lx (0x%x)\n", kvmppc_get_pc(vcpu), kvmppc_get_last_inst(vcpu));
 #endif
-			if ((vcpu->arch.last_inst & 0xff0007ff) !=
+			if ((kvmppc_get_last_inst(vcpu) & 0xff0007ff) !=
 			    (INS_DCBZ & 0xfffffff7)) {
 				kvmppc_core_queue_program(vcpu, flags);
 				r = RESUME_GUEST;
@@ -853,7 +865,7 @@ program_interrupt:
 			break;
 		case EMULATE_FAIL:
 			printk(KERN_CRIT "%s: emulation at %lx failed (%08x)\n",
-			       __func__, vcpu->arch.pc, vcpu->arch.last_inst);
+			       __func__, kvmppc_get_pc(vcpu), kvmppc_get_last_inst(vcpu));
 			kvmppc_core_queue_program(vcpu, flags);
 			r = RESUME_GUEST;
 			break;
@@ -916,9 +928,9 @@ program_interrupt:
 	case BOOK3S_INTERRUPT_ALIGNMENT:
 		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) {
 			to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu,
-				vcpu->arch.last_inst);
+				kvmppc_get_last_inst(vcpu));
 			vcpu->arch.dear = kvmppc_alignment_dar(vcpu,
-				vcpu->arch.last_inst);
+				kvmppc_get_last_inst(vcpu));
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
 		}
 		r = RESUME_GUEST;
@@ -931,7 +943,7 @@ program_interrupt:
 	default:
 		/* Ugh - bork here! What did we get? */
 		printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | msr=0x%lx\n",
-			exit_nr, vcpu->arch.pc, vcpu->arch.shadow_srr1);
+			exit_nr, kvmppc_get_pc(vcpu), to_svcpu(vcpu)->shadow_srr1);
 		r = RESUME_HOST;
 		BUG();
 		break;
@@ -958,7 +970,7 @@ program_interrupt:
 	}
 
 #ifdef EXIT_DEBUG
-	printk(KERN_EMERG "KVM exit: vcpu=0x%p pc=0x%lx r=0x%x\n", vcpu, vcpu->arch.pc, r);
+	printk(KERN_EMERG "KVM exit: vcpu=0x%p pc=0x%lx r=0x%x\n", vcpu, kvmppc_get_pc(vcpu), r);
 #endif
 
 	return r;
@@ -975,10 +987,10 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	regs->pc = vcpu->arch.pc;
+	regs->pc = kvmppc_get_pc(vcpu);
 	regs->cr = kvmppc_get_cr(vcpu);
-	regs->ctr = vcpu->arch.ctr;
-	regs->lr = vcpu->arch.lr;
+	regs->ctr = kvmppc_get_ctr(vcpu);
+	regs->lr = kvmppc_get_lr(vcpu);
 	regs->xer = kvmppc_get_xer(vcpu);
 	regs->msr = vcpu->arch.msr;
 	regs->srr0 = vcpu->arch.srr0;
@@ -1006,10 +1018,10 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu_load(vcpu);
 
-	vcpu->arch.pc = regs->pc;
+	kvmppc_set_pc(vcpu, regs->pc);
 	kvmppc_set_cr(vcpu, regs->cr);
-	vcpu->arch.ctr = regs->ctr;
-	vcpu->arch.lr = regs->lr;
+	kvmppc_set_ctr(vcpu, regs->ctr);
+	kvmppc_set_lr(vcpu, regs->lr);
 	kvmppc_set_xer(vcpu, regs->xer);
 	kvmppc_set_msr(vcpu, regs->msr);
 	vcpu->arch.srr0 = regs->srr0;
@@ -1156,19 +1168,23 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s;
 	struct kvm_vcpu *vcpu;
-	int err;
+	int err = -ENOMEM;
 
 	vcpu_book3s = vmalloc(sizeof(struct kvmppc_vcpu_book3s));
-	if (!vcpu_book3s) {
-		err = -ENOMEM;
+	if (!vcpu_book3s)
 		goto out;
-	}
+
 	memset(vcpu_book3s, 0, sizeof(struct kvmppc_vcpu_book3s));
 
+	vcpu_book3s->shadow_vcpu = (struct kvmppc_book3s_shadow_vcpu *)
+		kzalloc(sizeof(*vcpu_book3s->shadow_vcpu), GFP_KERNEL);
+	if (!vcpu_book3s->shadow_vcpu)
+		goto free_vcpu;
+
 	vcpu = &vcpu_book3s->vcpu;
 	err = kvm_vcpu_init(vcpu, kvm, id);
 	if (err)
-		goto free_vcpu;
+		goto free_shadow_vcpu;
 
 	vcpu->arch.host_retip = kvm_return_point;
 	vcpu->arch.host_msr = mfmsr();
@@ -1187,7 +1203,7 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 
 	err = __init_new_context();
 	if (err < 0)
-		goto free_vcpu;
+		goto free_shadow_vcpu;
 	vcpu_book3s->context_id = err;
 
 	vcpu_book3s->vsid_max = ((vcpu_book3s->context_id + 1) << USER_ESID_BITS) - 1;
@@ -1196,6 +1212,8 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 
 	return vcpu;
 
+free_shadow_vcpu:
+	kfree(vcpu_book3s->shadow_vcpu);
 free_vcpu:
 	vfree(vcpu_book3s);
 out:
@@ -1208,6 +1226,7 @@ void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
 
 	__destroy_context(vcpu_book3s->context_id);
 	kvm_vcpu_uninit(vcpu);
+	kfree(vcpu_book3s->shadow_vcpu);
 	vfree(vcpu_book3s);
 }
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 512dcff..12e4c97 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -383,7 +383,7 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.msr & MSR_IR) {
 		kvmppc_mmu_flush_segments(vcpu);
-		kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
+		kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
 	}
 }
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index a01e9c5..b0f5b4e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -331,14 +331,14 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
 	int found_inval = -1;
 	int r;
 
-	if (!get_paca()->kvm_slb_max)
-		get_paca()->kvm_slb_max = 1;
+	if (!to_svcpu(vcpu)->slb_max)
+		to_svcpu(vcpu)->slb_max = 1;
 
 	/* Are we overwriting? */
-	for (i = 1; i < get_paca()->kvm_slb_max; i++) {
-		if (!(get_paca()->kvm_slb[i].esid & SLB_ESID_V))
+	for (i = 1; i < to_svcpu(vcpu)->slb_max; i++) {
+		if (!(to_svcpu(vcpu)->slb[i].esid & SLB_ESID_V))
 			found_inval = i;
-		else if ((get_paca()->kvm_slb[i].esid & ESID_MASK) == esid)
+		else if ((to_svcpu(vcpu)->slb[i].esid & ESID_MASK) == esid)
 			return i;
 	}
 
@@ -352,11 +352,11 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
 		max_slb_size = mmu_slb_size;
 
 	/* Overflowing -> purge */
-	if ((get_paca()->kvm_slb_max) == max_slb_size)
+	if ((to_svcpu(vcpu)->slb_max) == max_slb_size)
 		kvmppc_mmu_flush_segments(vcpu);
 
-	r = get_paca()->kvm_slb_max;
-	get_paca()->kvm_slb_max++;
+	r = to_svcpu(vcpu)->slb_max;
+	to_svcpu(vcpu)->slb_max++;
 
 	return r;
 }
@@ -374,7 +374,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
 
 	if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
 		/* Invalidate an entry */
-		get_paca()->kvm_slb[slb_index].esid = 0;
+		to_svcpu(vcpu)->slb[slb_index].esid = 0;
 		return -ENOENT;
 	}
 
@@ -388,8 +388,8 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
 	slb_vsid &= ~SLB_VSID_KP;
 	slb_esid |= slb_index;
 
-	get_paca()->kvm_slb[slb_index].esid = slb_esid;
-	get_paca()->kvm_slb[slb_index].vsid = slb_vsid;
+	to_svcpu(vcpu)->slb[slb_index].esid = slb_esid;
+	to_svcpu(vcpu)->slb[slb_index].vsid = slb_vsid;
 
 	dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid);
 
@@ -398,8 +398,8 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
 
 void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 {
-	get_paca()->kvm_slb_max = 1;
-	get_paca()->kvm_slb[0].esid = 0;
+	to_svcpu(vcpu)->slb_max = 1;
+	to_svcpu(vcpu)->slb[0].esid = 0;
 }
 
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 8f50776..daa829b 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -69,7 +69,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		switch (get_xop(inst)) {
 		case OP_19_XOP_RFID:
 		case OP_19_XOP_RFI:
-			vcpu->arch.pc = vcpu->arch.srr0;
+			kvmppc_set_pc(vcpu, vcpu->arch.srr0);
 			kvmppc_set_msr(vcpu, vcpu->arch.srr1);
 			*advance = 0;
 			break;
@@ -208,7 +208,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			if ((r == -ENOENT) || (r == -EPERM)) {
 				*advance = 0;
 				vcpu->arch.dear = vaddr;
-				vcpu->arch.fault_dear = vaddr;
+				to_svcpu(vcpu)->fault_dar = vaddr;
 
 				dsisr = DSISR_ISSTORE;
 				if (r == -ENOENT)
@@ -217,7 +217,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 					dsisr |= DSISR_PROTFAULT;
 
 				to_book3s(vcpu)->dsisr = dsisr;
-				vcpu->arch.fault_dsisr = dsisr;
+				to_svcpu(vcpu)->fault_dsisr = dsisr;
 
 				kvmppc_book3s_queue_irqprio(vcpu,
 					BOOK3S_INTERRUPT_DATA_STORAGE);
diff --git a/arch/powerpc/kvm/book3s_paired_singles.c b/arch/powerpc/kvm/book3s_paired_singles.c
index 7a27bac..a9f66ab 100644
--- a/arch/powerpc/kvm/book3s_paired_singles.c
+++ b/arch/powerpc/kvm/book3s_paired_singles.c
@@ -656,7 +656,7 @@ static int kvmppc_ps_one_in(struct kvm_vcpu *vcpu, bool rc,
 
 int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	u32 inst = vcpu->arch.last_inst;
+	u32 inst = kvmppc_get_last_inst(vcpu);
 	enum emulation_result emulated = EMULATE_DONE;
 
 	int ax_rd = inst_get_field(inst, 6, 10);
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index dbb5d68..c6db28c 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -132,7 +132,7 @@ void kvmppc_emulate_dec(struct kvm_vcpu *vcpu)
  * from opcode tables in the future. */
 int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
-	u32 inst = vcpu->arch.last_inst;
+	u32 inst = kvmppc_get_last_inst(vcpu);
 	u32 ea;
 	int ra;
 	int rb;
@@ -516,10 +516,11 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 		}
 	}
 
-	trace_kvm_ppc_instr(inst, vcpu->arch.pc, emulated);
+	trace_kvm_ppc_instr(inst, kvmppc_get_pc(vcpu), emulated);
 
+	/* Advance past emulated instruction. */
 	if (advance)
-		vcpu->arch.pc += 4; /* Advance past emulated instruction. */
+		kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) + 4);
 
 	return emulated;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 1af210c..0b6b018 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -69,7 +69,7 @@ int kvmppc_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	case EMULATE_FAIL:
 		/* XXX Deliver Program interrupt to guest. */
 		printk(KERN_EMERG "%s: emulation failed (%08x)\n", __func__,
-		       vcpu->arch.last_inst);
+		       kvmppc_get_last_inst(vcpu));
 		r = RESUME_HOST;
 		break;
 	default:
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread
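
The split-storage idea behind these accessors can be seen in isolation in the
following minimal C sketch. The types here are simplified stand-ins rather
than the kernel's structs, and the real kvmppc_set_gpr() additionally mirrors
low-GPR writes into a second shadow copy, so treat this as an illustration of
the pattern, not the implementation:

/* GPRs r0..r13 live in a shadow area that gets swapped on world switch;
 * r14..r31 stay in the main vcpu struct. Callers always go through the
 * accessors and never need to know where a given register lives. */
#include <stdio.h>

typedef unsigned long ulong;

struct shadow_vcpu { ulong gpr[14]; };

struct vcpu {
	ulong gpr[32];             /* r14..r31 are used from here */
	struct shadow_vcpu shadow; /* r0..r13 are used from here */
};

static inline void set_gpr(struct vcpu *vcpu, int num, ulong val)
{
	if (num < 14)
		vcpu->shadow.gpr[num] = val;
	else
		vcpu->gpr[num] = val;
}

static inline ulong get_gpr(struct vcpu *vcpu, int num)
{
	return (num < 14) ? vcpu->shadow.gpr[num] : vcpu->gpr[num];
}

int main(void)
{
	struct vcpu v = { { 0 } };

	set_gpr(&v, 3, 0x1234);   /* routed to the shadow area */
	set_gpr(&v, 20, 0x5678);  /* routed to the vcpu struct */
	printf("r3=%lx r20=%lx\n", get_gpr(&v, 3), get_gpr(&v, 20));
	return 0;
}

The point of funneling every access through the accessors is that the 32-bit
and 64-bit hosts can keep the shadow state in different places (the PACA vs.
a kzalloc'ed struct hung off the vcpu) without any caller changing.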

* [PATCH 10/27] KVM: PPC: Use KVM_BOOK3S_HANDLER
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

So far we had a lot of conditional code on CONFIG_KVM_BOOK3S_64_HANDLER.
As we're moving towards common code between 32 and 64 bits, most of
these ifdefs can be moved to a more generic define,
CONFIG_KVM_BOOK3S_HANDLER.

This patch adds the new generic config option and moves ifdefs over.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_asm.h |    4 ++--
 arch/powerpc/include/asm/paca.h           |    2 +-
 arch/powerpc/kvm/Kconfig                  |    4 ++++
 arch/powerpc/kvm/Makefile                 |    2 +-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h b/arch/powerpc/include/asm/kvm_book3s_asm.h
index e915e7d..36fdb3a 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -22,7 +22,7 @@
 
 #ifdef __ASSEMBLY__
 
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+#ifdef CONFIG_KVM_BOOK3S_HANDLER
 
 #include <asm/kvm_asm.h>
 
@@ -55,7 +55,7 @@ kvmppc_resume_\intno:
 .macro DO_KVM intno
 .endm
 
-#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
+#endif /* CONFIG_KVM_BOOK3S_HANDLER */
 
 #else  /*__ASSEMBLY__ */
 
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index dc3ccdf..33347ea 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -136,7 +136,7 @@ struct paca_struct {
 	u64 startpurr;			/* PURR/TB value snapshot */
 	u64 startspurr;			/* SPURR value snapshot */
 
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+#ifdef CONFIG_KVM_BOOK3S_HANDLER
 	struct  {
 		u64     esid;
 		u64     vsid;
diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index 60624cc..8ef3766 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -22,8 +22,12 @@ config KVM
 	select ANON_INODES
 	select KVM_MMIO
 
+config KVM_BOOK3S_HANDLER
+	bool
+
 config KVM_BOOK3S_64_HANDLER
 	bool
+	select KVM_BOOK3S_HANDLER
 
 config KVM_BOOK3S_64
 	tristate "KVM support for PowerPC book3s_64 processors"
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 0a67310..f621ce6 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -14,7 +14,7 @@ CFLAGS_emulate.o  := -I.
 
 common-objs-y += powerpc.o emulate.o
 obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
-obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_exports.o
+obj-$(CONFIG_KVM_BOOK3S_HANDLER) += book3s_exports.o
 
 AFLAGS_booke_interrupts.o := -I$(obj)
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread
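
For readers who don't live in Kconfig, the effect of the hidden bool plus
"select" can be approximated with the preprocessor. This is only a toy
analogue: CONFIG_KVM_BOOK3S_32_HANDLER is used here as a hypothetical
stand-in for the 32-bit option this series enables later, and real kernels
get these symbols from .config rather than -D flags:

/* Build with -DCONFIG_KVM_BOOK3S_64_HANDLER (or the 32-bit define) to
 * mimic the corresponding Kconfig option being enabled. */
#include <stdio.h>

/* What "select KVM_BOOK3S_HANDLER" amounts to: enabling either
 * subarch-specific option turns on the generic symbol, so shared code
 * only ever tests the generic one. */
#if defined(CONFIG_KVM_BOOK3S_64_HANDLER) || \
    defined(CONFIG_KVM_BOOK3S_32_HANDLER)
#define CONFIG_KVM_BOOK3S_HANDLER 1
#endif

int main(void)
{
#ifdef CONFIG_KVM_BOOK3S_HANDLER
	puts("common Book3S handler code is built in");
#else
	puts("Book3S handlers not configured");
#endif
	return 0;
}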

* [PATCH 12/27] PPC: Add STLU
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

For assembly code there are several "long" load and store defines already.
The one that's missing is the typical stack store, stdu/stwu.

So let's add that define as well, making my KVM code happy.
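
For illustration (a sketch, not part of the patch itself), code shared
between the two word sizes can then open and tear down a stack frame with a
single sequence:

	PPC_STLU r1, -INT_FRAME_SIZE(r1)	/* stwu on 32 bit, stdu on 64 bit */
	/* ... do work with the frame ... */
	addi	r1, r1, INT_FRAME_SIZE		/* pop the frame again */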

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/asm-compat.h |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-compat.h b/arch/powerpc/include/asm/asm-compat.h
index a9b91ed..2048a6a 100644
--- a/arch/powerpc/include/asm/asm-compat.h
+++ b/arch/powerpc/include/asm/asm-compat.h
@@ -21,6 +21,7 @@
 /* operations for longs and pointers */
 #define PPC_LL		stringify_in_c(ld)
 #define PPC_STL		stringify_in_c(std)
+#define PPC_STLU	stringify_in_c(stdu)
 #define PPC_LCMPI	stringify_in_c(cmpdi)
 #define PPC_LONG	stringify_in_c(.llong)
 #define PPC_LONG_ALIGN	stringify_in_c(.balign 8)
@@ -44,6 +45,7 @@
 /* operations for longs and pointers */
 #define PPC_LL		stringify_in_c(lwz)
 #define PPC_STL		stringify_in_c(stw)
+#define PPC_STLU	stringify_in_c(stwu)
 #define PPC_LCMPI	stringify_in_c(cmpwi)
 #define PPC_LONG	stringify_in_c(.long)
 #define PPC_LONG_ALIGN	stringify_in_c(.balign 4)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 13/27] KVM: PPC: Use now shadowed vcpu fields
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The shadow vcpu now contains some fields that we no longer use from the
vcpu. Access to them happens through inline functions that happily use the
shadow vcpu fields.

So let's ifdef them out to BookE only and add the asm-offsets for the shadow
vcpu fields.
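
As background (an illustrative sketch, not from the patch): the DEFINE()
entries in asm-offsets.c become plain numeric constants in the generated
asm-offsets.h, so assembly code can address C struct members by name.
Assuming r3 already points at the shadow vcpu, an access looks like:

	PPC_LL	r4, SVCPU_PC(r3)	/* r4 = svcpu->pc */
	PPC_STL	r4, SVCPU_LR(r3)	/* svcpu->lr = r4 */

Note also the VCPU_SVCPU define below: it is the offset delta between the
shadow vcpu and the vcpu inside struct kvmppc_vcpu_book3s, so assembly can
step from one pointer to the other with a single addi.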

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_host.h |    8 +--
 arch/powerpc/include/asm/paca.h     |    6 --
 arch/powerpc/kernel/asm-offsets.c   |   91 +++++++++++++++++++++--------------
 3 files changed, 57 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 22801f8..5a83995 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -191,11 +191,11 @@ struct kvm_vcpu_arch {
 	u32 qpr[32];
 #endif
 
+#ifdef CONFIG_BOOKE
 	ulong pc;
 	ulong ctr;
 	ulong lr;
 
-#ifdef CONFIG_BOOKE
 	ulong xer;
 	u32 cr;
 #endif
@@ -203,7 +203,6 @@ struct kvm_vcpu_arch {
 	ulong msr;
 #ifdef CONFIG_PPC_BOOK3S
 	ulong shadow_msr;
-	ulong shadow_srr1;
 	ulong hflags;
 	ulong guest_owned_ext;
 #endif
@@ -258,14 +257,13 @@ struct kvm_vcpu_arch {
 	struct dentry *debugfs_exit_timing;
 #endif
 
+#ifdef CONFIG_BOOKE
 	u32 last_inst;
-#ifdef CONFIG_PPC64
-	u32 fault_dsisr;
-#endif
 	ulong fault_dear;
 	ulong fault_esr;
 	ulong queued_dear;
 	ulong queued_esr;
+#endif
 	gpa_t paddr_accessed;
 
 	u8 io_gpr; /* GPR used as IO source/target */
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 33347ea..224eb37 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -137,14 +137,8 @@ struct paca_struct {
 	u64 startspurr;			/* SPURR value snapshot */
 
 #ifdef CONFIG_KVM_BOOK3S_HANDLER
-	struct  {
-		u64     esid;
-		u64     vsid;
-	} kvm_slb[64];			/* guest SLB */
 	/* We use this to store guest state in */
 	struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
-	u8 kvm_slb_max;			/* highest used guest slb entry */
-	u8 kvm_in_guest;		/* are we inside the guest? */
 #endif
 };
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 57a8c49..e8003ff 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -50,6 +50,9 @@
 #endif
 #ifdef CONFIG_KVM
 #include <linux/kvm_host.h>
+#ifndef CONFIG_BOOKE
+#include <asm/kvm_book3s.h>
+#endif
 #endif
 
 #ifdef CONFIG_PPC32
@@ -191,33 +194,9 @@ int main(void)
 	DEFINE(PACA_DATA_OFFSET, offsetof(struct paca_struct, data_offset));
 	DEFINE(PACA_TRAP_SAVE, offsetof(struct paca_struct, trap_save));
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-	DEFINE(PACA_KVM_IN_GUEST, offsetof(struct paca_struct, kvm_in_guest));
-	DEFINE(PACA_KVM_SLB, offsetof(struct paca_struct, kvm_slb));
-	DEFINE(PACA_KVM_SLB_MAX, offsetof(struct paca_struct, kvm_slb_max));
-	DEFINE(PACA_KVM_CR, offsetof(struct paca_struct, shadow_vcpu.cr));
-	DEFINE(PACA_KVM_XER, offsetof(struct paca_struct, shadow_vcpu.xer));
-	DEFINE(PACA_KVM_R0, offsetof(struct paca_struct, shadow_vcpu.gpr[0]));
-	DEFINE(PACA_KVM_R1, offsetof(struct paca_struct, shadow_vcpu.gpr[1]));
-	DEFINE(PACA_KVM_R2, offsetof(struct paca_struct, shadow_vcpu.gpr[2]));
-	DEFINE(PACA_KVM_R3, offsetof(struct paca_struct, shadow_vcpu.gpr[3]));
-	DEFINE(PACA_KVM_R4, offsetof(struct paca_struct, shadow_vcpu.gpr[4]));
-	DEFINE(PACA_KVM_R5, offsetof(struct paca_struct, shadow_vcpu.gpr[5]));
-	DEFINE(PACA_KVM_R6, offsetof(struct paca_struct, shadow_vcpu.gpr[6]));
-	DEFINE(PACA_KVM_R7, offsetof(struct paca_struct, shadow_vcpu.gpr[7]));
-	DEFINE(PACA_KVM_R8, offsetof(struct paca_struct, shadow_vcpu.gpr[8]));
-	DEFINE(PACA_KVM_R9, offsetof(struct paca_struct, shadow_vcpu.gpr[9]));
-	DEFINE(PACA_KVM_R10, offsetof(struct paca_struct, shadow_vcpu.gpr[10]));
-	DEFINE(PACA_KVM_R11, offsetof(struct paca_struct, shadow_vcpu.gpr[11]));
-	DEFINE(PACA_KVM_R12, offsetof(struct paca_struct, shadow_vcpu.gpr[12]));
-	DEFINE(PACA_KVM_R13, offsetof(struct paca_struct, shadow_vcpu.gpr[13]));
-	DEFINE(PACA_KVM_HOST_R1, offsetof(struct paca_struct, shadow_vcpu.host_r1));
-	DEFINE(PACA_KVM_HOST_R2, offsetof(struct paca_struct, shadow_vcpu.host_r2));
-	DEFINE(PACA_KVM_VMHANDLER, offsetof(struct paca_struct,
-					    shadow_vcpu.vmhandler));
-	DEFINE(PACA_KVM_SCRATCH0, offsetof(struct paca_struct,
-					   shadow_vcpu.scratch0));
-	DEFINE(PACA_KVM_SCRATCH1, offsetof(struct paca_struct,
-					   shadow_vcpu.scratch1));
+	DEFINE(PACA_KVM_SVCPU, offsetof(struct paca_struct, shadow_vcpu));
+	DEFINE(SVCPU_SLB, offsetof(struct kvmppc_book3s_shadow_vcpu, slb));
+	DEFINE(SVCPU_SLB_MAX, offsetof(struct kvmppc_book3s_shadow_vcpu, slb_max));
 #endif
 #endif /* CONFIG_PPC64 */
 
@@ -412,9 +391,6 @@ int main(void)
 	DEFINE(VCPU_HOST_STACK, offsetof(struct kvm_vcpu, arch.host_stack));
 	DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid));
 	DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr));
-	DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
-	DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
-	DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
 	DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
 	DEFINE(VCPU_SPRG4, offsetof(struct kvm_vcpu, arch.sprg4));
 	DEFINE(VCPU_SPRG5, offsetof(struct kvm_vcpu, arch.sprg5));
@@ -422,26 +398,67 @@ int main(void)
 	DEFINE(VCPU_SPRG7, offsetof(struct kvm_vcpu, arch.sprg7));
 	DEFINE(VCPU_SHADOW_PID, offsetof(struct kvm_vcpu, arch.shadow_pid));
 
-	DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
-	DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
-	DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
-
 	/* book3s */
 #ifdef CONFIG_PPC_BOOK3S
-	DEFINE(VCPU_FAULT_DSISR, offsetof(struct kvm_vcpu, arch.fault_dsisr));
 	DEFINE(VCPU_HOST_RETIP, offsetof(struct kvm_vcpu, arch.host_retip));
-	DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
 	DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
 	DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
-	DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
 	DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
 	DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
 	DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
 	DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall));
 	DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags));
+	DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
+			   offsetof(struct kvmppc_vcpu_book3s, vcpu));
+	DEFINE(SVCPU_CR, offsetof(struct kvmppc_book3s_shadow_vcpu, cr));
+	DEFINE(SVCPU_XER, offsetof(struct kvmppc_book3s_shadow_vcpu, xer));
+	DEFINE(SVCPU_CTR, offsetof(struct kvmppc_book3s_shadow_vcpu, ctr));
+	DEFINE(SVCPU_LR, offsetof(struct kvmppc_book3s_shadow_vcpu, lr));
+	DEFINE(SVCPU_PC, offsetof(struct kvmppc_book3s_shadow_vcpu, pc));
+	DEFINE(SVCPU_R0, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[0]));
+	DEFINE(SVCPU_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[1]));
+	DEFINE(SVCPU_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[2]));
+	DEFINE(SVCPU_R3, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[3]));
+	DEFINE(SVCPU_R4, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[4]));
+	DEFINE(SVCPU_R5, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[5]));
+	DEFINE(SVCPU_R6, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[6]));
+	DEFINE(SVCPU_R7, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[7]));
+	DEFINE(SVCPU_R8, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[8]));
+	DEFINE(SVCPU_R9, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[9]));
+	DEFINE(SVCPU_R10, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[10]));
+	DEFINE(SVCPU_R11, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[11]));
+	DEFINE(SVCPU_R12, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[12]));
+	DEFINE(SVCPU_R13, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[13]));
+	DEFINE(SVCPU_HOST_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r1));
+	DEFINE(SVCPU_HOST_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r2));
+	DEFINE(SVCPU_VMHANDLER, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					 vmhandler));
+	DEFINE(SVCPU_SCRATCH0, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					scratch0));
+	DEFINE(SVCPU_SCRATCH1, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					scratch1));
+	DEFINE(SVCPU_IN_GUEST, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					in_guest));
+	DEFINE(SVCPU_FAULT_DSISR, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					   fault_dsisr));
+	DEFINE(SVCPU_FAULT_DAR, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					 fault_dar));
+	DEFINE(SVCPU_LAST_INST, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					 last_inst));
+	DEFINE(SVCPU_SHADOW_SRR1, offsetof(struct kvmppc_book3s_shadow_vcpu,
+					   shadow_srr1));
+#ifdef CONFIG_PPC_BOOK3S_32
+	DEFINE(SVCPU_SR, offsetof(struct kvmppc_book3s_shadow_vcpu, sr));
+#endif
 #else
 	DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
 	DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
+	DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
+	DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
+	DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
+	DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
+	DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
+	DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
 #endif /* CONFIG_PPC_BOOK3S */
 #endif
 #ifdef CONFIG_44x
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 14/27] KVM: PPC: Extract MMU init
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The host shadow MMU code needs to be initialized: it needs to fetch a
segment that it can use to put shadow PTEs into.

That initialization code used to live in generic code, which is icky. Let's
move it over to the respective MMU file.
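
As a worked example (assuming USER_ESID_BITS is 16, the ppc64 value at the
time): if __init_new_context() hands out context id 5, kvmppc_mmu_init()
below computes vsid_first = 5 << 16 = 0x50000 and
vsid_max = (6 << 16) - 1 = 0x5ffff, giving the vcpu a private window of
65536 VSIDs to back its shadow segments with.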

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_ppc.h    |    1 +
 arch/powerpc/kvm/book3s.c             |    8 +-------
 arch/powerpc/kvm/book3s_64_mmu_host.c |   18 ++++++++++++++++++
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index edade84..18d139e 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -69,6 +69,7 @@ extern void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr,
 extern void kvmppc_mmu_priv_switch(struct kvm_vcpu *vcpu, int usermode);
 extern void kvmppc_mmu_switch_pid(struct kvm_vcpu *vcpu, u32 pid);
 extern void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu);
+extern int kvmppc_mmu_init(struct kvm_vcpu *vcpu);
 extern int kvmppc_mmu_dtlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
 extern int kvmppc_mmu_itlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
 extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 6eb2da2..b917b97 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1201,14 +1201,9 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 
 	vcpu->arch.shadow_msr = MSR_USER64;
 
-	err = __init_new_context();
+	err = kvmppc_mmu_init(vcpu);
 	if (err < 0)
 		goto free_shadow_vcpu;
-	vcpu_book3s->context_id = err;
-
-	vcpu_book3s->vsid_max = ((vcpu_book3s->context_id + 1) << USER_ESID_BITS) - 1;
-	vcpu_book3s->vsid_first = vcpu_book3s->context_id << USER_ESID_BITS;
-	vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
 
 	return vcpu;
 
@@ -1224,7 +1219,6 @@ void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
 
-	__destroy_context(vcpu_book3s->context_id);
 	kvm_vcpu_uninit(vcpu);
 	kfree(vcpu_book3s->shadow_vcpu);
 	vfree(vcpu_book3s);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index b0f5b4e..0eea589 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -405,4 +405,22 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvmppc_mmu_pte_flush(vcpu, 0, 0);
+	__destroy_context(to_book3s(vcpu)->context_id);
+}
+
+int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
+{
+	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
+	int err;
+
+	err = __init_new_context();
+	if (err < 0)
+		return -1;
+	vcpu3s->context_id = err;
+
+	vcpu3s->vsid_max = ((vcpu3s->context_id + 1) << USER_ESID_BITS) - 1;
+	vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
+	vcpu3s->vsid_next = vcpu3s->vsid_first;
+
+	return 0;
 }
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 15/27] KVM: PPC: Make real mode handler generic
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The real mode handler code was originally written for 64 bit Book3S only.
But since we now add 32 bit functionality too, we need to make some tweaks
to it.

This patch converts the code to use the "long" access defines and to use
the fields from the shadow VCPU that we just moved there.
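
Two of the new defines are worth a quick illustration (a sketch, not taken
verbatim from the patch):

	bl	FUNC(load_up_fpu)	/* bl .load_up_fpu on 64 bit (ELFv1
					   dot symbols), bl load_up_fpu on
					   32 bit */
	PPC_STL	r3, STACK_LR(r1)	/* std on 64 bit, stw on 32 bit */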

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_rmhandlers.S |  119 +++++++++++++++++++++++++---------
 1 files changed, 88 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index bd08535..0c8d331 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -22,7 +22,10 @@
 #include <asm/reg.h>
 #include <asm/page.h>
 #include <asm/asm-offsets.h>
+
+#ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/exception-64s.h>
+#endif
 
 /*****************************************************************************
  *                                                                           *
@@ -30,6 +33,39 @@
  *                                                                           *
  ****************************************************************************/
 
+#if defined(CONFIG_PPC_BOOK3S_64)
+
+#define LOAD_SHADOW_VCPU(reg)				\
+	mfspr	reg, SPRN_SPRG_PACA
+
+#define SHADOW_VCPU_OFF		PACA_KVM_SVCPU
+#define MSR_NOIRQ		MSR_KERNEL & ~(MSR_IR | MSR_DR)
+#define FUNC(name) 		GLUE(.,name)
+
+#elif defined(CONFIG_PPC_BOOK3S_32)
+
+#define LOAD_SHADOW_VCPU(reg)						\
+	mfspr	reg, SPRN_SPRG_THREAD;					\
+	lwz	reg, THREAD_KVM_SVCPU(reg);				\
+	/* PPC32 can have a NULL pointer - let's check for that */	\
+	mtspr   SPRN_SPRG_SCRATCH1, r12;	/* Save r12 */		\
+	mfcr	r12;							\
+	cmpwi	reg, 0;							\
+	bne	1f;							\
+	mfspr	reg, SPRN_SPRG_SCRATCH0;				\
+	mtcr	r12;							\
+	mfspr	r12, SPRN_SPRG_SCRATCH1;				\
+	b	kvmppc_resume_\intno;					\
+1:;									\
+	mtcr	r12;							\
+	mfspr	r12, SPRN_SPRG_SCRATCH1;				\
+	tophys(reg, reg)
+
+#define SHADOW_VCPU_OFF		0
+#define MSR_NOIRQ		MSR_KERNEL
+#define FUNC(name)		name
+
+#endif
 
 .macro INTERRUPT_TRAMPOLINE intno
 
@@ -42,19 +78,19 @@ kvmppc_trampoline_\intno:
 	 * First thing to do is to find out if we're coming
 	 * from a KVM guest or a Linux process.
 	 *
-	 * To distinguish, we check a magic byte in the PACA
+	 * To distinguish, we check a magic byte in the PACA/current
 	 */
-	mfspr	r13, SPRN_SPRG_PACA		/* r13 = PACA */
-	std	r12, PACA_KVM_SCRATCH0(r13)
+	LOAD_SHADOW_VCPU(r13)
+	PPC_STL	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
 	mfcr	r12
-	stw	r12, PACA_KVM_SCRATCH1(r13)
-	lbz	r12, PACA_KVM_IN_GUEST(r13)
+	stw	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
+	lbz	r12, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
 	cmpwi	r12, KVM_GUEST_MODE_NONE
 	bne	..kvmppc_handler_hasmagic_\intno
 	/* No KVM guest? Then jump back to the Linux handler! */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	lwz	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
 	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
+	PPC_LL	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
 	mfspr	r13, SPRN_SPRG_SCRATCH0		/* r13 = original r13 */
 	b	kvmppc_resume_\intno		/* Get back original handler */
 
@@ -76,9 +112,7 @@ kvmppc_trampoline_\intno:
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSTEM_RESET
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_MACHINE_CHECK
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_STORAGE
-INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_EXTERNAL
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALIGNMENT
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PROGRAM
@@ -88,7 +122,14 @@ INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_SYSCALL
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_TRACE
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_PERFMON
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_ALTIVEC
+
+/* Those are only available on 64 bit machines */
+
+#ifdef CONFIG_PPC_BOOK3S_64
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_DATA_SEGMENT
+INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_INST_SEGMENT
 INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
+#endif
 
 /*
  * Bring us back to the faulting code, but skip the
@@ -99,11 +140,11 @@ INTERRUPT_TRAMPOLINE	BOOK3S_INTERRUPT_VSX
  *
  * Input Registers:
  *
- * R12               = free
- * R13               = PACA
- * PACA.KVM.SCRATCH0 = guest R12
- * PACA.KVM.SCRATCH1 = guest CR
- * SPRG_SCRATCH0     = guest R13
+ * R12            = free
+ * R13            = Shadow VCPU (PACA)
+ * SVCPU.SCRATCH0 = guest R12
+ * SVCPU.SCRATCH1 = guest CR
+ * SPRG_SCRATCH0  = guest R13
  *
  */
 kvmppc_handler_skip_ins:
@@ -114,9 +155,9 @@ kvmppc_handler_skip_ins:
 	mtsrr0	r12
 
 	/* Clean up all state */
-	lwz	r12, PACA_KVM_SCRATCH1(r13)
+	lwz	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
 	mtcr	r12
-	ld	r12, PACA_KVM_SCRATCH0(r13)
+	PPC_LL	r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
 	mfspr	r13, SPRN_SPRG_SCRATCH0
 
 	/* And get back into the code */
@@ -147,32 +188,48 @@ kvmppc_handler_lowmem_trampoline_end:
  *
  * R3 = function
  * R4 = MSR
- * R5 = CTR
+ * R5 = scratch register
  *
  */
 _GLOBAL(kvmppc_rmcall)
-	mtmsr	r4		/* Disable relocation, so mtsrr
+	LOAD_REG_IMMEDIATE(r5, MSR_NOIRQ)
+	mtmsr	r5		/* Disable relocation and interrupts, so mtsrr
 				   doesn't get interrupted */
-	mtctr	r5
+	sync
 	mtsrr0	r3
 	mtsrr1	r4
 	RFI
 
+#if defined(CONFIG_PPC_BOOK3S_32)
+#define STACK_LR	INT_FRAME_SIZE+4
+#elif defined(CONFIG_PPC_BOOK3S_64)
+#define STACK_LR	_LINK
+#endif
+
 /*
  * Activate current's external feature (FPU/Altivec/VSX)
  */
-#define define_load_up(what) 				\
-							\
-_GLOBAL(kvmppc_load_up_ ## what);			\
-	stdu	r1, -INT_FRAME_SIZE(r1);		\
-	mflr	r3;					\
-	std	r3, _LINK(r1);				\
-							\
-	bl	.load_up_ ## what;			\
-							\
-	ld	r3, _LINK(r1);				\
-	mtlr	r3;					\
-	addi	r1, r1, INT_FRAME_SIZE;			\
+#define define_load_up(what) 					\
+								\
+_GLOBAL(kvmppc_load_up_ ## what);				\
+	PPC_STLU r1, -INT_FRAME_SIZE(r1);			\
+	mflr	r3;						\
+	PPC_STL	r3, STACK_LR(r1);				\
+	PPC_STL	r20, _NIP(r1);					\
+	mfmsr	r20;						\
+	LOAD_REG_IMMEDIATE(r3, MSR_DR|MSR_EE);			\
+	andc	r3,r20,r3;		/* Disable DR,EE */	\
+	mtmsr	r3;						\
+	sync;							\
+								\
+	bl	FUNC(load_up_ ## what);				\
+								\
+	mtmsr	r20;			/* Enable DR,EE */	\
+	sync;							\
+	PPC_LL	r3, STACK_LR(r1);				\
+	PPC_LL	r20, _NIP(r1);					\
+	mtlr	r3;						\
+	addi	r1, r1, INT_FRAME_SIZE;				\
 	blr
 
 define_load_up(fpu)
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 16/27] KVM: PPC: Make highmem code generic
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Since we now have several fields in the shadow VCPU, we also change
the internal calling convention between the different entry/exit code
layers.

Let's reflect that in the IR=1 code and make sure we use "long" defines
for long field access.
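
One detail worth spelling out (illustrative commentary, not from the patch):
both DISABLE_INTERRUPTS variants below clear MSR_EE without touching the
rest of the MSR:

	/* 64 bit: rotate EE up to the MSB, clear it, rotate back */
	mfmsr	r0
	rldicl	r0,r0,48,1
	rotldi	r0,r0,16
	mtmsrd	r0,1		/* L=1, so only EE/RI are updated */

	/* 32 bit: wrapping rlwinm mask that clears only bit 16 (EE) */
	mfmsr	r0
	rlwinm	r0,r0,0,17,15
	mtmsr	r0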

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_interrupts.S |  201 +++++++++++++++++-----------------
 1 files changed, 101 insertions(+), 100 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index faca876..f5b3358 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -24,36 +24,56 @@
 #include <asm/asm-offsets.h>
 #include <asm/exception-64s.h>
 
-#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
-#define ULONG_SIZE 8
-#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
+#if defined(CONFIG_PPC_BOOK3S_64)
 
-.macro DISABLE_INTERRUPTS
-       mfmsr   r0
-       rldicl  r0,r0,48,1
-       rotldi  r0,r0,16
-       mtmsrd  r0,1
-.endm
+#define ULONG_SIZE 		8
+#define FUNC(name) 		GLUE(.,name)
 
+#define GET_SHADOW_VCPU(reg)    \
+        addi    reg, r13, PACA_KVM_SVCPU
+
+#define DISABLE_INTERRUPTS	\
+	mfmsr   r0;		\
+	rldicl  r0,r0,48,1;	\
+	rotldi  r0,r0,16;	\
+	mtmsrd  r0,1;		\
+
+#elif defined(CONFIG_PPC_BOOK3S_32)
+
+#define ULONG_SIZE              4
+#define FUNC(name)		name
+
+#define GET_SHADOW_VCPU(reg)    \
+        lwz     reg, (THREAD + THREAD_KVM_SVCPU)(r2)
+
+#define DISABLE_INTERRUPTS	\
+	mfmsr   r0;		\
+	rlwinm  r0,r0,0,17,15;	\
+	mtmsr   r0;		\
+
+#endif /* CONFIG_PPC_BOOK3S_XX */
+
+
+#define VCPU_GPR(n)		(VCPU_GPRS + (n * ULONG_SIZE))
 #define VCPU_LOAD_NVGPRS(vcpu) \
-	ld	r14, VCPU_GPR(r14)(vcpu); \
-	ld	r15, VCPU_GPR(r15)(vcpu); \
-	ld	r16, VCPU_GPR(r16)(vcpu); \
-	ld	r17, VCPU_GPR(r17)(vcpu); \
-	ld	r18, VCPU_GPR(r18)(vcpu); \
-	ld	r19, VCPU_GPR(r19)(vcpu); \
-	ld	r20, VCPU_GPR(r20)(vcpu); \
-	ld	r21, VCPU_GPR(r21)(vcpu); \
-	ld	r22, VCPU_GPR(r22)(vcpu); \
-	ld	r23, VCPU_GPR(r23)(vcpu); \
-	ld	r24, VCPU_GPR(r24)(vcpu); \
-	ld	r25, VCPU_GPR(r25)(vcpu); \
-	ld	r26, VCPU_GPR(r26)(vcpu); \
-	ld	r27, VCPU_GPR(r27)(vcpu); \
-	ld	r28, VCPU_GPR(r28)(vcpu); \
-	ld	r29, VCPU_GPR(r29)(vcpu); \
-	ld	r30, VCPU_GPR(r30)(vcpu); \
-	ld	r31, VCPU_GPR(r31)(vcpu); \
+	PPC_LL	r14, VCPU_GPR(r14)(vcpu); \
+	PPC_LL	r15, VCPU_GPR(r15)(vcpu); \
+	PPC_LL	r16, VCPU_GPR(r16)(vcpu); \
+	PPC_LL	r17, VCPU_GPR(r17)(vcpu); \
+	PPC_LL	r18, VCPU_GPR(r18)(vcpu); \
+	PPC_LL	r19, VCPU_GPR(r19)(vcpu); \
+	PPC_LL	r20, VCPU_GPR(r20)(vcpu); \
+	PPC_LL	r21, VCPU_GPR(r21)(vcpu); \
+	PPC_LL	r22, VCPU_GPR(r22)(vcpu); \
+	PPC_LL	r23, VCPU_GPR(r23)(vcpu); \
+	PPC_LL	r24, VCPU_GPR(r24)(vcpu); \
+	PPC_LL	r25, VCPU_GPR(r25)(vcpu); \
+	PPC_LL	r26, VCPU_GPR(r26)(vcpu); \
+	PPC_LL	r27, VCPU_GPR(r27)(vcpu); \
+	PPC_LL	r28, VCPU_GPR(r28)(vcpu); \
+	PPC_LL	r29, VCPU_GPR(r29)(vcpu); \
+	PPC_LL	r30, VCPU_GPR(r30)(vcpu); \
+	PPC_LL	r31, VCPU_GPR(r31)(vcpu); \
 
 /*****************************************************************************
  *                                                                           *
@@ -69,11 +89,11 @@ _GLOBAL(__kvmppc_vcpu_entry)
 
 kvm_start_entry:
 	/* Write correct stack frame */
-	mflr    r0
-	std     r0,16(r1)
+	mflr	r0
+	PPC_STL	r0,PPC_LR_STKOFF(r1)
 
 	/* Save host state to the stack */
-	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
 
 	/* Save r3 (kvm_run) and r4 (vcpu) */
 	SAVE_2GPRS(3, r1)
@@ -82,33 +102,28 @@ kvm_start_entry:
 	SAVE_NVGPRS(r1)
 
 	/* Save LR */
-	std	r0, _LINK(r1)
+	PPC_STL	r0, _LINK(r1)
 
 	/* Load non-volatile guest state from the vcpu */
 	VCPU_LOAD_NVGPRS(r4)
 
+	GET_SHADOW_VCPU(r5)
+
 	/* Save R1/R2 in the PACA */
-	std	r1, PACA_KVM_HOST_R1(r13)
-	std	r2, PACA_KVM_HOST_R2(r13)
+	PPC_STL	r1, SVCPU_HOST_R1(r5)
+	PPC_STL	r2, SVCPU_HOST_R2(r5)
 
 	/* XXX swap in/out on load? */
-	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
-	std	r3, PACA_KVM_VMHANDLER(r13)
+	PPC_LL	r3, VCPU_HIGHMEM_HANDLER(r4)
+	PPC_STL	r3, SVCPU_VMHANDLER(r5)
 
 kvm_start_lightweight:
 
-	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
-	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
-
-	/* Load some guest state in the respective registers */
-	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
-					/* will be swapped in by rmcall */
-
-	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
-	mtlr	r3			/* LR = r3 */
+	PPC_LL	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
 
 	DISABLE_INTERRUPTS
 
+#ifdef CONFIG_PPC_BOOK3S_64
 	/* Some guests may need to have dcbz set to 32 byte length.
 	 *
 	 * Usually we ensure that by patching the guest's instructions
@@ -118,7 +133,7 @@ kvm_start_lightweight:
 	 * because that's a lot faster.
 	 */
 
-	ld	r3, VCPU_HFLAGS(r4)
+	PPC_LL	r3, VCPU_HFLAGS(r4)
 	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
 	beq	no_dcbz32_on
 
@@ -128,13 +143,15 @@ kvm_start_lightweight:
 
 no_dcbz32_on:
 
-	ld	r6, VCPU_RMCALL(r4)
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+	PPC_LL	r6, VCPU_RMCALL(r4)
 	mtctr	r6
 
-	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
+	PPC_LL	r3, VCPU_TRAMPOLINE_ENTER(r4)
 	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
 
-	/* Jump to SLB patching handlder and into our guest */
+	/* Jump to segment patching handler and into our guest */
 	bctr
 
 /*
@@ -149,31 +166,20 @@ kvmppc_handler_highmem:
 	/*
 	 * Register usage at this point:
 	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
+	 * R1       = host R1
+	 * R2       = host R2
+	 * R12      = exit handler id
+	 * R13      = PACA
+	 * SVCPU.*  = guest *
 	 *
 	 */
 
 	/* R7 = vcpu */
-	ld	r7, GPR4(r1)
-
-	/* Now save the guest state */
-
-	stw	r0, VCPU_LAST_INST(r7)
+	PPC_LL	r7, GPR4(r1)
 
-	std	r3, VCPU_PC(r7)
-	std	r4, VCPU_SHADOW_SRR1(r7)
-	std	r5, VCPU_FAULT_DEAR(r7)
-	stw	r6, VCPU_FAULT_DSISR(r7)
+#ifdef CONFIG_PPC_BOOK3S_64
 
-	ld	r5, VCPU_HFLAGS(r7)
+	PPC_LL	r5, VCPU_HFLAGS(r7)
 	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
 	beq	no_dcbz32_off
 
@@ -184,35 +190,29 @@ kvmppc_handler_highmem:
 
 no_dcbz32_off:
 
-	std	r14, VCPU_GPR(r14)(r7)
-	std	r15, VCPU_GPR(r15)(r7)
-	std	r16, VCPU_GPR(r16)(r7)
-	std	r17, VCPU_GPR(r17)(r7)
-	std	r18, VCPU_GPR(r18)(r7)
-	std	r19, VCPU_GPR(r19)(r7)
-	std	r20, VCPU_GPR(r20)(r7)
-	std	r21, VCPU_GPR(r21)(r7)
-	std	r22, VCPU_GPR(r22)(r7)
-	std	r23, VCPU_GPR(r23)(r7)
-	std	r24, VCPU_GPR(r24)(r7)
-	std	r25, VCPU_GPR(r25)(r7)
-	std	r26, VCPU_GPR(r26)(r7)
-	std	r27, VCPU_GPR(r27)(r7)
-	std	r28, VCPU_GPR(r28)(r7)
-	std	r29, VCPU_GPR(r29)(r7)
-	std	r30, VCPU_GPR(r30)(r7)
-	std	r31, VCPU_GPR(r31)(r7)
-
-	/* Save guest CTR */
-	mfctr	r5
-	std	r5, VCPU_CTR(r7)
-
-	/* Save guest LR */
-	mflr	r5
-	std	r5, VCPU_LR(r7)
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+	PPC_STL	r14, VCPU_GPR(r14)(r7)
+	PPC_STL	r15, VCPU_GPR(r15)(r7)
+	PPC_STL	r16, VCPU_GPR(r16)(r7)
+	PPC_STL	r17, VCPU_GPR(r17)(r7)
+	PPC_STL	r18, VCPU_GPR(r18)(r7)
+	PPC_STL	r19, VCPU_GPR(r19)(r7)
+	PPC_STL	r20, VCPU_GPR(r20)(r7)
+	PPC_STL	r21, VCPU_GPR(r21)(r7)
+	PPC_STL	r22, VCPU_GPR(r22)(r7)
+	PPC_STL	r23, VCPU_GPR(r23)(r7)
+	PPC_STL	r24, VCPU_GPR(r24)(r7)
+	PPC_STL	r25, VCPU_GPR(r25)(r7)
+	PPC_STL	r26, VCPU_GPR(r26)(r7)
+	PPC_STL	r27, VCPU_GPR(r27)(r7)
+	PPC_STL	r28, VCPU_GPR(r28)(r7)
+	PPC_STL	r29, VCPU_GPR(r29)(r7)
+	PPC_STL	r30, VCPU_GPR(r30)(r7)
+	PPC_STL	r31, VCPU_GPR(r31)(r7)
 
 	/* Restore host msr -> SRR1 */
-	ld	r6, VCPU_HOST_MSR(r7)
+	PPC_LL	r6, VCPU_HOST_MSR(r7)
 
 	/*
 	 * For some interrupts, we need to call the real Linux
@@ -231,6 +231,7 @@ no_dcbz32_off:
 
 	/* Back to EE=1 */
 	mtmsr	r6
+	sync
 	b	kvm_return_point
 
 call_linux_handler:
@@ -249,14 +250,14 @@ call_linux_handler:
 	 */
 
 	/* Restore host IP -> SRR0 */
-	ld	r5, VCPU_HOST_RETIP(r7)
+	PPC_LL	r5, VCPU_HOST_RETIP(r7)
 
 	/* XXX Better move to a safe function?
 	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
 
 	mtlr	r12
 
-	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
+	PPC_LL	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
 	mtsrr0	r4
 	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
 	mtsrr1	r3
@@ -274,7 +275,7 @@ kvm_return_point:
 
 	/* Restore r3 (kvm_run) and r4 (vcpu) */
 	REST_2GPRS(3, r1)
-	bl	KVMPPC_HANDLE_EXIT
+	bl	FUNC(kvmppc_handle_exit)
 
 	/* If RESUME_GUEST, get back in the loop */
 	cmpwi	r3, RESUME_GUEST
@@ -285,7 +286,7 @@ kvm_return_point:
 
 kvm_exit_loop:
 
-	ld	r4, _LINK(r1)
+	PPC_LL	r4, _LINK(r1)
 	mtlr	r4
 
 	/* Restore non-volatile host registers (r14 - r31) */
@@ -296,8 +297,8 @@ kvm_exit_loop:
 
 kvm_loop_heavyweight:
 
-	ld	r4, _LINK(r1)
-	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
+	PPC_LL	r4, _LINK(r1)
+	PPC_STL r4, (PPC_LR_STKOFF + SWITCH_FRAME_SIZE)(r1)
 
 	/* Load vcpu and cpu_run */
 	REST_2GPRS(3, r1)
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 16/27] KVM: PPC: Make highmem code generic
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Since we now have several fields in the shadow VCPU, we also change
the internal calling convention between the different entry/exit code
layers.

Let's reflect that in the IR=1 code and make sure we use "long" defines
for long field access.
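
For reference, those "long" defines come from asm/asm-compat.h and just
pick the right-sized load/store for the build. Roughly (a simplified
sketch, not the literal header):

#ifdef __powerpc64__
#define PPC_LL		ld	/* load 64-bit */
#define PPC_STL		std	/* store 64-bit */
#define PPC_STLU	stdu	/* store 64-bit with update */
#else
#define PPC_LL		lwz	/* load 32-bit */
#define PPC_STL		stw	/* store 32-bit */
#define PPC_STLU	stwu	/* store 32-bit with update */
#endif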

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_interrupts.S |  201 +++++++++++++++++-----------------
 1 files changed, 101 insertions(+), 100 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index faca876..f5b3358 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -24,36 +24,56 @@
 #include <asm/asm-offsets.h>
 #include <asm/exception-64s.h>
 
-#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
-#define ULONG_SIZE 8
-#define VCPU_GPR(n)     (VCPU_GPRS + (n * ULONG_SIZE))
+#if defined(CONFIG_PPC_BOOK3S_64)
 
-.macro DISABLE_INTERRUPTS
-       mfmsr   r0
-       rldicl  r0,r0,48,1
-       rotldi  r0,r0,16
-       mtmsrd  r0,1
-.endm
+#define ULONG_SIZE 		8
+#define FUNC(name) 		GLUE(.,name)
 
+#define GET_SHADOW_VCPU(reg)    \
+        addi    reg, r13, PACA_KVM_SVCPU
+
+#define DISABLE_INTERRUPTS	\
+	mfmsr   r0;		\
+	rldicl  r0,r0,48,1;	\
+	rotldi  r0,r0,16;	\
+	mtmsrd  r0,1;		\
+
+#elif defined(CONFIG_PPC_BOOK3S_32)
+
+#define ULONG_SIZE              4
+#define FUNC(name)		name
+
+#define GET_SHADOW_VCPU(reg)    \
+        lwz     reg, (THREAD + THREAD_KVM_SVCPU)(r2)
+
+#define DISABLE_INTERRUPTS	\
+	mfmsr   r0;		\
+	rlwinm  r0,r0,0,17,15;	\
+	mtmsr   r0;		\
+
+#endif /* CONFIG_PPC_BOOK3S_XX */
+
+
+#define VCPU_GPR(n)		(VCPU_GPRS + (n * ULONG_SIZE))
 #define VCPU_LOAD_NVGPRS(vcpu) \
-	ld	r14, VCPU_GPR(r14)(vcpu); \
-	ld	r15, VCPU_GPR(r15)(vcpu); \
-	ld	r16, VCPU_GPR(r16)(vcpu); \
-	ld	r17, VCPU_GPR(r17)(vcpu); \
-	ld	r18, VCPU_GPR(r18)(vcpu); \
-	ld	r19, VCPU_GPR(r19)(vcpu); \
-	ld	r20, VCPU_GPR(r20)(vcpu); \
-	ld	r21, VCPU_GPR(r21)(vcpu); \
-	ld	r22, VCPU_GPR(r22)(vcpu); \
-	ld	r23, VCPU_GPR(r23)(vcpu); \
-	ld	r24, VCPU_GPR(r24)(vcpu); \
-	ld	r25, VCPU_GPR(r25)(vcpu); \
-	ld	r26, VCPU_GPR(r26)(vcpu); \
-	ld	r27, VCPU_GPR(r27)(vcpu); \
-	ld	r28, VCPU_GPR(r28)(vcpu); \
-	ld	r29, VCPU_GPR(r29)(vcpu); \
-	ld	r30, VCPU_GPR(r30)(vcpu); \
-	ld	r31, VCPU_GPR(r31)(vcpu); \
+	PPC_LL	r14, VCPU_GPR(r14)(vcpu); \
+	PPC_LL	r15, VCPU_GPR(r15)(vcpu); \
+	PPC_LL	r16, VCPU_GPR(r16)(vcpu); \
+	PPC_LL	r17, VCPU_GPR(r17)(vcpu); \
+	PPC_LL	r18, VCPU_GPR(r18)(vcpu); \
+	PPC_LL	r19, VCPU_GPR(r19)(vcpu); \
+	PPC_LL	r20, VCPU_GPR(r20)(vcpu); \
+	PPC_LL	r21, VCPU_GPR(r21)(vcpu); \
+	PPC_LL	r22, VCPU_GPR(r22)(vcpu); \
+	PPC_LL	r23, VCPU_GPR(r23)(vcpu); \
+	PPC_LL	r24, VCPU_GPR(r24)(vcpu); \
+	PPC_LL	r25, VCPU_GPR(r25)(vcpu); \
+	PPC_LL	r26, VCPU_GPR(r26)(vcpu); \
+	PPC_LL	r27, VCPU_GPR(r27)(vcpu); \
+	PPC_LL	r28, VCPU_GPR(r28)(vcpu); \
+	PPC_LL	r29, VCPU_GPR(r29)(vcpu); \
+	PPC_LL	r30, VCPU_GPR(r30)(vcpu); \
+	PPC_LL	r31, VCPU_GPR(r31)(vcpu); \
 
 /*****************************************************************************
  *                                                                           *
@@ -69,11 +89,11 @@ _GLOBAL(__kvmppc_vcpu_entry)
 
 kvm_start_entry:
 	/* Write correct stack frame */
-	mflr    r0
-	std     r0,16(r1)
+	mflr	r0
+	PPC_STL	r0,PPC_LR_STKOFF(r1)
 
 	/* Save host state to the stack */
-	stdu	r1, -SWITCH_FRAME_SIZE(r1)
+	PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
 
 	/* Save r3 (kvm_run) and r4 (vcpu) */
 	SAVE_2GPRS(3, r1)
@@ -82,33 +102,28 @@ kvm_start_entry:
 	SAVE_NVGPRS(r1)
 
 	/* Save LR */
-	std	r0, _LINK(r1)
+	PPC_STL	r0, _LINK(r1)
 
 	/* Load non-volatile guest state from the vcpu */
 	VCPU_LOAD_NVGPRS(r4)
 
+	GET_SHADOW_VCPU(r5)
+
 	/* Save R1/R2 in the PACA */
-	std	r1, PACA_KVM_HOST_R1(r13)
-	std	r2, PACA_KVM_HOST_R2(r13)
+	PPC_STL	r1, SVCPU_HOST_R1(r5)
+	PPC_STL	r2, SVCPU_HOST_R2(r5)
 
 	/* XXX swap in/out on load? */
-	ld	r3, VCPU_HIGHMEM_HANDLER(r4)
-	std	r3, PACA_KVM_VMHANDLER(r13)
+	PPC_LL	r3, VCPU_HIGHMEM_HANDLER(r4)
+	PPC_STL	r3, SVCPU_VMHANDLER(r5)
 
 kvm_start_lightweight:
 
-	ld	r9, VCPU_PC(r4)			/* r9 = vcpu->arch.pc */
-	ld	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
-
-	/* Load some guest state in the respective registers */
-	ld	r5, VCPU_CTR(r4)	/* r5 = vcpu->arch.ctr */
-					/* will be swapped in by rmcall */
-
-	ld	r3, VCPU_LR(r4)		/* r3 = vcpu->arch.lr */
-	mtlr	r3			/* LR = r3 */
+	PPC_LL	r10, VCPU_SHADOW_MSR(r4)	/* r10 = vcpu->arch.shadow_msr */
 
 	DISABLE_INTERRUPTS
 
+#ifdef CONFIG_PPC_BOOK3S_64
 	/* Some guests may need to have dcbz set to 32 byte length.
 	 *
 	 * Usually we ensure that by patching the guest's instructions
@@ -118,7 +133,7 @@ kvm_start_lightweight:
 	 * because that's a lot faster.
 	 */
 
-	ld	r3, VCPU_HFLAGS(r4)
+	PPC_LL	r3, VCPU_HFLAGS(r4)
 	rldicl.	r3, r3, 0, 63		/* CR = ((r3 & 1) == 0) */
 	beq	no_dcbz32_on
 
@@ -128,13 +143,15 @@ kvm_start_lightweight:
 
 no_dcbz32_on:
 
-	ld	r6, VCPU_RMCALL(r4)
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+	PPC_LL	r6, VCPU_RMCALL(r4)
 	mtctr	r6
 
-	ld	r3, VCPU_TRAMPOLINE_ENTER(r4)
+	PPC_LL	r3, VCPU_TRAMPOLINE_ENTER(r4)
 	LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
 
-	/* Jump to SLB patching handlder and into our guest */
+	/* Jump to segment patching handler and into our guest */
 	bctr
 
 /*
@@ -149,31 +166,20 @@ kvmppc_handler_highmem:
 	/*
 	 * Register usage at this point:
 	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
+	 * R1       = host R1
+	 * R2       = host R2
+	 * R12      = exit handler id
+	 * R13      = PACA
+	 * SVCPU.*  = guest *
 	 *
 	 */
 
 	/* R7 = vcpu */
-	ld	r7, GPR4(r1)
-
-	/* Now save the guest state */
-
-	stw	r0, VCPU_LAST_INST(r7)
+	PPC_LL	r7, GPR4(r1)
 
-	std	r3, VCPU_PC(r7)
-	std	r4, VCPU_SHADOW_SRR1(r7)
-	std	r5, VCPU_FAULT_DEAR(r7)
-	stw	r6, VCPU_FAULT_DSISR(r7)
+#ifdef CONFIG_PPC_BOOK3S_64
 
-	ld	r5, VCPU_HFLAGS(r7)
+	PPC_LL	r5, VCPU_HFLAGS(r7)
 	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
 	beq	no_dcbz32_off
 
@@ -184,35 +190,29 @@ kvmppc_handler_highmem:
 
 no_dcbz32_off:
 
-	std	r14, VCPU_GPR(r14)(r7)
-	std	r15, VCPU_GPR(r15)(r7)
-	std	r16, VCPU_GPR(r16)(r7)
-	std	r17, VCPU_GPR(r17)(r7)
-	std	r18, VCPU_GPR(r18)(r7)
-	std	r19, VCPU_GPR(r19)(r7)
-	std	r20, VCPU_GPR(r20)(r7)
-	std	r21, VCPU_GPR(r21)(r7)
-	std	r22, VCPU_GPR(r22)(r7)
-	std	r23, VCPU_GPR(r23)(r7)
-	std	r24, VCPU_GPR(r24)(r7)
-	std	r25, VCPU_GPR(r25)(r7)
-	std	r26, VCPU_GPR(r26)(r7)
-	std	r27, VCPU_GPR(r27)(r7)
-	std	r28, VCPU_GPR(r28)(r7)
-	std	r29, VCPU_GPR(r29)(r7)
-	std	r30, VCPU_GPR(r30)(r7)
-	std	r31, VCPU_GPR(r31)(r7)
-
-	/* Save guest CTR */
-	mfctr	r5
-	std	r5, VCPU_CTR(r7)
-
-	/* Save guest LR */
-	mflr	r5
-	std	r5, VCPU_LR(r7)
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
+	PPC_STL	r14, VCPU_GPR(r14)(r7)
+	PPC_STL	r15, VCPU_GPR(r15)(r7)
+	PPC_STL	r16, VCPU_GPR(r16)(r7)
+	PPC_STL	r17, VCPU_GPR(r17)(r7)
+	PPC_STL	r18, VCPU_GPR(r18)(r7)
+	PPC_STL	r19, VCPU_GPR(r19)(r7)
+	PPC_STL	r20, VCPU_GPR(r20)(r7)
+	PPC_STL	r21, VCPU_GPR(r21)(r7)
+	PPC_STL	r22, VCPU_GPR(r22)(r7)
+	PPC_STL	r23, VCPU_GPR(r23)(r7)
+	PPC_STL	r24, VCPU_GPR(r24)(r7)
+	PPC_STL	r25, VCPU_GPR(r25)(r7)
+	PPC_STL	r26, VCPU_GPR(r26)(r7)
+	PPC_STL	r27, VCPU_GPR(r27)(r7)
+	PPC_STL	r28, VCPU_GPR(r28)(r7)
+	PPC_STL	r29, VCPU_GPR(r29)(r7)
+	PPC_STL	r30, VCPU_GPR(r30)(r7)
+	PPC_STL	r31, VCPU_GPR(r31)(r7)
 
 	/* Restore host msr -> SRR1 */
-	ld	r6, VCPU_HOST_MSR(r7)
+	PPC_LL	r6, VCPU_HOST_MSR(r7)
 
 	/*
 	 * For some interrupts, we need to call the real Linux
@@ -231,6 +231,7 @@ no_dcbz32_off:
 
 	/* Back to EE=1 */
 	mtmsr	r6
+	sync
 	b	kvm_return_point
 
 call_linux_handler:
@@ -249,14 +250,14 @@ call_linux_handler:
 	 */
 
 	/* Restore host IP -> SRR0 */
-	ld	r5, VCPU_HOST_RETIP(r7)
+	PPC_LL	r5, VCPU_HOST_RETIP(r7)
 
 	/* XXX Better move to a safe function?
 	 *     What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
 
 	mtlr	r12
 
-	ld	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
+	PPC_LL	r4, VCPU_TRAMPOLINE_LOWMEM(r7)
 	mtsrr0	r4
 	LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
 	mtsrr1	r3
@@ -274,7 +275,7 @@ kvm_return_point:
 
 	/* Restore r3 (kvm_run) and r4 (vcpu) */
 	REST_2GPRS(3, r1)
-	bl	KVMPPC_HANDLE_EXIT
+	bl	FUNC(kvmppc_handle_exit)
 
 	/* If RESUME_GUEST, get back in the loop */
 	cmpwi	r3, RESUME_GUEST
@@ -285,7 +286,7 @@ kvm_return_point:
 
 kvm_exit_loop:
 
-	ld	r4, _LINK(r1)
+	PPC_LL	r4, _LINK(r1)
 	mtlr	r4
 
 	/* Restore non-volatile host registers (r14 - r31) */
@@ -296,8 +297,8 @@ kvm_exit_loop:
 
 kvm_loop_heavyweight:
 
-	ld	r4, _LINK(r1)
-	std     r4, (16 + SWITCH_FRAME_SIZE)(r1)
+	PPC_LL	r4, _LINK(r1)
+	PPC_STL r4, (PPC_LR_STKOFF + SWITCH_FRAME_SIZE)(r1)
 
 	/* Load vcpu and cpu_run */
 	REST_2GPRS(3, r1)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 17/27] KVM: PPC: Make SLB switching code the new segment framework
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We just introduced generic segment switching code that only needs to call
small macros to do the actual switching, but keeps most of the entry / exit
code generic.

So let's move the SLB switching code over to use this new mechanism.
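
As a rough sketch of how the pieces fit together (abbreviated, not the
literal file): the per-flavour source only provides the two assembler
macros, and the generic file invokes them around otherwise shared code.

/* book3s_segment.S, generic entry path (sketch) */
kvmppc_handler_trampoline_enter:
	/* ...generic register/MSR setup, shared by 32 and 64 bit... */
	LOAD_GUEST_SEGMENTS	/* SLB shuffling on 64-bit, SRs on 32-bit */
	/* ...generic guest entry via RFI... */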

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_64_slb.S     |  183 +++++-----------------------------
 arch/powerpc/kvm/book3s_rmhandlers.S |    2 +-
 2 files changed, 25 insertions(+), 160 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 0919679..04e7d3b 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -44,8 +44,7 @@ slb_exit_skip_ ## num:
  *                                                                            *
  *****************************************************************************/
 
-.global kvmppc_handler_trampoline_enter
-kvmppc_handler_trampoline_enter:
+.macro LOAD_GUEST_SEGMENTS
 
 	/* Required state:
 	 *
@@ -53,20 +52,14 @@ kvmppc_handler_trampoline_enter:
 	 * R13 = PACA
 	 * R1 = host R1
 	 * R2 = host R2
-	 * R9 = guest IP
-	 * R10 = guest MSR
-	 * all other GPRS = free
-	 * PACA[KVM_CR] = guest CR
-	 * PACA[KVM_XER] = guest XER
+	 * R3 = shadow vcpu
+	 * all other volatile GPRS = free
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
 	 */
 
-	mtsrr0	r9
-	mtsrr1	r10
-
-	/* Activate guest mode, so faults get handled by KVM */
-	li	r11, KVM_GUEST_MODE_GUEST
-	stb	r11, PACA_KVM_IN_GUEST(r13)
-
 	/* Remove LPAR shadow entries */
 
 #if SLB_NUM_BOLTED == 3
@@ -101,14 +94,14 @@ kvmppc_handler_trampoline_enter:
 
 	/* Fill SLB with our shadow */
 
-	lbz	r12, PACA_KVM_SLB_MAX(r13)
+	lbz	r12, SVCPU_SLB_MAX(r3)
 	mulli	r12, r12, 16
-	addi	r12, r12, PACA_KVM_SLB
-	add	r12, r12, r13
+	addi	r12, r12, SVCPU_SLB
+	add	r12, r12, r3
 
 	/* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */
-	li	r11, PACA_KVM_SLB
-	add	r11, r11, r13
+	li	r11, SVCPU_SLB
+	add	r11, r11, r3
 
 slb_loop_enter:
 
@@ -127,34 +120,7 @@ slb_loop_enter_skip:
 
 slb_do_enter:
 
-	/* Enter guest */
-
-	ld	r0, (PACA_KVM_R0)(r13)
-	ld	r1, (PACA_KVM_R1)(r13)
-	ld	r2, (PACA_KVM_R2)(r13)
-	ld	r3, (PACA_KVM_R3)(r13)
-	ld	r4, (PACA_KVM_R4)(r13)
-	ld	r5, (PACA_KVM_R5)(r13)
-	ld	r6, (PACA_KVM_R6)(r13)
-	ld	r7, (PACA_KVM_R7)(r13)
-	ld	r8, (PACA_KVM_R8)(r13)
-	ld	r9, (PACA_KVM_R9)(r13)
-	ld	r10, (PACA_KVM_R10)(r13)
-	ld	r12, (PACA_KVM_R12)(r13)
-
-	lwz	r11, (PACA_KVM_CR)(r13)
-	mtcr	r11
-
-	lwz	r11, (PACA_KVM_XER)(r13)
-	mtxer	r11
-
-	ld	r11, (PACA_KVM_R11)(r13)
-	ld	r13, (PACA_KVM_R13)(r13)
-
-	RFI
-kvmppc_handler_trampoline_enter_end:
-
-
+.endm
 
 /******************************************************************************
  *                                                                            *
@@ -162,99 +128,22 @@ kvmppc_handler_trampoline_enter_end:
  *                                                                            *
  *****************************************************************************/
 
-.global kvmppc_handler_trampoline_exit
-kvmppc_handler_trampoline_exit:
+.macro LOAD_HOST_SEGMENTS
 
 	/* Register usage at this point:
 	 *
-	 * SPRG_SCRATCH0     = guest R13
-	 * R12               = exit handler id
-	 * R13               = PACA
-	 * PACA.KVM.SCRATCH0 = guest R12
-	 * PACA.KVM.SCRATCH1 = guest CR
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R12        = exit handler id
+	 * R13        = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.*    = guest *
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
 	 *
 	 */
 
-	/* Save registers */
-
-	std	r0, PACA_KVM_R0(r13)
-	std	r1, PACA_KVM_R1(r13)
-	std	r2, PACA_KVM_R2(r13)
-	std	r3, PACA_KVM_R3(r13)
-	std	r4, PACA_KVM_R4(r13)
-	std	r5, PACA_KVM_R5(r13)
-	std	r6, PACA_KVM_R6(r13)
-	std	r7, PACA_KVM_R7(r13)
-	std	r8, PACA_KVM_R8(r13)
-	std	r9, PACA_KVM_R9(r13)
-	std	r10, PACA_KVM_R10(r13)
-	std	r11, PACA_KVM_R11(r13)
-
-	/* Restore R1/R2 so we can handle faults */
-	ld	r1, PACA_KVM_HOST_R1(r13)
-	ld	r2, PACA_KVM_HOST_R2(r13)
-
-	/* Save guest PC and MSR in GPRs */
-	mfsrr0	r3
-	mfsrr1	r4
-
-	/* Get scratch'ed off registers */
-	mfspr	r9, SPRN_SPRG_SCRATCH0
-	std	r9, PACA_KVM_R13(r13)
-
-	ld	r8, PACA_KVM_SCRATCH0(r13)
-	std	r8, PACA_KVM_R12(r13)
-
-	lwz	r7, PACA_KVM_SCRATCH1(r13)
-	stw	r7, PACA_KVM_CR(r13)
-
-	/* Save more register state  */
-
-	mfxer	r6
-	stw	r6, PACA_KVM_XER(r13)
-
-	mfdar	r5
-	mfdsisr	r6
-
-	/*
-	 * In order for us to easily get the last instruction,
-	 * we got the #vmexit at, we exploit the fact that the
-	 * virtual layout is still the same here, so we can just
-	 * ld from the guest's PC address
-	 */
-
-	/* We only load the last instruction when it's safe */
-	cmpwi	r12, BOOK3S_INTERRUPT_DATA_STORAGE
-	beq	ld_last_inst
-	cmpwi	r12, BOOK3S_INTERRUPT_PROGRAM
-	beq	ld_last_inst
-
-	b	no_ld_last_inst
-
-ld_last_inst:
-	/* Save off the guest instruction we're at */
-
-	/* Set guest mode to 'jump over instruction' so if lwz faults
-	 * we'll just continue at the next IP. */
-	li	r9, KVM_GUEST_MODE_SKIP
-	stb	r9, PACA_KVM_IN_GUEST(r13)
-
-	/*    1) enable paging for data */
-	mfmsr	r9
-	ori	r11, r9, MSR_DR			/* Enable paging for data */
-	mtmsr	r11
-	/*    2) fetch the instruction */
-	li	r0, KVM_INST_FETCH_FAILED	/* In case lwz faults */
-	lwz	r0, 0(r3)
-	/*    3) disable paging again */
-	mtmsr	r9
-
-no_ld_last_inst:
-
-	/* Unset guest mode */
-	li	r9, KVM_GUEST_MODE_NONE
-	stb	r9, PACA_KVM_IN_GUEST(r13)
-
 	/* Restore bolted entries from the shadow and fix it along the way */
 
 	/* We don't store anything in entry 0, so we don't need to take care of it */
@@ -275,28 +164,4 @@ no_ld_last_inst:
 
 slb_do_exit:
 
-	/* Register usage at this point:
-	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R12        = exit handler id
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
-	 *
-	 */
-
-	/* RFI into the highmem handler */
-	mfmsr	r7
-	ori	r7, r7, MSR_IR|MSR_DR|MSR_RI	/* Enable paging */
-	mtsrr1	r7
-	ld	r8, PACA_KVM_VMHANDLER(r13)	/* Highmem handler address */
-	mtsrr0	r8
-
-	RFI
-kvmppc_handler_trampoline_exit_end:
-
+.endm
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 0c8d331..da571f8 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -248,5 +248,5 @@ kvmppc_trampoline_lowmem:
 kvmppc_trampoline_enter:
 	.long kvmppc_handler_trampoline_enter - _stext
 
-#include "book3s_64_slb.S"
+#include "book3s_segment.S"
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 17/27] KVM: PPC: Make SLB switching code the new segment framework
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We just introduced generic segment switching code that only needs to call
small macros to do the actual switching, but keeps most of the entry / exit
code generic.

So let's move the SLB switching code over to use this new mechanism.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S     |  183 +++++-----------------------------
 arch/powerpc/kvm/book3s_rmhandlers.S |    2 +-
 2 files changed, 25 insertions(+), 160 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 0919679..04e7d3b 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -44,8 +44,7 @@ slb_exit_skip_ ## num:
  *                                                                            *
  *****************************************************************************/
 
-.global kvmppc_handler_trampoline_enter
-kvmppc_handler_trampoline_enter:
+.macro LOAD_GUEST_SEGMENTS
 
 	/* Required state:
 	 *
@@ -53,20 +52,14 @@ kvmppc_handler_trampoline_enter:
 	 * R13 = PACA
 	 * R1 = host R1
 	 * R2 = host R2
-	 * R9 = guest IP
-	 * R10 = guest MSR
-	 * all other GPRS = free
-	 * PACA[KVM_CR] = guest CR
-	 * PACA[KVM_XER] = guest XER
+	 * R3 = shadow vcpu
+	 * all other volatile GPRS = free
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
 	 */
 
-	mtsrr0	r9
-	mtsrr1	r10
-
-	/* Activate guest mode, so faults get handled by KVM */
-	li	r11, KVM_GUEST_MODE_GUEST
-	stb	r11, PACA_KVM_IN_GUEST(r13)
-
 	/* Remove LPAR shadow entries */
 
 #if SLB_NUM_BOLTED == 3
@@ -101,14 +94,14 @@ kvmppc_handler_trampoline_enter:
 
 	/* Fill SLB with our shadow */
 
-	lbz	r12, PACA_KVM_SLB_MAX(r13)
+	lbz	r12, SVCPU_SLB_MAX(r3)
 	mulli	r12, r12, 16
-	addi	r12, r12, PACA_KVM_SLB
-	add	r12, r12, r13
+	addi	r12, r12, SVCPU_SLB
+	add	r12, r12, r3
 
 	/* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */
-	li	r11, PACA_KVM_SLB
-	add	r11, r11, r13
+	li	r11, SVCPU_SLB
+	add	r11, r11, r3
 
 slb_loop_enter:
 
@@ -127,34 +120,7 @@ slb_loop_enter_skip:
 
 slb_do_enter:
 
-	/* Enter guest */
-
-	ld	r0, (PACA_KVM_R0)(r13)
-	ld	r1, (PACA_KVM_R1)(r13)
-	ld	r2, (PACA_KVM_R2)(r13)
-	ld	r3, (PACA_KVM_R3)(r13)
-	ld	r4, (PACA_KVM_R4)(r13)
-	ld	r5, (PACA_KVM_R5)(r13)
-	ld	r6, (PACA_KVM_R6)(r13)
-	ld	r7, (PACA_KVM_R7)(r13)
-	ld	r8, (PACA_KVM_R8)(r13)
-	ld	r9, (PACA_KVM_R9)(r13)
-	ld	r10, (PACA_KVM_R10)(r13)
-	ld	r12, (PACA_KVM_R12)(r13)
-
-	lwz	r11, (PACA_KVM_CR)(r13)
-	mtcr	r11
-
-	lwz	r11, (PACA_KVM_XER)(r13)
-	mtxer	r11
-
-	ld	r11, (PACA_KVM_R11)(r13)
-	ld	r13, (PACA_KVM_R13)(r13)
-
-	RFI
-kvmppc_handler_trampoline_enter_end:
-
-
+.endm
 
 /******************************************************************************
  *                                                                            *
@@ -162,99 +128,22 @@ kvmppc_handler_trampoline_enter_end:
  *                                                                            *
  *****************************************************************************/
 
-.global kvmppc_handler_trampoline_exit
-kvmppc_handler_trampoline_exit:
+.macro LOAD_HOST_SEGMENTS
 
 	/* Register usage at this point:
 	 *
-	 * SPRG_SCRATCH0     = guest R13
-	 * R12               = exit handler id
-	 * R13               = PACA
-	 * PACA.KVM.SCRATCH0 = guest R12
-	 * PACA.KVM.SCRATCH1 = guest CR
+	 * R1         = host R1
+	 * R2         = host R2
+	 * R12        = exit handler id
+	 * R13        = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
+	 * SVCPU.*    = guest *
+	 * SVCPU[CR]  = guest CR
+	 * SVCPU[XER] = guest XER
+	 * SVCPU[CTR] = guest CTR
+	 * SVCPU[LR]  = guest LR
 	 *
 	 */
 
-	/* Save registers */
-
-	std	r0, PACA_KVM_R0(r13)
-	std	r1, PACA_KVM_R1(r13)
-	std	r2, PACA_KVM_R2(r13)
-	std	r3, PACA_KVM_R3(r13)
-	std	r4, PACA_KVM_R4(r13)
-	std	r5, PACA_KVM_R5(r13)
-	std	r6, PACA_KVM_R6(r13)
-	std	r7, PACA_KVM_R7(r13)
-	std	r8, PACA_KVM_R8(r13)
-	std	r9, PACA_KVM_R9(r13)
-	std	r10, PACA_KVM_R10(r13)
-	std	r11, PACA_KVM_R11(r13)
-
-	/* Restore R1/R2 so we can handle faults */
-	ld	r1, PACA_KVM_HOST_R1(r13)
-	ld	r2, PACA_KVM_HOST_R2(r13)
-
-	/* Save guest PC and MSR in GPRs */
-	mfsrr0	r3
-	mfsrr1	r4
-
-	/* Get scratch'ed off registers */
-	mfspr	r9, SPRN_SPRG_SCRATCH0
-	std	r9, PACA_KVM_R13(r13)
-
-	ld	r8, PACA_KVM_SCRATCH0(r13)
-	std	r8, PACA_KVM_R12(r13)
-
-	lwz	r7, PACA_KVM_SCRATCH1(r13)
-	stw	r7, PACA_KVM_CR(r13)
-
-	/* Save more register state  */
-
-	mfxer	r6
-	stw	r6, PACA_KVM_XER(r13)
-
-	mfdar	r5
-	mfdsisr	r6
-
-	/*
-	 * In order for us to easily get the last instruction,
-	 * we got the #vmexit at, we exploit the fact that the
-	 * virtual layout is still the same here, so we can just
-	 * ld from the guest's PC address
-	 */
-
-	/* We only load the last instruction when it's safe */
-	cmpwi	r12, BOOK3S_INTERRUPT_DATA_STORAGE
-	beq	ld_last_inst
-	cmpwi	r12, BOOK3S_INTERRUPT_PROGRAM
-	beq	ld_last_inst
-
-	b	no_ld_last_inst
-
-ld_last_inst:
-	/* Save off the guest instruction we're at */
-
-	/* Set guest mode to 'jump over instruction' so if lwz faults
-	 * we'll just continue at the next IP. */
-	li	r9, KVM_GUEST_MODE_SKIP
-	stb	r9, PACA_KVM_IN_GUEST(r13)
-
-	/*    1) enable paging for data */
-	mfmsr	r9
-	ori	r11, r9, MSR_DR			/* Enable paging for data */
-	mtmsr	r11
-	/*    2) fetch the instruction */
-	li	r0, KVM_INST_FETCH_FAILED	/* In case lwz faults */
-	lwz	r0, 0(r3)
-	/*    3) disable paging again */
-	mtmsr	r9
-
-no_ld_last_inst:
-
-	/* Unset guest mode */
-	li	r9, KVM_GUEST_MODE_NONE
-	stb	r9, PACA_KVM_IN_GUEST(r13)
-
 	/* Restore bolted entries from the shadow and fix it along the way */
 
 	/* We don't store anything in entry 0, so we don't need to take care of it */
@@ -275,28 +164,4 @@ no_ld_last_inst:
 
 slb_do_exit:
 
-	/* Register usage at this point:
-	 *
-	 * R0         = guest last inst
-	 * R1         = host R1
-	 * R2         = host R2
-	 * R3         = guest PC
-	 * R4         = guest MSR
-	 * R5         = guest DAR
-	 * R6         = guest DSISR
-	 * R12        = exit handler id
-	 * R13        = PACA
-	 * PACA.KVM.* = guest *
-	 *
-	 */
-
-	/* RFI into the highmem handler */
-	mfmsr	r7
-	ori	r7, r7, MSR_IR|MSR_DR|MSR_RI	/* Enable paging */
-	mtsrr1	r7
-	ld	r8, PACA_KVM_VMHANDLER(r13)	/* Highmem handler address */
-	mtsrr0	r8
-
-	RFI
-kvmppc_handler_trampoline_exit_end:
-
+.endm
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 0c8d331..da571f8 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -248,5 +248,5 @@ kvmppc_trampoline_lowmem:
 kvmppc_trampoline_enter:
 	.long kvmppc_handler_trampoline_enter - _stext
 
-#include "book3s_64_slb.S"
+#include "book3s_segment.S"
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 18/27] KVM: PPC: Release clean pages as clean
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

When we map a page as read-only, we can release it as clean to KVM's
page claim mechanism, because we can be reasonably sure it hasn't been
written to.
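
The point of the distinction (a sketch; the wrapper below is
hypothetical, the two release helpers are KVM's real ones): releasing a
pfn as dirty marks the page dirty and may trigger writeback, which is
wasted work for a page the guest could not have modified through a
read-only mapping.

static void release_hpte_page(struct hpte_cache *pte)
{
	if (pte->pte.may_write)
		kvm_release_pfn_dirty(pte->pfn);	/* may have been written */
	else
		kvm_release_pfn_clean(pte->pfn);	/* read-only: nothing to write back */
}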

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    6 +++++-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 0eea589..b230154 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -55,7 +55,11 @@ static void invalidate_pte(struct hpte_cache *pte)
 			       MMU_PAGE_4K, MMU_SEGSIZE_256M,
 			       false);
 	pte->host_va = 0;
-	kvm_release_pfn_dirty(pte->pfn);
+
+	if (pte->pte.may_write)
+		kvm_release_pfn_dirty(pte->pfn);
+	else
+		kvm_release_pfn_clean(pte->pfn);
 }
 
 void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 guest_ea, u64 ea_mask)
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 18/27] KVM: PPC: Release clean pages as clean
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

When we map a page as read-only, we can release it as clean to KVM's
page claim mechanism, because we can be reasonably sure it hasn't been
written to.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    6 +++++-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 0eea589..b230154 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -55,7 +55,11 @@ static void invalidate_pte(struct hpte_cache *pte)
 			       MMU_PAGE_4K, MMU_SEGSIZE_256M,
 			       false);
 	pte->host_va = 0;
-	kvm_release_pfn_dirty(pte->pfn);
+
+	if (pte->pte.may_write)
+		kvm_release_pfn_dirty(pte->pfn);
+	else
+		kvm_release_pfn_clean(pte->pfn);
 }
 
 void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 guest_ea, u64 ea_mask)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 19/27] KVM: PPC: Remove fetch fail code
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When an instruction fetch fails, the inline accessor hook detects that
automatically and falls back to loading the instruction from guest
memory. So whenever we access kvmppc_get_last_inst(), we can be sure
the result is sane.
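
A sketch of that hook (simplified; modelled on the svcpu accessors
introduced earlier in this series):

static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
	ulong pc = kvmppc_get_pc(vcpu);

	/* Re-fetch from guest memory if the fast path couldn't load it */
	if (to_svcpu(vcpu)->last_inst == KVM_INST_FETCH_FAILED)
		kvmppc_ld(vcpu, &pc, sizeof(u32),
			  &to_svcpu(vcpu)->last_inst, false);

	return to_svcpu(vcpu)->last_inst;
}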

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |    4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index b608c0b..4568ec3 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -147,10 +147,6 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	pr_debug(KERN_INFO "Emulating opcode %d / %d\n", get_op(inst), get_xop(inst));
 
-	/* Try again next time */
-	if (inst == KVM_INST_FETCH_FAILED)
-		return EMULATE_DONE;
-
 	switch (get_op(inst)) {
 	case OP_TRAP:
 #ifdef CONFIG_PPC_BOOK3S
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 19/27] KVM: PPC: Remove fetch fail code
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When an instruction fetch fails, the inline accessor hook detects that
automatically and falls back to loading the instruction from guest
memory. So whenever we access kvmppc_get_last_inst(), we can be sure
the result is sane.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |    4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index b608c0b..4568ec3 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -147,10 +147,6 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 
 	pr_debug(KERN_INFO "Emulating opcode %d / %d\n", get_op(inst), get_xop(inst));
 
-	/* Try again next time */
-	if (inst == KVM_INST_FETCH_FAILED)
-		return EMULATE_DONE;
-
 	switch (get_op(inst)) {
 	case OP_TRAP:
 #ifdef CONFIG_PPC_BOOK3S
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 20/27] KVM: PPC: Add SVCPU to Book3S_32
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We need to keep the pointer to the shadow vcpu somewhere accessible from
within really early interrupt code. The best fit I found was the thread
struct, as that resides in an SPRG.

So let's put a pointer to the shadow vcpu in the thread struct and add
an asm-offset so we can find it.
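
The pointer needs to be wired up before the guest can take interrupts
on a core; a minimal sketch, assuming it happens at vcpu load time
(function name and placement are assumptions):

void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/* Make the shadow vcpu reachable from the exception prolog */
	current->thread.kvm_shadow_vcpu = to_book3s(vcpu)->shadow_vcpu;
}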

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/processor.h |    3 +++
 arch/powerpc/kernel/asm-offsets.c    |    3 +++
 2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 221ba62..7492fe8 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -229,6 +229,9 @@ struct thread_struct {
 	unsigned long	spefscr;	/* SPE & eFP status */
 	int		used_spe;	/* set if process has used spe */
 #endif /* CONFIG_SPE */
+#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
+	void*		kvm_shadow_vcpu; /* KVM internal data */
+#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
 };
 
 #define ARCH_MIN_TASKALIGN 16
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e8003ff..1804c2c 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -108,6 +108,9 @@ int main(void)
 	DEFINE(THREAD_USED_SPE, offsetof(struct thread_struct, used_spe));
 #endif /* CONFIG_SPE */
 #endif /* CONFIG_PPC64 */
+#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
+	DEFINE(THREAD_KVM_SVCPU, offsetof(struct thread_struct, kvm_shadow_vcpu));
+#endif
 
 	DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
 	DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags));
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 20/27] KVM: PPC: Add SVCPU to Book3S_32
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We need to keep the pointer to the shadow vcpu somewhere accessible from
within really early interrupt code. The best fit I found was the thread
struct, as that resides in an SPRG.

So let's put a pointer to the shadow vcpu in the thread struct and add
an asm-offset so we can find it.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/processor.h |    3 +++
 arch/powerpc/kernel/asm-offsets.c    |    3 +++
 2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 221ba62..7492fe8 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -229,6 +229,9 @@ struct thread_struct {
 	unsigned long	spefscr;	/* SPE & eFP status */
 	int		used_spe;	/* set if process has used spe */
 #endif /* CONFIG_SPE */
+#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
+	void*		kvm_shadow_vcpu; /* KVM internal data */
+#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
 };
 
 #define ARCH_MIN_TASKALIGN 16
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e8003ff..1804c2c 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -108,6 +108,9 @@ int main(void)
 	DEFINE(THREAD_USED_SPE, offsetof(struct thread_struct, used_spe));
 #endif /* CONFIG_SPE */
 #endif /* CONFIG_PPC64 */
+#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
+	DEFINE(THREAD_KVM_SVCPU, offsetof(struct thread_struct, kvm_shadow_vcpu));
+#endif
 
 	DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
 	DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags));
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 21/27] KVM: PPC: Emulate segment fault
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Book3S_32 doesn't know about segment faults. It only knows about page faults.
So in order to know that we didn't map a segment, we need to fake segment
faults.

We do this by setting invalidated segment registers to an invalid VSID and
then checking for that VSID on normal page faults.
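
The invalidation side is then just a loop over the shadow segment
registers, roughly (a sketch of the 32-bit shadow MMU side; the real
function also resets its VSID bookkeeping):

void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
{
	int i;

	/* Mark every shadow SR invalid, so the next access through
	 * that segment traps and we can fake a segment fault. */
	for (i = 0; i < 16; i++)
		to_svcpu(vcpu)->sr[i] = SR_INVALID;
}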

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   23 +++++++++++++++++++++++
 1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b917b97..178ddd4 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -774,6 +774,18 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
 		vcpu->stat.pf_instruc++;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+		/* We set segments as unused segments when invalidating them. So
+		 * treat the respective fault as segment fault. */
+		if (to_svcpu(vcpu)->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT]
+		    == SR_INVALID) {
+			kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+			r = RESUME_GUEST;
+			break;
+		}
+#endif
+
 		/* only care about PTEG not found errors, but leave NX alone */
 		if (to_svcpu(vcpu)->shadow_srr1 & 0x40000000) {
 			r = kvmppc_handle_pagefault(run, vcpu, kvmppc_get_pc(vcpu), exit_nr);
@@ -798,6 +810,17 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	{
 		ulong dar = kvmppc_get_fault_dar(vcpu);
 		vcpu->stat.pf_storage++;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+		/* We set segments as unused segments when invalidating them. So
+		 * treat the respective fault as segment fault. */
+		if ((to_svcpu(vcpu)->sr[dar >> SID_SHIFT]) == SR_INVALID) {
+			kvmppc_mmu_map_segment(vcpu, dar);
+			r = RESUME_GUEST;
+			break;
+		}
+#endif
+
 		/* The only case we need to handle is missing shadow PTEs */
 		if (to_svcpu(vcpu)->fault_dsisr & DSISR_NOHPTE) {
 			r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 21/27] KVM: PPC: Emulate segment fault
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Book3S_32 doesn't know about segment faults. It only knows about page faults.
So in order to know that we didn't map a segment, we need to fake segment
faults.

We do this by setting invalidated segment registers to an invalid VSID and
then checking for that VSID on normal page faults.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   23 +++++++++++++++++++++++
 1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b917b97..178ddd4 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -774,6 +774,18 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
 		vcpu->stat.pf_instruc++;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+		/* We set segments as unused segments when invalidating them. So
+		 * treat the respective fault as segment fault. */
+		if (to_svcpu(vcpu)->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT]
+		    == SR_INVALID) {
+			kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+			r = RESUME_GUEST;
+			break;
+		}
+#endif
+
 		/* only care about PTEG not found errors, but leave NX alone */
 		if (to_svcpu(vcpu)->shadow_srr1 & 0x40000000) {
 			r = kvmppc_handle_pagefault(run, vcpu, kvmppc_get_pc(vcpu), exit_nr);
@@ -798,6 +810,17 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	{
 		ulong dar = kvmppc_get_fault_dar(vcpu);
 		vcpu->stat.pf_storage++;
+
+#ifdef CONFIG_PPC_BOOK3S_32
+		/* We set segments as unused segments when invalidating them. So
+		 * treat the respective fault as segment fault. */
+		if ((to_svcpu(vcpu)->sr[dar >> SID_SHIFT]) == SR_INVALID) {
+			kvmppc_mmu_map_segment(vcpu, dar);
+			r = RESUME_GUEST;
+			break;
+		}
+#endif
+
 		/* The only case we need to handle is missing shadow PTEs */
 		if (to_svcpu(vcpu)->fault_dsisr & DSISR_NOHPTE) {
 			r = kvmppc_handle_pagefault(run, vcpu, dar, exit_nr);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 22/27] KVM: PPC: Add Book3S compatibility code
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Some of the code we have so far requires defines or is completely
Book3S_64 specific. Since we now open up book3s.c to Book3S_32 too, we
need to take care of these pieces.

So let's add some minor code to skip the Book3S_64 code paths where
that makes sense, and add compat defines elsewhere.
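
One of those differences is worth spelling out: on 64-bit the ELF ABI
makes a function symbol point at a function descriptor, whose first
word is the real entry address, while on 32-bit the symbol is the entry
address itself. Hence the asymmetric rmcall setup in the diff below; a
sketch ("entry" is a hypothetical local):

#ifdef CONFIG_PPC_BOOK3S_64
	ulong entry = *(ulong *)kvmppc_rmcall;	/* read entry out of the descriptor */
#else
	ulong entry = (ulong)kvmppc_rmcall;	/* the symbol is the entry point */
#endif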

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c            |   26 +++++++++++++++++++++++++-
 arch/powerpc/kvm/book3s_32_mmu.c     |    3 +++
 arch/powerpc/kvm/book3s_emulate.c    |    4 ++++
 arch/powerpc/kvm/book3s_rmhandlers.S |    4 ++--
 4 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 178ddd4..f5229f9 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -39,6 +39,13 @@
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 			     ulong msr);
 
+/* Some compatibility defines */
+#ifdef CONFIG_PPC_BOOK3S_32
+#define MSR_USER32 MSR_USER
+#define MSR_USER64 MSR_USER
+#define HW_PAGE_SIZE PAGE_SIZE
+#endif
+
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "exits",       VCPU_STAT(sum_exits) },
 	{ "mmio",        VCPU_STAT(mmio_exits) },
@@ -347,11 +354,14 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 {
 	vcpu->arch.hflags &= ~BOOK3S_HFLAG_SLB;
 	vcpu->arch.pvr = pvr;
+#ifdef CONFIG_PPC_BOOK3S_64
 	if ((pvr >= 0x330000) && (pvr < 0x70330000)) {
 		kvmppc_mmu_book3s_64_init(vcpu);
 		to_book3s(vcpu)->hior = 0xfff00000;
 		to_book3s(vcpu)->msr_mask = 0xffffffffffffffffULL;
-	} else {
+	} else
+#endif
+	{
 		kvmppc_mmu_book3s_32_init(vcpu);
 		to_book3s(vcpu)->hior = 0;
 		to_book3s(vcpu)->msr_mask = 0xffffffffULL;
@@ -368,6 +378,11 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 	   really needs them in a VM on Cell and force disable them. */
 	if (!strcmp(cur_cpu_spec->platform, "ppc-cell-be"))
 		to_book3s(vcpu)->msr_mask &= ~(MSR_FE0 | MSR_FE1);
+
+#ifdef CONFIG_PPC_BOOK3S_32
+	/* 32 bit Book3S always has 32 byte dcbz */
+	vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
+#endif
 }
 
 /* Book3s_32 CPUs always have 32 bytes cache line size, which Linux assumes. To
@@ -1211,8 +1226,13 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 
 	vcpu->arch.host_retip = kvm_return_point;
 	vcpu->arch.host_msr = mfmsr();
+#ifdef CONFIG_PPC_BOOK3S_64
 	/* default to book3s_64 (970fx) */
 	vcpu->arch.pvr = 0x3C0301;
+#else
+	/* default to book3s_32 (750) */
+	vcpu->arch.pvr = 0x84202;
+#endif
 	kvmppc_set_pvr(vcpu, vcpu->arch.pvr);
 	vcpu_book3s->slb_nr = 64;
 
@@ -1220,7 +1240,11 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 	vcpu->arch.trampoline_lowmem = kvmppc_trampoline_lowmem;
 	vcpu->arch.trampoline_enter = kvmppc_trampoline_enter;
 	vcpu->arch.highmem_handler = (ulong)kvmppc_handler_highmem;
+#ifdef CONFIG_PPC_BOOK3S_64
 	vcpu->arch.rmcall = *(ulong*)kvmppc_rmcall;
+#else
+	vcpu->arch.rmcall = (ulong)kvmppc_rmcall;
+#endif
 
 	vcpu->arch.shadow_msr = MSR_USER64;
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 7071e22..48efb37 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -45,6 +45,9 @@
 
 #define PTEG_FLAG_ACCESSED	0x00000100
 #define PTEG_FLAG_DIRTY		0x00000080
+#ifndef SID_SHIFT
+#define SID_SHIFT		28
+#endif
 
 static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index daa829b..3f7afb5 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -59,6 +59,10 @@
 #define SPRN_GQR6		918
 #define SPRN_GQR7		919
 
+/* Book3S_32 defines mfsrin(v) - but that messes up our abstract
+ * function pointers, so let's just disable the define. */
+#undef mfsrin
+
 int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                            unsigned int inst, int *advance)
 {
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index da571f8..86f9bde 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -242,11 +242,11 @@ define_load_up(vsx)
 
 .global kvmppc_trampoline_lowmem
 kvmppc_trampoline_lowmem:
-	.long kvmppc_handler_lowmem_trampoline - _stext
+	.long kvmppc_handler_lowmem_trampoline - CONFIG_KERNEL_START
 
 .global kvmppc_trampoline_enter
 kvmppc_trampoline_enter:
-	.long kvmppc_handler_trampoline_enter - _stext
+	.long kvmppc_handler_trampoline_enter - CONFIG_KERNEL_START
 
 #include "book3s_segment.S"
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 22/27] KVM: PPC: Add Book3S compatibility code
@ 2010-04-15 22:11   ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Some of the code we have so far requires defines or is completely
Book3S_64 specific. Since we now open up book3s.c to Book3S_32 too, we
need to take care of these pieces.

So let's add some minor code to skip the Book3S_64 code paths where
that makes sense, and add compat defines elsewhere.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c            |   26 +++++++++++++++++++++++++-
 arch/powerpc/kvm/book3s_32_mmu.c     |    3 +++
 arch/powerpc/kvm/book3s_emulate.c    |    4 ++++
 arch/powerpc/kvm/book3s_rmhandlers.S |    4 ++--
 4 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 178ddd4..f5229f9 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -39,6 +39,13 @@
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 			     ulong msr);
 
+/* Some compatibility defines */
+#ifdef CONFIG_PPC_BOOK3S_32
+#define MSR_USER32 MSR_USER
+#define MSR_USER64 MSR_USER
+#define HW_PAGE_SIZE PAGE_SIZE
+#endif
+
 struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "exits",       VCPU_STAT(sum_exits) },
 	{ "mmio",        VCPU_STAT(mmio_exits) },
@@ -347,11 +354,14 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 {
 	vcpu->arch.hflags &= ~BOOK3S_HFLAG_SLB;
 	vcpu->arch.pvr = pvr;
+#ifdef CONFIG_PPC_BOOK3S_64
 	if ((pvr >= 0x330000) && (pvr < 0x70330000)) {
 		kvmppc_mmu_book3s_64_init(vcpu);
 		to_book3s(vcpu)->hior = 0xfff00000;
 		to_book3s(vcpu)->msr_mask = 0xffffffffffffffffULL;
-	} else {
+	} else
+#endif
+	{
 		kvmppc_mmu_book3s_32_init(vcpu);
 		to_book3s(vcpu)->hior = 0;
 		to_book3s(vcpu)->msr_mask = 0xffffffffULL;
@@ -368,6 +378,11 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 	   really needs them in a VM on Cell and force disable them. */
 	if (!strcmp(cur_cpu_spec->platform, "ppc-cell-be"))
 		to_book3s(vcpu)->msr_mask &= ~(MSR_FE0 | MSR_FE1);
+
+#ifdef CONFIG_PPC_BOOK3S_32
+	/* 32 bit Book3S always has 32 byte dcbz */
+	vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
+#endif
 }
 
 /* Book3s_32 CPUs always have 32 bytes cache line size, which Linux assumes. To
@@ -1211,8 +1226,13 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 
 	vcpu->arch.host_retip = kvm_return_point;
 	vcpu->arch.host_msr = mfmsr();
+#ifdef CONFIG_PPC_BOOK3S_64
 	/* default to book3s_64 (970fx) */
 	vcpu->arch.pvr = 0x3C0301;
+#else
+	/* default to book3s_32 (750) */
+	vcpu->arch.pvr = 0x84202;
+#endif
 	kvmppc_set_pvr(vcpu, vcpu->arch.pvr);
 	vcpu_book3s->slb_nr = 64;
 
@@ -1220,7 +1240,11 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 	vcpu->arch.trampoline_lowmem = kvmppc_trampoline_lowmem;
 	vcpu->arch.trampoline_enter = kvmppc_trampoline_enter;
 	vcpu->arch.highmem_handler = (ulong)kvmppc_handler_highmem;
+#ifdef CONFIG_PPC_BOOK3S_64
 	vcpu->arch.rmcall = *(ulong*)kvmppc_rmcall;
+#else
+	vcpu->arch.rmcall = (ulong)kvmppc_rmcall;
+#endif
 
 	vcpu->arch.shadow_msr = MSR_USER64;
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 7071e22..48efb37 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -45,6 +45,9 @@
 
 #define PTEG_FLAG_ACCESSED	0x00000100
 #define PTEG_FLAG_DIRTY		0x00000080
+#ifndef SID_SHIFT
+#define SID_SHIFT		28
+#endif
 
 static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index daa829b..3f7afb5 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -59,6 +59,10 @@
 #define SPRN_GQR6		918
 #define SPRN_GQR7		919
 
+/* Book3S_32 defines mfsrin(v) - but that messes up our abstract
+ * function pointers, so let's just disable the define. */
+#undef mfsrin
+
 int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                            unsigned int inst, int *advance)
 {
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index da571f8..86f9bde 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -242,11 +242,11 @@ define_load_up(vsx)
 
 .global kvmppc_trampoline_lowmem
 kvmppc_trampoline_lowmem:
-	.long kvmppc_handler_lowmem_trampoline - _stext
+	.long kvmppc_handler_lowmem_trampoline - CONFIG_KERNEL_START
 
 .global kvmppc_trampoline_enter
 kvmppc_trampoline_enter:
-	.long kvmppc_handler_trampoline_enter - _stext
+	.long kvmppc_handler_trampoline_enter - CONFIG_KERNEL_START
 
 #include "book3s_segment.S"
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 23/27] KVM: PPC: Export MMU variables
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA
  Cc: kvm-u79uwXL29TY76Z2rM5mHXA, Benjamin Herrenschmidt

Our shadow MMU code needs to know where the HTAB is located and how
big it is. So we need some variables from the kernel exported to
module space if KVM is built as a module.
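
For context, the shadow MMU uses the two roughly like this when picking
a PTEG to install a shadow PTE into (a hedged sketch; the helper name
is made up and the mask layout follows the classic 32-bit hashed page
table format, so double-check against the real code):

static u32 *htab_pteg(u32 vsid, u32 eaddr, bool primary)
{
	u32 page = (eaddr & 0x0fffffff) >> 12;
	u32 hash = (vsid ^ page) << 6;
	/* _SDR1 encodes the HTAB size; Hash is its base address */
	u32 htabmask = ((_SDR1 & 0x1ff) << 16) | 0xffc0;

	if (!primary)
		hash = ~hash;

	return (u32 *)((ulong)Hash + (hash & htabmask));
}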

CC: Benjamin Herrenschmidt <benh-XVmvHMARGAS8U2dJNN8I7kB+6BGkLq7r@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kernel/ppc_ksyms.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index bc9f39d..2b7c43f 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -178,6 +178,11 @@ EXPORT_SYMBOL(switch_mmu_context);
 extern long mol_trampoline;
 EXPORT_SYMBOL(mol_trampoline); /* For MOL */
 EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
+
+extern struct hash_pte *Hash;
+extern unsigned long _SDR1;
+EXPORT_SYMBOL_GPL(Hash); /* For KVM */
+EXPORT_SYMBOL_GPL(_SDR1); /* For KVM */
 #ifdef CONFIG_SMP
 extern int mmu_hash_lock;
 EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 23/27] KVM: PPC: Export MMU variables
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA
  Cc: kvm-u79uwXL29TY76Z2rM5mHXA, Benjamin Herrenschmidt

Our shadow MMU code needs to know where the HTAB is located and how
big it is. So we need some variables from the kernel exported to
module space if KVM is built as a module.

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/ppc_ksyms.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index bc9f39d..2b7c43f 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -178,6 +178,11 @@ EXPORT_SYMBOL(switch_mmu_context);
 extern long mol_trampoline;
 EXPORT_SYMBOL(mol_trampoline); /* For MOL */
 EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
+
+extern struct hash_pte *Hash;
+extern unsigned long _SDR1;
+EXPORT_SYMBOL_GPL(Hash); /* For KVM */
+EXPORT_SYMBOL_GPL(_SDR1); /* For KVM */
 #ifdef CONFIG_SMP
 extern int mmu_hash_lock;
 EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 24/27] PPC: Export SWITCH_FRAME_SIZE
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA
  Cc: kvm-u79uwXL29TY76Z2rM5mHXA, Benjamin Herrenschmidt

We need the SWITCH_FRAME_SIZE define on Book3S_32 now too.
So let's export it unconditionally.

CC: Benjamin Herrenschmidt <benh-XVmvHMARGAS8U2dJNN8I7kB+6BGkLq7r@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kernel/asm-offsets.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 1804c2c..2716c51 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -210,8 +210,8 @@ int main(void)
 	/* Interrupt register frame */
 	DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD);
 	DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
-#ifdef CONFIG_PPC64
 	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
+#ifdef CONFIG_PPC64
 	/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
 	DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
 	DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 24/27] PPC: Export SWITCH_FRAME_SIZE
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA
  Cc: kvm-u79uwXL29TY76Z2rM5mHXA, Benjamin Herrenschmidt

We need the SWITCH_FRAME_SIZE define on Book3S_32 now too.
So let's export it unconditionally.

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/asm-offsets.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 1804c2c..2716c51 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -210,8 +210,8 @@ int main(void)
 	/* Interrupt register frame */
 	DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD);
 	DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
-#ifdef CONFIG_PPC64
 	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
+#ifdef CONFIG_PPC64
 	/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
 	DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
 	DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 25/27] KVM: PPC: Check max IRQ prio
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-15 22:11     ` Alexander Graf
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We have a define for the upper bound of IRQ priorities. So we might as
well use it in the bit checking code and avoid triggering invalid IRQ
values.
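
The old bound was 32 (sizeof(unsigned int) * 8), so on a 64-bit
pending-exceptions word any stray bit between the real last priority
and bit 32 would still have been treated as an IRQ priority. With the
define, the loop never looks past the valid range; a sketch of the
delivery loop with the bound in place (the DEC special case from the
real code is omitted):

	priority = __ffs(*pending);
	while (priority < BOOK3S_IRQPRIO_MAX) {
		if (kvmppc_book3s_irqprio_deliver(vcpu, priority))
			clear_bit(priority, pending);
		priority = find_next_bit(pending, BITS_PER_LONG,
					 priority + 1);
	}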

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index f5229f9..5805f99 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -336,7 +336,7 @@ void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu)
 		printk(KERN_EMERG "KVM: Check pending: %lx\n", vcpu->arch.pending_exceptions);
 #endif
 	priority = __ffs(*pending);
-	while (priority <= (sizeof(unsigned int) * 8)) {
+	while (priority < BOOK3S_IRQPRIO_MAX) {
 		if (kvmppc_book3s_irqprio_deliver(vcpu, priority) &&
 		    (priority != BOOK3S_IRQPRIO_DECREMENTER)) {
 			/* DEC interrupts get cleared by mtdec */
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 25/27] KVM: PPC: Check max IRQ prio
@ 2010-04-15 22:11     ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We have a define for the upper bound of IRQ priorities. So we might as
well use it in the bit checking code and avoid triggering invalid IRQ
values.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index f5229f9..5805f99 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -336,7 +336,7 @@ void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu)
 		printk(KERN_EMERG "KVM: Check pending: %lx\n", vcpu->arch.pending_exceptions);
 #endif
 	priority = __ffs(*pending);
-	while (priority <= (sizeof(unsigned int) * 8)) {
+	while (priority < BOOK3S_IRQPRIO_MAX) {
 		if (kvmppc_book3s_irqprio_deliver(vcpu, priority) &&
 		    (priority != BOOK3S_IRQPRIO_DECREMENTER)) {
 			/* DEC interrupts get cleared by mtdec */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 26/27] KVM: PPC: Add KVM intercept handlers
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

When an interrupt occurs we don't know yet if we're in guest context or
in host context. When in guest context, KVM needs to handle it.

So let's pull the same trick we did on Book3S_64: just add a macro to
determine whether we're in guest context, and if so, jump into the KVM code.
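
Conceptually, the macro boils down to a few instructions at the top of
each exception vector doing the equivalent of this (C rendering for
clarity; the real thing is assembler and the names below are
assumptions):

	/* DO_KVM, conceptually */
	if (svcpu->in_guest != KVM_GUEST_MODE_NONE)
		kvmppc_intercept(vec);		/* hand the trap to KVM */
	else
		linux_exception_prolog(vec);	/* the normal path */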

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/head_32.S |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index e025e89..98c4b29 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -33,6 +33,7 @@
 #include <asm/asm-offsets.h>
 #include <asm/ptrace.h>
 #include <asm/bug.h>
+#include <asm/kvm_book3s_asm.h>
 
 /* 601 only have IBAT; cr0.eq is set on 601 when using this macro */
 #define LOAD_BAT(n, reg, RA, RB)	\
@@ -303,6 +304,7 @@ __secondary_hold_acknowledge:
  */
 #define EXCEPTION(n, label, hdlr, xfer)		\
 	. = n;					\
+	DO_KVM n;				\
 label:						\
 	EXCEPTION_PROLOG;			\
 	addi	r3,r1,STACK_FRAME_OVERHEAD;	\
@@ -358,6 +360,7 @@ i##n:								\
  *	-- paulus.
  */
 	. = 0x200
+	DO_KVM  0x200
 	mtspr	SPRN_SPRG_SCRATCH0,r10
 	mtspr	SPRN_SPRG_SCRATCH1,r11
 	mfcr	r10
@@ -381,6 +384,7 @@ i##n:								\
 
 /* Data access exception. */
 	. = 0x300
+	DO_KVM  0x300
 DataAccess:
 	EXCEPTION_PROLOG
 	mfspr	r10,SPRN_DSISR
@@ -397,6 +401,7 @@ DataAccess:
 
 /* Instruction access exception. */
 	. = 0x400
+	DO_KVM  0x400
 InstructionAccess:
 	EXCEPTION_PROLOG
 	andis.	r0,r9,0x4000		/* no pte found? */
@@ -413,6 +418,7 @@ InstructionAccess:
 
 /* Alignment exception */
 	. = 0x600
+	DO_KVM  0x600
 Alignment:
 	EXCEPTION_PROLOG
 	mfspr	r4,SPRN_DAR
@@ -427,6 +433,7 @@ Alignment:
 
 /* Floating-point unavailable */
 	. = 0x800
+	DO_KVM  0x800
 FPUnavailable:
 BEGIN_FTR_SECTION
 /*
@@ -450,6 +457,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
 
 /* System call */
 	. = 0xc00
+	DO_KVM  0xc00
 SystemCall:
 	EXCEPTION_PROLOG
 	EXC_XFER_EE_LITE(0xc00, DoSyscall)
@@ -467,9 +475,11 @@ SystemCall:
  * by executing an altivec instruction.
  */
 	. = 0xf00
+	DO_KVM  0xf00
 	b	PerformanceMonitor
 
 	. = 0xf20
+	DO_KVM  0xf20
 	b	AltiVecUnavailable
 
 /*
@@ -882,6 +892,10 @@ __secondary_start:
 	RFI
 #endif /* CONFIG_SMP */
 
+#ifdef CONFIG_KVM_BOOK3S_HANDLER
+#include "../kvm/book3s_rmhandlers.S"
+#endif
+
 /*
  * Those generic dummy functions are kept for CPUs not
  * included in CONFIG_6xx
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread
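
Conceptually, DO_KVM is a guarded dispatch dropped at the top of every
exception vector. A rough C rendering of the idea follows; all names are
made up, and in reality the test is a couple of assembly instructions
against the shadow vcpu state rather than a function call:

	#include <stdio.h>

	struct shadow_vcpu_stub {
		int in_guest; /* set on guest entry, cleared on exit */
	};

	static void kvm_handler_stub(int exc_nr)
	{
		printf("KVM handles exception 0x%x\n", exc_nr);
	}

	static void host_prolog_stub(int exc_nr)
	{
		printf("normal Linux path for exception 0x%x\n", exc_nr);
	}

	/* roughly what "DO_KVM n" boils down to at vector n */
	static void do_kvm(struct shadow_vcpu_stub *svcpu, int exc_nr)
	{
		if (svcpu->in_guest) {
			kvm_handler_stub(exc_nr); /* branch into KVM code */
			return;
		}
		host_prolog_stub(exc_nr);         /* fall through to the host */
	}

	int main(void)
	{
		struct shadow_vcpu_stub svcpu = { .in_guest = 1 };

		do_kvm(&svcpu, 0x300); /* data access exception */
		return 0;
	}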

* [PATCH 27/27] KVM: PPC: Enable Book3S_32 KVM building
  2010-04-15 22:11 ` Alexander Graf
@ 2010-04-15 22:11   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-15 22:11 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Now that we have all the bits and pieces in place, let's enable building
of the Book3S_32 target.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/Kconfig  |   18 ++++++++++++++++++
 arch/powerpc/kvm/Makefile |   12 ++++++++++++
 2 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index d864698..b7baff7 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -25,10 +25,28 @@ config KVM
 config KVM_BOOK3S_HANDLER
 	bool
 
+config KVM_BOOK3S_32_HANDLER
+	bool
+	select KVM_BOOK3S_HANDLER
+
 config KVM_BOOK3S_64_HANDLER
 	bool
 	select KVM_BOOK3S_HANDLER
 
+config KVM_BOOK3S_32
+	tristate "KVM support for PowerPC book3s_32 processors"
+	depends on EXPERIMENTAL && PPC_BOOK3S_32 && !SMP && !PTE_64BIT
+	select KVM
+	select KVM_BOOK3S_32_HANDLER
+	---help---
+	  Support running unmodified book3s_32 guest kernels
+	  in virtual machines on book3s_32 host processors.
+
+	  This module provides access to the hardware capabilities through
+	  a character device node named /dev/kvm.
+
+	  If unsure, say N.
+
 config KVM_BOOK3S_64
 	tristate "KVM support for PowerPC book3s_64 processors"
 	depends on EXPERIMENTAL && PPC_BOOK3S_64
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index f621ce6..ff43606 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -50,9 +50,21 @@ kvm-book3s_64-objs := \
 	book3s_32_mmu.o
 kvm-objs-$(CONFIG_KVM_BOOK3S_64) := $(kvm-book3s_64-objs)
 
+kvm-book3s_32-objs := \
+	$(common-objs-y) \
+	fpu.o \
+	book3s_paired_singles.o \
+	book3s.o \
+	book3s_emulate.o \
+	book3s_interrupts.o \
+	book3s_32_mmu_host.o \
+	book3s_32_mmu.o
+kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs)
+
 kvm-objs := $(kvm-objs-m) $(kvm-objs-y)
 
 obj-$(CONFIG_KVM_440) += kvm.o
 obj-$(CONFIG_KVM_E500) += kvm.o
 obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o
+obj-$(CONFIG_KVM_BOOK3S_32) += kvm.o
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* Re: [PATCH 05/27] PPC: Split context init/destroy functions
  2010-04-15 22:11   ` Alexander Graf
@ 2010-04-16  6:46     ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  6:46 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> We need to reserve a context from KVM to make sure we have our own
> segment space. While we did that split for Book3S_64 already, 32 bit
> is still outstanding.
> 
> So let's split it now.
> 
> Signed-off-by: Alexander Graf <agraf@suse.de>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


> ---
>  arch/powerpc/include/asm/mmu_context.h |    2 ++
>  arch/powerpc/mm/mmu_context_hash32.c   |   29 ++++++++++++++++++++++-------
>  2 files changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 26383e0..81fb412 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -27,6 +27,8 @@ extern int __init_new_context(void);
>  extern void __destroy_context(int context_id);
>  static inline void mmu_context_init(void) { }
>  #else
> +extern unsigned long __init_new_context(void);
> +extern void __destroy_context(unsigned long context_id);
>  extern void mmu_context_init(void);
>  #endif
>  
> diff --git a/arch/powerpc/mm/mmu_context_hash32.c b/arch/powerpc/mm/mmu_context_hash32.c
> index 0dfba2b..d0ee554 100644
> --- a/arch/powerpc/mm/mmu_context_hash32.c
> +++ b/arch/powerpc/mm/mmu_context_hash32.c
> @@ -60,11 +60,7 @@
>  static unsigned long next_mmu_context;
>  static unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];
>  
> -
> -/*
> - * Set up the context for a new address space.
> - */
> -int init_new_context(struct task_struct *t, struct mm_struct *mm)
> +unsigned long __init_new_context(void)
>  {
>  	unsigned long ctx = next_mmu_context;
>  
> @@ -74,19 +70,38 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
>  			ctx = 0;
>  	}
>  	next_mmu_context = (ctx + 1) & LAST_CONTEXT;
> -	mm->context.id = ctx;
> +
> +	return ctx;
> +}
> +EXPORT_SYMBOL_GPL(__init_new_context);
> +
> +/*
> + * Set up the context for a new address space.
> + */
> +int init_new_context(struct task_struct *t, struct mm_struct *mm)
> +{
> +	mm->context.id = __init_new_context();
>  
>  	return 0;
>  }
>  
>  /*
> + * Free a context ID. Make sure to call this with preempt disabled!
> + */
> +void __destroy_context(unsigned long ctx)
> +{
> +	clear_bit(ctx, context_map);
> +}
> +EXPORT_SYMBOL_GPL(__destroy_context);
> +
> +/*
>   * We're finished using the context for an address space.
>   */
>  void destroy_context(struct mm_struct *mm)
>  {
>  	preempt_disable();
>  	if (mm->context.id != NO_CONTEXT) {
> -		clear_bit(mm->context.id, context_map);
> +		__destroy_context(mm->context.id);
>  		mm->context.id = NO_CONTEXT;
>  	}
>  	preempt_enable();



^ permalink raw reply	[flat|nested] 80+ messages in thread
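
A userspace-runnable approximation of what the split buys: the allocator
can be driven without an mm_struct, so KVM can reserve a context id for
its own shadow segments. This deliberately ignores the kernel's atomic
bitops and the full-map case, so treat it as a sketch only:

	#include <stdio.h>

	#define LAST_CONTEXT 32767
	#define BITS_PER_LONG (8 * sizeof(unsigned long))

	static unsigned long next_mmu_context;
	static unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];

	static unsigned long init_new_ctx(void) /* ~ __init_new_context() */
	{
		unsigned long ctx = next_mmu_context;

		while (context_map[ctx / BITS_PER_LONG] &
		       (1UL << (ctx % BITS_PER_LONG)))
			ctx = (ctx + 1) & LAST_CONTEXT;

		context_map[ctx / BITS_PER_LONG] |= 1UL << (ctx % BITS_PER_LONG);
		next_mmu_context = (ctx + 1) & LAST_CONTEXT;
		return ctx;
	}

	static void destroy_ctx(unsigned long ctx) /* ~ __destroy_context() */
	{
		context_map[ctx / BITS_PER_LONG] &= ~(1UL << (ctx % BITS_PER_LONG));
	}

	int main(void)
	{
		unsigned long mm_ctx  = init_new_ctx(); /* a task's mm gets one */
		unsigned long kvm_ctx = init_new_ctx(); /* ...and now KVM can too */

		printf("mm ctx %lu, kvm ctx %lu\n", mm_ctx, kvm_ctx);
		destroy_ctx(kvm_ctx);
		destroy_ctx(mm_ctx);
		return 0;
	}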

* Re: [PATCH 12/27] PPC: Add STLU
  2010-04-15 22:11   ` Alexander Graf
@ 2010-04-16  6:47     ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  6:47 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> For assembly code there are several "long" load and store defines already.
> The one that's missing is the typical stack store, stdu/stwu.
> 
> So let's add that define as well, making my KVM code happy.
> 

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/include/asm/asm-compat.h |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/asm-compat.h b/arch/powerpc/include/asm/asm-compat.h
> index a9b91ed..2048a6a 100644
> --- a/arch/powerpc/include/asm/asm-compat.h
> +++ b/arch/powerpc/include/asm/asm-compat.h
> @@ -21,6 +21,7 @@
>  /* operations for longs and pointers */
>  #define PPC_LL		stringify_in_c(ld)
>  #define PPC_STL		stringify_in_c(std)
> +#define PPC_STLU	stringify_in_c(stdu)
>  #define PPC_LCMPI	stringify_in_c(cmpdi)
>  #define PPC_LONG	stringify_in_c(.llong)
>  #define PPC_LONG_ALIGN	stringify_in_c(.balign 8)
> @@ -44,6 +45,7 @@
>  /* operations for longs and pointers */
>  #define PPC_LL		stringify_in_c(lwz)
>  #define PPC_STL		stringify_in_c(stw)
> +#define PPC_STLU	stringify_in_c(stwu)
>  #define PPC_LCMPI	stringify_in_c(cmpwi)
>  #define PPC_LONG	stringify_in_c(.long)
>  #define PPC_LONG_ALIGN	stringify_in_c(.balign 4)



^ permalink raw reply	[flat|nested] 80+ messages in thread
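
The asm-compat mechanism behind this is plain C preprocessor: pick the
64-bit or 32-bit mnemonic once, then write width-agnostic assembly. A
compile-anywhere sketch of the idea, with the mnemonics as strings so it
runs in userspace:

	#include <stdio.h>

	/* stand-ins for the stringify_in_c() definitions in asm-compat.h */
	#ifdef __powerpc64__
	#define PPC_STL  "std"
	#define PPC_STLU "stdu"
	#else
	#define PPC_STL  "stw"
	#define PPC_STLU "stwu"
	#endif

	int main(void)
	{
		/* a width-agnostic stack push would be written as
		 *     PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
		 * and the preprocessor picks stdu or stwu accordingly */
		printf("store: %s, stack push: %s\n", PPC_STL, PPC_STLU);
		return 0;
	}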

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
       [not found]     ` <1271369518-11247-24-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-16  6:47         ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  6:47 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> Our shadow MMU code needs to know where the HTAB is located and how
> big it is. So we need some variables from the kernel exported to
> module space if KVM is built as a module.

Gross :-) Can't you just read the real SDR1 ? :-)

Cheers,
Ben.

> CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kernel/ppc_ksyms.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
> index bc9f39d..2b7c43f 100644
> --- a/arch/powerpc/kernel/ppc_ksyms.c
> +++ b/arch/powerpc/kernel/ppc_ksyms.c
> @@ -178,6 +178,11 @@ EXPORT_SYMBOL(switch_mmu_context);
>  extern long mol_trampoline;
>  EXPORT_SYMBOL(mol_trampoline); /* For MOL */
>  EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
> +
> +extern struct hash_pte *Hash;
> +extern unsigned long _SDR1;
> +EXPORT_SYMBOL_GPL(Hash); /* For KVM */
> +EXPORT_SYMBOL_GPL(_SDR1); /* For KVM */
>  #ifdef CONFIG_SMP
>  extern int mmu_hash_lock;
>  EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 24/27] PPC: Export SWITCH_FRAME_SIZE
  2010-04-15 22:11     ` Alexander Graf
@ 2010-04-16  6:48       ` Benjamin Herrenschmidt
  -1 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  6:48 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> We need the SWITCH_FRAME_SIZE define on Book3S_32 now too.
> So let's export it unconditionally.
> 

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kernel/asm-offsets.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 1804c2c..2716c51 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -210,8 +210,8 @@ int main(void)
>  	/* Interrupt register frame */
>  	DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD);
>  	DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
> -#ifdef CONFIG_PPC64
>  	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
> +#ifdef CONFIG_PPC64
>  	/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
>  	DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
>  	DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);



^ permalink raw reply	[flat|nested] 80+ messages in thread
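
For reference, the DEFINE() trick that makes this work: asm-offsets.c is
only ever compiled with -S, and the build scrapes the "->NAME value"
markers out of the generated assembly to produce asm-offsets.h. A minimal
illustration (struct layout and overhead value are made up; compile with
-S, the output is not meant to be assembled):

	/* gcc -S offsets.c && grep -- '->' offsets.s */
	#define DEFINE(sym, val) \
		asm volatile("\n->" #sym " %0 " #val : : "i" (val))

	struct pt_regs_stub { unsigned long gpr[32], nip, msr; };
	#define STACK_FRAME_OVERHEAD 112 /* illustrative only */

	void emit_offsets(void)
	{
		DEFINE(SWITCH_FRAME_SIZE,
		       STACK_FRAME_OVERHEAD + sizeof(struct pt_regs_stub));
	}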

* Re: [PATCH 26/27] KVM: PPC: Add KVM intercept handlers
       [not found]   ` <1271369518-11247-27-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-16  6:48       ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  6:48 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> When an interrupt occurs we don't know yet if we're in guest context or
> in host context. When in guest context, KVM needs to handle it.
> 
> So let's pull the same trick we did on Book3S_64: just add a macro that
> determines whether we're in guest context and, if so, jumps to the KVM code.
> 
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kernel/head_32.S |   14 ++++++++++++++
>  1 files changed, 14 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
> index e025e89..98c4b29 100644
> --- a/arch/powerpc/kernel/head_32.S
> +++ b/arch/powerpc/kernel/head_32.S
> @@ -33,6 +33,7 @@
>  #include <asm/asm-offsets.h>
>  #include <asm/ptrace.h>
>  #include <asm/bug.h>
> +#include <asm/kvm_book3s_asm.h>
>  
>  /* 601 only have IBAT; cr0.eq is set on 601 when using this macro */
>  #define LOAD_BAT(n, reg, RA, RB)	\
> @@ -303,6 +304,7 @@ __secondary_hold_acknowledge:
>   */
>  #define EXCEPTION(n, label, hdlr, xfer)		\
>  	. = n;					\
> +	DO_KVM n;				\
>  label:						\
>  	EXCEPTION_PROLOG;			\
>  	addi	r3,r1,STACK_FRAME_OVERHEAD;	\
> @@ -358,6 +360,7 @@ i##n:								\
>   *	-- paulus.
>   */
>  	. = 0x200
> +	DO_KVM  0x200
>  	mtspr	SPRN_SPRG_SCRATCH0,r10
>  	mtspr	SPRN_SPRG_SCRATCH1,r11
>  	mfcr	r10
> @@ -381,6 +384,7 @@ i##n:								\
>  
>  /* Data access exception. */
>  	. = 0x300
> +	DO_KVM  0x300
>  DataAccess:
>  	EXCEPTION_PROLOG
>  	mfspr	r10,SPRN_DSISR
> @@ -397,6 +401,7 @@ DataAccess:
>  
>  /* Instruction access exception. */
>  	. = 0x400
> +	DO_KVM  0x400
>  InstructionAccess:
>  	EXCEPTION_PROLOG
>  	andis.	r0,r9,0x4000		/* no pte found? */
> @@ -413,6 +418,7 @@ InstructionAccess:
>  
>  /* Alignment exception */
>  	. = 0x600
> +	DO_KVM  0x600
>  Alignment:
>  	EXCEPTION_PROLOG
>  	mfspr	r4,SPRN_DAR
> @@ -427,6 +433,7 @@ Alignment:
>  
>  /* Floating-point unavailable */
>  	. = 0x800
> +	DO_KVM  0x800
>  FPUnavailable:
>  BEGIN_FTR_SECTION
>  /*
> @@ -450,6 +457,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
>  
>  /* System call */
>  	. = 0xc00
> +	DO_KVM  0xc00
>  SystemCall:
>  	EXCEPTION_PROLOG
>  	EXC_XFER_EE_LITE(0xc00, DoSyscall)
> @@ -467,9 +475,11 @@ SystemCall:
>   * by executing an altivec instruction.
>   */
>  	. = 0xf00
> +	DO_KVM  0xf00
>  	b	PerformanceMonitor
>  
>  	. = 0xf20
> +	DO_KVM  0xf20
>  	b	AltiVecUnavailable
>  
>  /*
> @@ -882,6 +892,10 @@ __secondary_start:
>  	RFI
>  #endif /* CONFIG_SMP */
>  
> +#ifdef CONFIG_KVM_BOOK3S_HANDLER
> +#include "../kvm/book3s_rmhandlers.S"
> +#endif
> +
>  /*
>   * Those generic dummy functions are kept for CPUs not
>   * included in CONFIG_6xx

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
  2010-04-16  6:47         ` Benjamin Herrenschmidt
@ 2010-04-16  9:07           ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-16  9:07 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: kvm-ppc, kvm


On 16.04.2010, at 08:47, Benjamin Herrenschmidt wrote:

> On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
>> Our shadow MMU code needs to know where the HTAB is located and how
>> big it is. So we need some variables from the kernel exported to
>> module space if KVM is built as a module.
> 
> Gross :-) Can't you just read the real SDR1 ? :-)

I figured this is faster.

Alex

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
       [not found]           ` <41F9F369-9F53-43E8-AC1F-1C67DD918991-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-16  9:22               ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 80+ messages in thread
From: Benjamin Herrenschmidt @ 2010-04-16  9:22 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On Fri, 2010-04-16 at 11:07 +0200, Alexander Graf wrote:
> On 16.04.2010, at 08:47, Benjamin Herrenschmidt wrote:
> 
> > On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
> >> Our shadow MMU code needs to know where the HTAB is located and how
> >> big it is. So we need some variables from the kernel exported to
> >> module space if KVM is built as a module.
> > 
> > Gross :-) Can't you just read the real SDR1 ? :-)
> 
> I figured this is faster.

In a fast path it might be, but I'm not too sure I like exporting
those... Oh well, Ack for now, maybe we'll come up with something nicer
at some stage.

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
  2010-04-16  9:22               ` Benjamin Herrenschmidt
@ 2010-04-16  9:25                 ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-16  9:25 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: kvm-ppc, kvm


On 16.04.2010, at 11:22, Benjamin Herrenschmidt wrote:

> On Fri, 2010-04-16 at 11:07 +0200, Alexander Graf wrote:
>> On 16.04.2010, at 08:47, Benjamin Herrenschmidt wrote:
>> 
>>> On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
>>>> Our shadow MMU code needs to know where the HTAB is located and how
>>>> big it is. So we need some variables from the kernel exported to
>>>> module space if KVM is built as a module.
>>> 
>>> Gross :-) Can't you just read the real SDR1 ? :-)
>> 
>> I figured this is faster.
> 
> In a fast path it might be, but I'm not too sure I like exporting
> those... Oh well, Ack for now, maybe we'll come up with something nicer
> at some stage.

Well, I did look into reusing the existing functions for HTAB modification, but they're incredibly tightly coupled to Linux PTEs, partially even modifying them along the way. So this was a much much easier alternative.

Of course, I guess I could do a mfsdr1 on init and just take it from there? The HTAB location shouldn't change right?


Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread
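
For reference, "just read the real SDR1" would amount to a decode like the
following on a 32-bit hash MMU, where the register packs the HTAB base in
its high half and a 9-bit mask that encodes the table size. The layout
here is per the classic 32-bit PowerPC architecture books and should be
double-checked against the CPU manual; in kernel code the value would come
from mfspr rather than a parameter:

	#include <stdio.h>

	static void decode_sdr1(unsigned int sdr1)
	{
		unsigned int htaborg  = sdr1 & 0xffff0000u;   /* HTAB physical base */
		unsigned int htabmask = sdr1 & 0x1ffu;        /* HTABMASK */
		unsigned int htabsize = (htabmask + 1) << 16; /* (mask+1) * 64KB */

		printf("HTAB at 0x%08x, %u bytes\n", htaborg, htabsize);
	}

	int main(void)
	{
		decode_sdr1(0x00400003); /* e.g. HTAB at 4MB, 256KB large */
		return 0;
	}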

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
  2010-04-16  9:25                 ` Alexander Graf
@ 2010-04-16  9:31                   ` Alexander Graf
  -1 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-16  9:31 UTC (permalink / raw)
  To: Alexander Graf; +Cc: Benjamin Herrenschmidt, kvm-ppc, kvm


On 16.04.2010, at 11:25, Alexander Graf wrote:

> 
> On 16.04.2010, at 11:22, Benjamin Herrenschmidt wrote:
> 
>> On Fri, 2010-04-16 at 11:07 +0200, Alexander Graf wrote:
>>> On 16.04.2010, at 08:47, Benjamin Herrenschmidt wrote:
>>> 
>>>> On Fri, 2010-04-16 at 00:11 +0200, Alexander Graf wrote:
>>>>> Our shadow MMU code needs to know where the HTAB is located and how
>>>>> big it is. So we need some variables from the kernel exported to
>>>>> module space if KVM is built as a module.
>>>> 
>>>> Gross :-) Can't you just read the real SDR1 ? :-)
>>> 
>>> I figured this is faster.
>> 
>> In a fast path it might be, but I'm not too sure I like exporting
>> those... Oh well, Ack for now, maybe we'll come up with something nicer
>> at some stage.
> 
> Well, I did look into reusing the existing functions for HTAB modification, but they're incredibly tightly coupled to Linux PTEs, partially even modifying them along the way. So this was a much much easier alternative.
> 
> Of course, I guess I could do a mfsdr1 on init and just take it from there? The HTAB location shouldn't change right?

Either way, please apply the patches as is. I'll do a cleanup later. PPC32 isn't exactly a heavily moving target :-).


Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 23/27] KVM: PPC: Export MMU variables
       [not found]                   ` <8EEEAFCF-CB31-4BD9-A917-08B5B6E40400-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-16 11:18                       ` Alexander Graf
  0 siblings, 0 replies; 80+ messages in thread
From: Alexander Graf @ 2010-04-16 11:18 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Benjamin Herrenschmidt, kvm-devel list


On 16.04.2010, at 11:31, Alexander Graf wrote:

> 
> On 16.04.2010, at 11:25, Alexander Graf wrote:
> 
>> 
>> On 16.04.2010, at 11:22, Benjamin Herrenschmidt wrote:
>> 
>> Well, I did look into reusing the existing functions for HTAB modification, but they're incredibly tightly coupled to Linux PTEs, partially even modifying them along the way. So this was a much much easier alternative.
>> 
>> Of course, I guess I could do a mfsdr1 on init and just take it from there? The HTAB location shouldn't change right?
> 
> Either way, please apply the patches as is. I'll do a cleanup later. PPC32 isn't exactly a heavily moving target :-).

Clarification: Marcelo or Avi, please take it into kvm.git. I doubt there will be any changes in powerpc.git to these code paths, so we can expect this to be rather collision-free. And I'd prefer to work in kvm.git :).


Alex

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 01/27] KVM: PPC: Name generic 64-bit code generic
  2010-04-15 22:11     ` Alexander Graf
@ 2010-04-21  9:29       ` Avi Kivity
  -1 siblings, 0 replies; 80+ messages in thread
From: Avi Kivity @ 2010-04-21  9:29 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 04/16/2010 01:11 AM, Alexander Graf wrote:
> We have quite some code that can be used by Book3S_32 and Book3S_64 alike,
> so let's call it "Book3S" instead of "Book3S_64", so we can later on
> use it from the 32 bit port too.
>
> Signed-off-by: Alexander Graf<agraf@suse.de>
> ---
>   arch/powerpc/include/asm/kvm_book3s.h        |    2 +-
>   arch/powerpc/include/asm/kvm_book3s_64_asm.h |   76 ----
>   arch/powerpc/include/asm/kvm_book3s_asm.h    |   76 ++++
>   arch/powerpc/include/asm/paca.h              |    2 +-
>   arch/powerpc/kernel/head_64.S                |    4 +-
>   arch/powerpc/kvm/Makefile                    |    6 +-
>   arch/powerpc/kvm/book3s_64_emulate.c         |  566 --------------------------
>   arch/powerpc/kvm/book3s_64_exports.c         |   32 --
>   arch/powerpc/kvm/book3s_64_interrupts.S      |  318 ---------------
>   arch/powerpc/kvm/book3s_64_rmhandlers.S      |  195 ---------
>   arch/powerpc/kvm/book3s_emulate.c            |  566 ++++++++++++++++++++++++++
>   arch/powerpc/kvm/book3s_exports.c            |   32 ++
>   arch/powerpc/kvm/book3s_interrupts.S         |  318 +++++++++++++++
>   arch/powerpc/kvm/book3s_rmhandlers.S         |  195 +++++++++
>   14 files changed, 1194 insertions(+), 1194 deletions(-)
>   delete mode 100644 arch/powerpc/include/asm/kvm_book3s_64_asm.h
>   create mode 100644 arch/powerpc/include/asm/kvm_book3s_asm.h
>   delete mode 100644 arch/powerpc/kvm/book3s_64_emulate.c
>   delete mode 100644 arch/powerpc/kvm/book3s_64_exports.c
>   delete mode 100644 arch/powerpc/kvm/book3s_64_interrupts.S
>   delete mode 100644 arch/powerpc/kvm/book3s_64_rmhandlers.S
>   create mode 100644 arch/powerpc/kvm/book3s_emulate.c
>   create mode 100644 arch/powerpc/kvm/book3s_exports.c
>   create mode 100644 arch/powerpc/kvm/book3s_interrupts.S
>   create mode 100644 arch/powerpc/kvm/book3s_rmhandlers.S
>
>    

Please set your gitconfig diff.renames to true; these kinds of patches
come out much nicer that way.




-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 80+ messages in thread
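
For anyone following along, that is

	git config --global diff.renames true

(or "copies" to catch copied files as well); git then emits compact
"rename from / rename to" hunks instead of full delete-and-create diffs.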

* Re: [PATCH 00/27] Book3S_32 (PPC32) KVM support
       [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-04-21  9:42     ` Avi Kivity
  2010-04-15 22:11     ` Alexander Graf
                       ` (15 subsequent siblings)
  16 siblings, 0 replies; 80+ messages in thread
From: Avi Kivity @ 2010-04-21  9:42 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 04/16/2010 01:11 AM, Alexander Graf wrote:
> Since we do have support for Book3S_64 KVM now, the next obvious step is to
> support the generation before that: Book3S_32.
>
> This patch set adds support for Book3S_32 hosts, making your old G4 this much
> more useful. It should also work on fancy exotic systems like the Wii and the
> Game Cube, but I haven't tried yet.
>
> As far as the path I took goes, I tried to merge as much functionality and code
> as possible with the 64 bit host support. So whenever code was reusable, it gets
> reused.
>
>    

Applied all.  Whew!

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 80+ messages in thread

end of thread, other threads:[~2010-04-21  9:42 UTC | newest]

Thread overview: 80+ messages
2010-04-15 22:11 [PATCH 00/27] Book3S_32 (PPC32) KVM support Alexander Graf
2010-04-15 22:11 ` Alexander Graf
2010-04-15 22:11 ` [PATCH 05/27] PPC: Split context init/destroy functions Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-16  6:46   ` Benjamin Herrenschmidt
2010-04-16  6:46     ` Benjamin Herrenschmidt
2010-04-15 22:11 ` [PATCH 06/27] KVM: PPC: Add kvm_book3s_64.h Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 07/27] KVM: PPC: Add kvm_book3s_32.h Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 08/27] KVM: PPC: Add fields to shadow vcpu Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 12/27] PPC: Add STLU Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-16  6:47   ` Benjamin Herrenschmidt
2010-04-16  6:47     ` Benjamin Herrenschmidt
     [not found] ` <1271369518-11247-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-04-15 22:11   ` [PATCH 01/27] KVM: PPC: Name generic 64-bit code generic Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-21  9:29     ` Avi Kivity
2010-04-21  9:29       ` Avi Kivity
2010-04-15 22:11   ` [PATCH 02/27] KVM: PPC: Add host MMU Support Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 03/27] KVM: PPC: Add SR swapping code Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 04/27] KVM: PPC: Add generic segment switching code Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 09/27] KVM: PPC: Improve indirect svcpu accessors Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 10/27] KVM: PPC: Use KVM_BOOK3S_HANDLER Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 11/27] KVM: PPC: Use CONFIG_PPC_BOOK3S define Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 13/27] KVM: PPC: Use now shadowed vcpu fields Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 14/27] KVM: PPC: Extract MMU init Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 15/27] KVM: PPC: Make real mode handler generic Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 16/27] KVM: PPC: Make highmem code generic Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 17/27] KVM: PPC: Make SLB switching code the new segment framework Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 18/27] KVM: PPC: Release clean pages as clean Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-15 22:11   ` [PATCH 23/27] KVM: PPC: Export MMU variables Alexander Graf
2010-04-15 22:11     ` Alexander Graf
     [not found]     ` <1271369518-11247-24-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-04-16  6:47       ` Benjamin Herrenschmidt
2010-04-16  6:47         ` Benjamin Herrenschmidt
2010-04-16  9:07         ` Alexander Graf
2010-04-16  9:07           ` Alexander Graf
     [not found]           ` <41F9F369-9F53-43E8-AC1F-1C67DD918991-l3A5Bk7waGM@public.gmane.org>
2010-04-16  9:22             ` Benjamin Herrenschmidt
2010-04-16  9:22               ` Benjamin Herrenschmidt
2010-04-16  9:25               ` Alexander Graf
2010-04-16  9:25                 ` Alexander Graf
2010-04-16  9:31                 ` Alexander Graf
2010-04-16  9:31                   ` Alexander Graf
     [not found]                   ` <8EEEAFCF-CB31-4BD9-A917-08B5B6E40400-l3A5Bk7waGM@public.gmane.org>
2010-04-16 11:18                     ` Alexander Graf
2010-04-16 11:18                       ` Alexander Graf
2010-04-15 22:11   ` [PATCH 24/27] PPC: Export SWITCH_FRAME_SIZE Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-16  6:48     ` Benjamin Herrenschmidt
2010-04-16  6:48       ` Benjamin Herrenschmidt
2010-04-15 22:11   ` [PATCH 25/27] KVM: PPC: Check max IRQ prio Alexander Graf
2010-04-15 22:11     ` Alexander Graf
2010-04-21  9:42   ` [PATCH 00/27] Book3S_32 (PPC32) KVM support Avi Kivity
2010-04-21  9:42     ` Avi Kivity
2010-04-15 22:11 ` [PATCH 19/27] KVM: PPC: Remove fetch fail code Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 20/27] KVM: PPC: Add SVCPU to Book3S_32 Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 21/27] KVM: PPC: Emulate segment fault Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 22/27] KVM: PPC: Add Book3S compatibility code Alexander Graf
2010-04-15 22:11   ` Alexander Graf
2010-04-15 22:11 ` [PATCH 26/27] KVM: PPC: Add KVM intercept handlers Alexander Graf
2010-04-15 22:11   ` Alexander Graf
     [not found]   ` <1271369518-11247-27-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-04-16  6:48     ` Benjamin Herrenschmidt
2010-04-16  6:48       ` Benjamin Herrenschmidt
2010-04-15 22:11 ` [PATCH 27/27] KVM: PPC: Enable Book3S_32 KVM building Alexander Graf
2010-04-15 22:11   ` Alexander Graf
