* [PATCH 0/9] Post-PPC32 series v2
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

While working with the PPC32 host target we finally have, I stumbled over
several things. Thanks to the performance measurements that are now possible,
I also tracked down split mode as one of the major slowdowns in KVM.

What still slows us down is the normal flushing code, which needs to move
to a table-based lookup, and instruction emulation. On PPC32 guests we
waste about 70% of our time emulating mfmsr, mtmsr, mfsprg, mtsprg
and friends.
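
For illustration only (this series does not contain it), the table-based
lookup hinted at above would replace a chain of compares with one indexed
dispatch; the table and handler names here are hypothetical:

  /* Hypothetical sketch of table-based instruction emulation.
   * mfmsr/mtmsr/mfsprg/mtsprg are X-form opcodes, so they can be
   * dispatched via their extended opcode (bits 21-30) instead of
   * a long if/else chain. */
  typedef int (*emu_fn_t)(struct kvm_vcpu *vcpu, u32 inst);

  static emu_fn_t emu_table[1024];        /* indexed by extended opcode */

  static int emulate_fast(struct kvm_vcpu *vcpu, u32 inst)
  {
          emu_fn_t fn = emu_table[(inst >> 1) & 0x3ff];

          return fn ? fn(vcpu, inst) : EMULATE_FAIL;
  }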

Either way - this patch series supersedes the earlier performance counter
and u64 patches.

Avi / Marcelo, please apply the former series and this series. Ignore the
two patches in between.

v1 -> v2:

  - add paired single patch
  - move WARN bailing to the correct patch

Alexander Graf (9):
  KVM: PPC: Convert u64 -> ulong
  KVM: PPC: Make Performance Counters work
  KVM: PPC: Improve split mode
  KVM: PPC: Make Alignment interrupts work again
  KVM: PPC: Be more informative on BUG
  KVM: PPC: Set VSID_PR also for Book3S_64
  KVM: PPC: Fix Book3S_64 Host MMU debug output
  KVM: PPC: Find HTAB ourselves
  KVM: PPC: Enable native paired singles

 arch/powerpc/include/asm/kvm_asm.h    |    1 +
 arch/powerpc/include/asm/kvm_book3s.h |   13 +++----
 arch/powerpc/include/asm/kvm_host.h   |    6 ++--
 arch/powerpc/kernel/ppc_ksyms.c       |    5 ---
 arch/powerpc/kvm/book3s.c             |   56 +++++++++++++++++++++++----------
 arch/powerpc/kvm/book3s_32_mmu.c      |   27 +++++++++------
 arch/powerpc/kvm/book3s_32_mmu_host.c |   29 +++++++++-------
 arch/powerpc/kvm/book3s_64_mmu.c      |   34 ++++++++++++--------
 arch/powerpc/kvm/book3s_64_mmu_host.c |   36 +++++++++++++-------
 arch/powerpc/kvm/book3s_emulate.c     |    5 ++-
 arch/powerpc/kvm/book3s_interrupts.S  |    2 +
 arch/powerpc/kvm/book3s_segment.S     |    2 +
 12 files changed, 132 insertions(+), 84 deletions(-)



* [PATCH 1/9] KVM: PPC: Convert u64 -> ulong
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

There are some pieces in the code that I overlooked and that still use
u64s instead of longs. This slows down 32-bit hosts unnecessarily, so
let's just move them to ulong.
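
For context, a minimal illustration (not from the patch) of why this
matters: on a 32-bit host a u64 occupies a register pair, so every mask
or compare takes twice the work of a native long:

  /* Illustrative only: the same page-mask operation in both widths.
   * On ppc32 the u64 form needs a register pair and two AND
   * instructions; the ulong form needs one of each. */
  u64   ea_wide   = eaddr & ~0xFFFULL;
  ulong ea_native = eaddr & ~0xFFFUL;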

Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - don't touch vsid - that stays u64!
---
 arch/powerpc/include/asm/kvm_book3s.h |    4 ++--
 arch/powerpc/include/asm/kvm_host.h   |    6 +++---
 arch/powerpc/kvm/book3s.c             |    6 +++---
 arch/powerpc/kvm/book3s_32_mmu.c      |    6 +++---
 arch/powerpc/kvm/book3s_32_mmu_host.c |    8 +++-----
 arch/powerpc/kvm/book3s_64_mmu.c      |    4 ++--
 arch/powerpc/kvm/book3s_64_mmu_host.c |    6 +++---
 7 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9517b8d..5d3bd0c 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -107,9 +107,9 @@ struct kvmppc_vcpu_book3s {
 #define VSID_BAT	0x7fffffffffb00000ULL
 #define VSID_PR		0x8000000000000000ULL
 
-extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 ea, u64 ea_mask);
+extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong ea, ulong ea_mask);
 extern void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 vp, u64 vp_mask);
-extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end);
+extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end);
 extern void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 new_msr);
 extern void kvmppc_mmu_book3s_64_init(struct kvm_vcpu *vcpu);
 extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 5a83995..0c9ad86 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -124,9 +124,9 @@ struct kvm_arch {
 };
 
 struct kvmppc_pte {
-	u64 eaddr;
+	ulong eaddr;
 	u64 vpage;
-	u64 raddr;
+	ulong raddr;
 	bool may_read		: 1;
 	bool may_write		: 1;
 	bool may_execute	: 1;
@@ -145,7 +145,7 @@ struct kvmppc_mmu {
 	int  (*xlate)(struct kvm_vcpu *vcpu, gva_t eaddr, struct kvmppc_pte *pte, bool data);
 	void (*reset_msr)(struct kvm_vcpu *vcpu);
 	void (*tlbie)(struct kvm_vcpu *vcpu, ulong addr, bool large);
-	int  (*esid_to_vsid)(struct kvm_vcpu *vcpu, u64 esid, u64 *vsid);
+	int  (*esid_to_vsid)(struct kvm_vcpu *vcpu, ulong esid, u64 *vsid);
 	u64  (*ea_to_vp)(struct kvm_vcpu *vcpu, gva_t eaddr, bool data);
 	bool (*is_dcbz32)(struct kvm_vcpu *vcpu);
 };
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 5805f99..a7de709 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -812,12 +812,12 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 *     so we can't use the NX bit inside the guest. Let's cross our fingers,
 			 *     that no guest that needs the dcbz hack does NX.
 			 */
-			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFULL);
+			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFUL);
 			r = RESUME_GUEST;
 		} else {
 			vcpu->arch.msr |= to_svcpu(vcpu)->shadow_srr1 & 0x58000000;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFULL);
+			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
@@ -843,7 +843,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			vcpu->arch.dear = dar;
 			to_book3s(vcpu)->dsisr = to_svcpu(vcpu)->fault_dsisr;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, vcpu->arch.dear, ~0xFFFULL);
+			kvmppc_mmu_pte_flush(vcpu, vcpu->arch.dear, ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 48efb37..33186b7 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -60,7 +60,7 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
-static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid);
 
 static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
@@ -183,7 +183,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 	struct kvmppc_sr *sre;
 	hva_t ptegp;
 	u32 pteg[16];
-	u64 ptem = 0;
+	u32 ptem = 0;
 	int i;
 	int found = 0;
 
@@ -327,7 +327,7 @@ static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool lar
 	kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
 }
 
-static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
 	/* In case we only have one of MSR_IR or MSR_DR set, let's put
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index ce1bfb1..2bb67e6 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -77,11 +77,9 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 		kvm_release_pfn_clean(pte->pfn);
 }
 
-void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 _guest_ea, u64 _ea_mask)
+void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
 {
 	int i;
-	u32 guest_ea = _guest_ea;
-	u32 ea_mask = _ea_mask;
 
 	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%x & 0x%x\n",
 		    vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
@@ -127,7 +125,7 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
 	}
 }
 
-void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end)
+void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
 {
 	int i;
 
@@ -265,7 +263,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
 	/* Get host physical address for gpa */
 	hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
 	if (kvm_is_error_hva(hpaddr)) {
-		printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n",
+		printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n",
 				 orig_pte->eaddr);
 		return -EINVAL;
 	}
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 12e4c97..a9241e9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -232,7 +232,7 @@ do_second:
 			}
 
 			dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
-				"-> 0x%llx\n",
+				"-> 0x%lx\n",
 				eaddr, avpn, gpte->vpage, gpte->raddr);
 			found = true;
 			break;
@@ -439,7 +439,7 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
 	kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
 }
 
-static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
 	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index b230154..41af12f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -62,7 +62,7 @@ static void invalidate_pte(struct hpte_cache *pte)
 		kvm_release_pfn_clean(pte->pfn);
 }
 
-void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 guest_ea, u64 ea_mask)
+void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
 {
 	int i;
 
@@ -110,7 +110,7 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
 	}
 }
 
-void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end)
+void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
 {
 	int i;
 
@@ -216,7 +216,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
 	/* Get host physical address for gpa */
 	hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
 	if (kvm_is_error_hva(hpaddr)) {
-		printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n", orig_pte->eaddr);
+		printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n", orig_pte->eaddr);
 		return -EINVAL;
 	}
 	hpaddr <<= PAGE_SHIFT;
-- 
1.6.0.2


* [PATCH 2/9] KVM: PPC: Make Performance Counters work
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When we get a performance counter interrupt, we need to route it on to the
Linux handler once we are out of guest context. We also need to tell our
handling code that this particular interrupt doesn't need any further
treatment.

So let's add those two bits, making perf work while a KVM guest is running.
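
In rough C terms, a sketch of what the assembly hunk below does (the real
code is the book3s_interrupts.S change in this patch; call_linux_handler
is a label there, shown here as a function for readability):

  /* Sketch: exit reasons that must be forwarded to the host kernel. */
  switch (exit_nr) {
  case BOOK3S_INTERRUPT_EXTERNAL:
  case BOOK3S_INTERRUPT_DECREMENTER:
  case BOOK3S_INTERRUPT_PERFMON:      /* new: let perf see its interrupt */
          call_linux_handler();
          break;
  default:
          break;                      /* handled by KVM itself */
  }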

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c            |    3 +++
 arch/powerpc/kvm/book3s_interrupts.S |    2 ++
 2 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index a7de709..a03163b 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -872,6 +872,9 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		vcpu->stat.ext_intr_exits++;
 		r = RESUME_GUEST;
 		break;
+	case BOOK3S_INTERRUPT_PERFMON:
+		r = RESUME_GUEST;
+		break;
 	case BOOK3S_INTERRUPT_PROGRAM:
 	{
 		enum emulation_result er;
diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index f5b3358..e486193 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -228,6 +228,8 @@ no_dcbz32_off:
 	beq	call_linux_handler
 	cmpwi	r12, BOOK3S_INTERRUPT_DECREMENTER
 	beq	call_linux_handler
+	cmpwi	r12, BOOK3S_INTERRUPT_PERFMON
+	beq	call_linux_handler
 
 	/* Back to EE=1 */
 	mtmsr	r6
-- 
1.6.0.2


* [PATCH 3/9] KVM: PPC: Improve split mode
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When in split mode, instruction relocation and data relocation are not equal.

So far we implemented this mode by reserving a special pseudo-VSID for the
two cases and flushing all PTEs when going into split mode, which is slow.

Unfortunately, 32-bit Linux and Mac OS X use split mode extensively. So as
not to slow things down too much, I came up with a different idea: mark
split mode with a bit in the VSID and then treat it like any other segment.

This means we can just flush the shadow segment cache, but keep the PTEs
intact. I verified that this works with ppc32 Linux and Mac OS X 10.4
guests and does speed them up.
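
Conceptually, the scheme boils down to the following (a sketch using the
#defines introduced by this patch; the real logic is in the esid_to_vsid
hunks below):

  /* Sketch: derive the shadow VSID when MSR_IR != MSR_DR. The flag
   * bit marks which half runs in real mode; the guest VSID survives,
   * so the shadow PTEs hashed from it stay valid across the switch. */
  u64 vsid = gvsid;                    /* from the guest SR/SLB entry */

  if ((msr & (MSR_DR|MSR_IR)) == MSR_IR)
          vsid |= VSID_REAL_IR;        /* data side stays real mode   */
  else if ((msr & (MSR_DR|MSR_IR)) == MSR_DR)
          vsid |= VSID_REAL_DR;        /* fetch side stays real mode  */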

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    9 ++++-----
 arch/powerpc/kvm/book3s.c             |   28 ++++++++++++++--------------
 arch/powerpc/kvm/book3s_32_mmu.c      |   21 +++++++++++++--------
 arch/powerpc/kvm/book3s_64_mmu.c      |   27 +++++++++++++++------------
 4 files changed, 46 insertions(+), 39 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 5d3bd0c..6f74d93 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -100,11 +100,10 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL_DR	0x7ffffffffff00000ULL
-#define VSID_REAL_IR	0x7fffffffffe00000ULL
-#define VSID_SPLIT_MASK	0x7fffffffffe00000ULL
-#define VSID_REAL	0x7fffffffffc00000ULL
-#define VSID_BAT	0x7fffffffffb00000ULL
+#define VSID_REAL	0x1fffffffffc00000ULL
+#define VSID_BAT	0x1fffffffffb00000ULL
+#define VSID_REAL_DR	0x2000000000000000ULL
+#define VSID_REAL_IR	0x4000000000000000ULL
 #define VSID_PR		0x8000000000000000ULL
 
 extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong ea, ulong ea_mask);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index a03163b..91dc42d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -147,16 +147,8 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 		}
 	}
 
-	if (((vcpu->arch.msr & (MSR_IR|MSR_DR)) != (old_msr & (MSR_IR|MSR_DR))) ||
-	    (vcpu->arch.msr & MSR_PR) != (old_msr & MSR_PR)) {
-		bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
-		bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
-
-		/* Flush split mode PTEs */
-		if (dr != ir)
-			kvmppc_mmu_pte_vflush(vcpu, VSID_SPLIT_MASK,
-					      VSID_SPLIT_MASK);
-
+	if ((vcpu->arch.msr & (MSR_PR|MSR_IR|MSR_DR)) !=
+		   (old_msr & (MSR_PR|MSR_IR|MSR_DR))) {
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
 	}
@@ -534,6 +526,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	bool is_mmio = false;
 	bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
 	bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
+	u64 vsid;
 
 	relocated = data ? dr : ir;
 
@@ -551,13 +544,20 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
 	case 0:
-		pte.vpage |= VSID_REAL;
+		pte.vpage |= ((u64)VSID_REAL << (SID_SHIFT - 12));
 		break;
 	case MSR_DR:
-		pte.vpage |= VSID_REAL_DR;
-		break;
 	case MSR_IR:
-		pte.vpage |= VSID_REAL_IR;
+		vcpu->arch.mmu.esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
+
+		if ((vcpu->arch.msr & (MSR_DR|MSR_IR)) == MSR_DR)
+			pte.vpage |= ((u64)VSID_REAL_DR << (SID_SHIFT - 12));
+		else
+			pte.vpage |= ((u64)VSID_REAL_IR << (SID_SHIFT - 12));
+		pte.vpage |= vsid;
+
+		if (vsid == -1)
+			page_found = -EINVAL;
 		break;
 	}
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 33186b7..0b10503 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -330,30 +330,35 @@ static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool lar
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
+	ulong ea = esid << SID_SHIFT;
+	struct kvmppc_sr *sr;
+	u64 gvsid = esid;
+
+	if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
+		sr = find_sr(to_book3s(vcpu), ea);
+		if (sr->valid)
+			gvsid = sr->vsid;
+	}
+
 	/* In case we only have one of MSR_IR or MSR_DR set, let's put
 	   that in the real-mode context (and hope RM doesn't access
 	   high memory) */
 	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
 	case 0:
-		*vsid = (VSID_REAL >> 16) | esid;
+		*vsid = VSID_REAL | esid;
 		break;
 	case MSR_IR:
-		*vsid = (VSID_REAL_IR >> 16) | esid;
+		*vsid = VSID_REAL_IR | gvsid;
 		break;
 	case MSR_DR:
-		*vsid = (VSID_REAL_DR >> 16) | esid;
+		*vsid = VSID_REAL_DR | gvsid;
 		break;
 	case MSR_DR|MSR_IR:
-	{
-		ulong ea = esid << SID_SHIFT;
-		struct kvmppc_sr *sr = find_sr(to_book3s(vcpu), ea);
-
 		if (!sr->valid)
 			return -1;
 
 		*vsid = sr->vsid;
 		break;
-	}
 	default:
 		BUG();
 	}
diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index a9241e9..612de6e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -442,29 +442,32 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
 static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
+	ulong ea = esid << SID_SHIFT;
+	struct kvmppc_slb *slb;
+	u64 gvsid = esid;
+
+	if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
+		slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
+		if (slb)
+			gvsid = slb->vsid;
+	}
+
 	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
 	case 0:
-		*vsid = (VSID_REAL >> 16) | esid;
+		*vsid = VSID_REAL | esid;
 		break;
 	case MSR_IR:
-		*vsid = (VSID_REAL_IR >> 16) | esid;
+		*vsid = VSID_REAL_IR | gvsid;
 		break;
 	case MSR_DR:
-		*vsid = (VSID_REAL_DR >> 16) | esid;
+		*vsid = VSID_REAL_DR | gvsid;
 		break;
 	case MSR_DR|MSR_IR:
-	{
-		ulong ea;
-		struct kvmppc_slb *slb;
-		ea = esid << SID_SHIFT;
-		slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
-		if (slb)
-			*vsid = slb->vsid;
-		else
+		if (!slb)
 			return -ENOENT;
 
+		*vsid = gvsid;
 		break;
-	}
 	default:
 		BUG();
 		break;
-- 
1.6.0.2


* [PATCH 4/9] KVM: PPC: Make Alignment interrupts work again
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

In the process of merging Book3S_32 and 64 I somehow ended up with the
alignment interrupt handler consuming last_inst, while the fetching code
never actually fetched it. So we ended up with stale last_inst values.

Let's just enable last_inst fetching for alignment interrupts too.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_segment.S |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
index 778e3fc..ede47fd 100644
--- a/arch/powerpc/kvm/book3s_segment.S
+++ b/arch/powerpc/kvm/book3s_segment.S
@@ -196,6 +196,8 @@ kvmppc_handler_trampoline_exit:
 	beq	ld_last_inst
 	cmpwi	r12, BOOK3S_INTERRUPT_PROGRAM
 	beq	ld_last_inst
+	cmpwi	r12, BOOK3S_INTERRUPT_ALIGNMENT
+	beq-	ld_last_inst
 
 	b	no_ld_last_inst
 
-- 
1.6.0.2



* [PATCH 5/9] KVM: PPC: Be more informative on BUG
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have a condition in the ppc64 host MMU code that should never occur.
Unfortunately, it just happened to me, and I was rather puzzled as to why,
because BUG_ON doesn't tell me anything useful.

So let's add some more debug output in case this goes wrong. Also change
BUG to WARN, since I don't want to reboot every time I mess something up.

Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - make WARN bail out
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 41af12f..5545c45 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -231,10 +231,16 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
 	vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
 	map = find_sid_vsid(vcpu, vsid);
 	if (!map) {
-		kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr);
+		ret = kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr);
+		WARN_ON(ret < 0);
 		map = find_sid_vsid(vcpu, vsid);
 	}
-	BUG_ON(!map);
+	if (!map) {
+		printk(KERN_ERR "KVM: Segment map for 0x%llx (0x%lx) failed\n",
+				vsid, orig_pte->eaddr);
+		WARN_ON(true);
+		return -EINVAL;
+	}
 
 	vsid = map->host_vsid;
 	va = hpt_va(orig_pte->eaddr, vsid, MMU_SEGSIZE_256M);
-- 
1.6.0.2


* [PATCH 6/9] KVM: PPC: Set VSID_PR also for Book3S_64
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Book3S_64 didn't set VSID_PR when we're running with PR=1. This led to
pretty bad behavior when searching for the shadow segment, as part of the
code relied on VSID_PR being set.

This patch fixes booting Book3S_64 guests.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu.c b/arch/powerpc/kvm/book3s_64_mmu.c
index 612de6e..4025ea2 100644
--- a/arch/powerpc/kvm/book3s_64_mmu.c
+++ b/arch/powerpc/kvm/book3s_64_mmu.c
@@ -473,6 +473,9 @@ static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 		break;
 	}
 
+	if (vcpu->arch.msr & MSR_PR)
+		*vsid |= VSID_PR;
+
 	return 0;
 }
 
-- 
1.6.0.2



* [PATCH 7/9] KVM: PPC: Fix Book3S_64 Host MMU debug output
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have some debug output in Book3S_64. Some of it was invalid though, and
parts didn't even compile because they accessed incorrect variables.

So let's fix that up, making debugging more fun again.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   20 ++++++++++++--------
 1 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 5545c45..e4b5744 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -48,8 +48,8 @@
 
 static void invalidate_pte(struct hpte_cache *pte)
 {
-	dprintk_mmu("KVM: Flushing SPT %d: 0x%llx (0x%llx) -> 0x%llx\n",
-		    i, pte->pte.eaddr, pte->pte.vpage, pte->host_va);
+	dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
+		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
 
 	ppc_md.hpte_invalidate(pte->slot, pte->host_va,
 			       MMU_PAGE_4K, MMU_SEGSIZE_256M,
@@ -66,7 +66,7 @@ void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
 {
 	int i;
 
-	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%llx & 0x%llx\n",
+	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
 		    vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
 	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
 
@@ -114,8 +114,8 @@ void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
 {
 	int i;
 
-	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
-		    vcpu->arch.hpte_cache_offset, guest_pa, pa_mask);
+	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx & 0x%lx\n",
+		    vcpu->arch.hpte_cache_offset, pa_start, pa_end);
 	BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
 
 	for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
@@ -186,7 +186,7 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
 	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
 	if (map->guest_vsid == gvsid) {
-		dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
+		dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
 			    gvsid, map->host_vsid);
 		return map;
 	}
@@ -198,7 +198,8 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 		return map;
 	}
 
-	dprintk_slb("SLB: Searching 0x%llx -> not found\n", gvsid);
+	dprintk_slb("SLB: Searching %d/%d: 0x%llx -> not found\n",
+		    sid_map_mask, SID_MAP_MASK - sid_map_mask, gvsid);
 	return NULL;
 }
 
@@ -275,7 +276,7 @@ map_again:
 		int hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
 		struct hpte_cache *pte = &vcpu->arch.hpte_cache[hpte_id];
 
-		dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%lx (0x%llx) -> %lx\n",
+		dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
 			    ((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
 			    (rflags & HPTE_R_N) ? '-' : 'x',
 			    orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr);
@@ -331,6 +332,9 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
 	map->guest_vsid = gvsid;
 	map->valid = true;
 
+	dprintk_slb("SLB: New mapping at %d: 0x%llx -> 0x%llx\n",
+		    sid_map_mask, gvsid, map->host_vsid);
+
 	return map;
 }
 
-- 
1.6.0.2



* [PATCH 8/9] KVM: PPC: Find HTAB ourselves
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

For KVM we need to find the location of the HTAB. We can either rely
on internal data structures of the kernel or ask the hardware.

Ben complained about the internal data structure method, so let's switch
to asking the hardware for the HTAB ourselves. Now we're fully
independent :-).
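
For reference, my reading of the 32-bit SDR1 layout that the new code
decodes (the field breakdown is explanatory, taken from the architecture,
not from this patch):

  /* Sketch of the SDR1 decode done in kvmppc_mmu_init() below:
   *   SDR1[HTABORG]  - upper 16 bits: physical base of the HTAB
   *   SDR1[HTABMASK] - low 9 bits: extra hash bits for larger HTABs */
  ulong sdr1, htab;
  u32 htabmask;

  asm ("mfsdr1 %0" : "=r"(sdr1));
  htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0; /* mask applied to the hash */
  htab = (ulong)__va(sdr1 & 0xffff0000);      /* kernel-virtual HTAB base */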

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/ppc_ksyms.c       |    5 -----
 arch/powerpc/kvm/book3s_32_mmu_host.c |   21 +++++++++++++--------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index 2b7c43f..bc9f39d 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -178,11 +178,6 @@ EXPORT_SYMBOL(switch_mmu_context);
 extern long mol_trampoline;
 EXPORT_SYMBOL(mol_trampoline); /* For MOL */
 EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
-
-extern struct hash_pte *Hash;
-extern unsigned long _SDR1;
-EXPORT_SYMBOL_GPL(Hash); /* For KVM */
-EXPORT_SYMBOL_GPL(_SDR1); /* For KVM */
 #ifdef CONFIG_SMP
 extern int mmu_hash_lock;
 EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 2bb67e6..0bb6600 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -54,6 +54,9 @@
 #error Only 32 bit pages are supported for now
 #endif
 
+static ulong htab;
+static u32 htabmask;
+
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	volatile u32 *pteg;
@@ -217,14 +220,11 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 	return NULL;
 }
 
-extern struct hash_pte *Hash;
-extern unsigned long _SDR1;
-
 static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
 				bool primary)
 {
-	u32 page, hash, htabmask;
-	ulong pteg = (ulong)Hash;
+	u32 page, hash;
+	ulong pteg = htab;
 
 	page = (eaddr & ~ESID_MASK) >> 12;
 
@@ -232,13 +232,12 @@ static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
 	if (!primary)
 		hash = ~hash;
 
-	htabmask = ((_SDR1 & 0x1FF) << 16) | 0xFFC0;
 	hash &= htabmask;
 
 	pteg |= hash;
 
-	dprintk_mmu("htab: %p | hash: %x | htabmask: %x | pteg: %lx\n",
-		Hash, hash, htabmask, pteg);
+	dprintk_mmu("htab: %lx | hash: %x | htabmask: %x | pteg: %lx\n",
+		htab, hash, htabmask, pteg);
 
 	return (u32*)pteg;
 }
@@ -453,6 +452,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
 	int err;
+	ulong sdr1;
 
 	err = __init_new_context();
 	if (err < 0)
@@ -474,5 +474,10 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 
 	vcpu3s->vsid_next = vcpu3s->vsid_first;
 
+	/* Remember where the HTAB is */
+	asm ( "mfsdr1 %0" : "=r"(sdr1) );
+	htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0;
+	htab = (ulong)__va(sdr1 & 0xffff0000);
+
 	return 0;
 }
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 8/9] KVM: PPC: Find HTAB ourselves
@ 2010-04-20  0:49     ` Alexander Graf
  0 siblings, 0 replies; 22+ messages in thread
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Benjamin Herrenschmidt

For KVM we need to find the location of the HTAB. We can either rely
on the kernel's internal data structures or ask the hardware.

Ben complained about the internal data structure method, so let's
switch to asking the hardware for the HTAB location ourselves. Now
we're fully independent :-).
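
For reference, here is a minimal sketch (not part of the patch) of
the SDR1 decode the diff below performs; the sample value is made up,
and the field layout is the 32-bit one (HTABORG in the upper 16 bits,
HTABMASK in the low 9 bits):

	ulong sdr1, htab, htabmask;

	sdr1 = 0x00fe01ff;	/* hypothetical sample; real code uses mfsdr1 */

	/* HTABORG: 64k-aligned physical base of the hash table */
	htab = (ulong)__va(sdr1 & 0xffff0000);

	/* HTABMASK widens the PTEG index beyond the minimum 10 hash bits */
	htabmask = ((sdr1 & 0x1ff) << 16) | 0xffc0;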

CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/ppc_ksyms.c       |    5 -----
 arch/powerpc/kvm/book3s_32_mmu_host.c |   21 +++++++++++++--------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index 2b7c43f..bc9f39d 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -178,11 +178,6 @@ EXPORT_SYMBOL(switch_mmu_context);
 extern long mol_trampoline;
 EXPORT_SYMBOL(mol_trampoline); /* For MOL */
 EXPORT_SYMBOL(flush_hash_pages); /* For MOL */
-
-extern struct hash_pte *Hash;
-extern unsigned long _SDR1;
-EXPORT_SYMBOL_GPL(Hash); /* For KVM */
-EXPORT_SYMBOL_GPL(_SDR1); /* For KVM */
 #ifdef CONFIG_SMP
 extern int mmu_hash_lock;
 EXPORT_SYMBOL(mmu_hash_lock); /* For MOL */
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 2bb67e6..0bb6600 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -54,6 +54,9 @@
 #error Only 32 bit pages are supported for now
 #endif
 
+static ulong htab;
+static u32 htabmask;
+
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	volatile u32 *pteg;
@@ -217,14 +220,11 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 	return NULL;
 }
 
-extern struct hash_pte *Hash;
-extern unsigned long _SDR1;
-
 static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
 				bool primary)
 {
-	u32 page, hash, htabmask;
-	ulong pteg = (ulong)Hash;
+	u32 page, hash;
+	ulong pteg = htab;
 
 	page = (eaddr & ~ESID_MASK) >> 12;
 
@@ -232,13 +232,12 @@ static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
 	if (!primary)
 		hash = ~hash;
 
-	htabmask = ((_SDR1 & 0x1FF) << 16) | 0xFFC0;
 	hash &= htabmask;
 
 	pteg |= hash;
 
-	dprintk_mmu("htab: %p | hash: %x | htabmask: %x | pteg: %lx\n",
-		Hash, hash, htabmask, pteg);
+	dprintk_mmu("htab: %lx | hash: %x | htabmask: %x | pteg: %lx\n",
+		htab, hash, htabmask, pteg);
 
 	return (u32*)pteg;
 }
@@ -453,6 +452,7 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
 	int err;
+	ulong sdr1;
 
 	err = __init_new_context();
 	if (err < 0)
@@ -474,5 +474,10 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 
 	vcpu3s->vsid_next = vcpu3s->vsid_first;
 
+	/* Remember where the HTAB is */
+	asm ( "mfsdr1 %0" : "=r"(sdr1) );
+	htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0;
+	htab = (ulong)__va(sdr1 & 0xffff0000);
+
 	return 0;
 }
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 9/9] KVM: PPC: Enable native paired singles
@ 2010-04-20  0:49     ` Alexander Graf
  0 siblings, 0 replies; 22+ messages in thread
From: Alexander Graf @ 2010-04-20  0:49 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When we're on a paired-single-capable host, we can simply enable
paired singles unconditionally and expose them to the guest directly.

This approach breaks when multiple VMs run and use paired singles
concurrently, but it should suffice until we get a proper framework
for it in Linux.
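
As a side note on the (1 << 29) mask used below: PowerPC manuals
number bits from the MSB, and HID2[PSE] on Gekko/Broadway is bit 2 in
that convention, which is bit 29 counted from the LSB of the 32-bit
SPR. A minimal sketch, reusing the SPRN_HID2_GEKKO define from the
patch:

	/* HID2[2] in MSB-first numbering == 1 << (31 - 2) in C */
	#define HID2_PSE	(1 << 29)	/* 0x20000000 */

	/* enable paired-single mode on the host */
	mtspr(SPRN_HID2_GEKKO, mfspr(SPRN_HID2_GEKKO) | HID2_PSE);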

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_asm.h |    1 +
 arch/powerpc/kvm/book3s.c          |   19 +++++++++++++++++++
 arch/powerpc/kvm/book3s_emulate.c  |    5 ++++-
 3 files changed, 24 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h
index 7238c04..c5ea4cd 100644
--- a/arch/powerpc/include/asm/kvm_asm.h
+++ b/arch/powerpc/include/asm/kvm_asm.h
@@ -89,6 +89,7 @@
 #define BOOK3S_HFLAG_DCBZ32			0x1
 #define BOOK3S_HFLAG_SLB			0x2
 #define BOOK3S_HFLAG_PAIRED_SINGLE		0x4
+#define BOOK3S_HFLAG_NATIVE_PS			0x8
 
 #define RESUME_FLAG_NV          (1<<0)  /* Reload guest nonvolatile state? */
 #define RESUME_FLAG_HOST        (1<<1)  /* Resume host? */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 91dc42d..cfc3051 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -344,6 +344,8 @@ void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu)
 
 void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 {
+	u32 host_pvr;
+
 	vcpu->arch.hflags &= ~BOOK3S_HFLAG_SLB;
 	vcpu->arch.pvr = pvr;
 #ifdef CONFIG_PPC_BOOK3S_64
@@ -375,6 +377,23 @@ void kvmppc_set_pvr(struct kvm_vcpu *vcpu, u32 pvr)
 	/* 32 bit Book3S always has 32 byte dcbz */
 	vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
 #endif
+
+	/* On some CPUs we can execute paired single operations natively */
+	asm ( "mfpvr %0" : "=r"(host_pvr));
+	switch (host_pvr) {
+	case 0x00080200:	/* lonestar 2.0 */
+	case 0x00088202:	/* lonestar 2.2 */
+	case 0x70000100:	/* gekko 1.0 */
+	case 0x00080100:	/* gekko 2.0 */
+	case 0x00083203:	/* gekko 2.3a */
+	case 0x00083213:	/* gekko 2.3b */
+	case 0x00083204:	/* gekko 2.4 */
+	case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
+	case 0x00087200:	/* broadway */
+		vcpu->arch.hflags |= BOOK3S_HFLAG_NATIVE_PS;
+		/* Enable HID2.PSE - in case we need it later */
+		mtspr(SPRN_HID2_GEKKO, mfspr(SPRN_HID2_GEKKO) | (1 << 29));
+	}
 }
 
 /* Book3s_32 CPUs always have 32 bytes cache line size, which Linux assumes. To
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 3f7afb5..c85f906 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -365,7 +365,10 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 		case 0x00083213:	/* gekko 2.3b */
 		case 0x00083204:	/* gekko 2.4 */
 		case 0x00083214:	/* gekko 2.4e (8SE) - retail HW2 */
-			if (spr_val & (1 << 29)) { /* HID2.PSE */
+		case 0x00087200:	/* broadway */
+			if (vcpu->arch.hflags & BOOK3S_HFLAG_NATIVE_PS) {
+				/* Native paired singles */
+			} else if (spr_val & (1 << 29)) { /* HID2.PSE */
 				vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
 				kvmppc_giveup_ext(vcpu, MSR_FP);
 			} else {
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/9] Post-PPC32 series v2
@ 2010-04-21  9:44   ` Avi Kivity
  0 siblings, 0 replies; 22+ messages in thread
From: Avi Kivity @ 2010-04-21  9:44 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 04/20/2010 03:49 AM, Alexander Graf wrote:
> While working with the PPC32 host target we finally have I stumbled over
> several things. Thanks to the now possible performance measurements I also
> tracked down split mode as one of the major slowdowns to KVM.
>
> What's left now that slows us down is the normal flushing code that needs
> to move to a table based lookup and instruction emulation. On PPC32 guests
> we waste about 70% of our time on emulating mfmsr, mtmsr, mfsprg, mtsprg
> and friends.
>
> Either way - this patch series deprecates the former performance counter
> and u64 patch.
>
>    

Applied all, thanks.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2010-04-21  9:44 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-04-20  0:49 [PATCH 0/9] Post-PPC32 series v2 Alexander Graf
2010-04-20  0:49 ` [PATCH 4/9] KVM: PPC: Make Alignment interrupts work again Alexander Graf
2010-04-20  0:49 ` [PATCH 6/9] KVM: PPC: Set VSID_PR also for Book3S_64 Alexander Graf
2010-04-20  0:49 ` [PATCH 7/9] KVM: PPC: Fix Book3S_64 Host MMU debug output Alexander Graf
2010-04-20  0:49 ` [PATCH 1/9] KVM: PPC: Convert u64 -> ulong Alexander Graf
2010-04-20  0:49 ` [PATCH 2/9] KVM: PPC: Make Performance Counters work Alexander Graf
2010-04-20  0:49 ` [PATCH 3/9] KVM: PPC: Improve split mode Alexander Graf
2010-04-20  0:49 ` [PATCH 5/9] KVM: PPC: Be more informative on BUG Alexander Graf
2010-04-20  0:49 ` [PATCH 8/9] KVM: PPC: Find HTAB ourselves Alexander Graf
2010-04-20  0:49 ` [PATCH 9/9] KVM: PPC: Enable native paired singles Alexander Graf
2010-04-21  9:44 ` [PATCH 0/9] Post-PPC32 series v2 Avi Kivity
