* [PATCH 00/26] KVM: PPC: Mid-August patch queue
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

Howdy,

This is my local patch queue with the KVM for PPC work that has accumulated
over the last few weeks, plus some last-minute fixes, speedups and debugging
help that I needed for the KVM Forum ;-).

The highlights of this set are:

  - Converted most important debug points to tracepoints
  - Flush fewer PTEs (speedup)
  - Go back to our own hash (less duplicates)
  - Make SRs guest settable (speedup for 32 bit guests)
  - Remove r30/r31 restrictions from PV hooks (speedup!)
  - Fix random breakages
  - Fix random guest stalls
  - 440GP host support (Thanks Hollis!)

Keep in mind that this is the first version that is stable on PPC32 hosts.
All prior versions could occupy segment entries that were already in use and
thus crash your machine :-).

After finally meeting Avi again, we also agreed to give pulls a try. So
here we go - this is my tree online:

    git://github.com/agraf/linux-2.6.git kvm-ppc-next


Have fun with more accurate, faster and less buggy KVM on PowerPC!


Alexander Graf (23):
  KVM: PPC: Move EXIT_DEBUG partially to tracepoints
  KVM: PPC: Move book3s_64 mmu map debug print to trace point
  KVM: PPC: Add tracepoint for generic mmu map
  KVM: PPC: Move pte invalidate debug code to tracepoint
  KVM: PPC: Fix sid map search after flush
  KVM: PPC: Add tracepoints for generic spte flushes
  KVM: PPC: Preload magic page when in kernel mode
  KVM: PPC: Don't flush PTEs on NX/RO hit
  KVM: PPC: Make invalidation code more reliable
  KVM: PPC: Move slb debugging to tracepoints
  KVM: PPC: Revert "KVM: PPC: Use kernel hash function"
  KVM: PPC: Remove unused define
  KVM: PPC: Add feature bitmap for magic page
  KVM: PPC: Move BAT handling code into spr handler
  KVM: PPC: Interpret SR registers on demand
  KVM: PPC: Put segment registers in shared page
  KVM: PPC: Add mtsrin PV code
  KVM: PPC: Make PV mtmsr work with r30 and r31
  KVM: PPC: Update int_pending also on dequeue
  KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31
  KVM: PPC: Force enable nap on KVM
  KVM: PPC: Implement correct SID mapping on Book3s_32
  KVM: PPC: Don't put MSR_POW in MSR

Hollis Blanchard (3):
  KVM: PPC: initialize IVORs in addition to IVPR
  KVM: PPC: fix compilation of "dump tlbs" debug function
  KVM: PPC: allow ppc440gp to pass the compatibility check

 arch/powerpc/include/asm/kvm_book3s.h |   25 ++--
 arch/powerpc/include/asm/kvm_para.h   |    3 +
 arch/powerpc/kernel/asm-offsets.c     |    1 +
 arch/powerpc/kernel/kvm.c             |  144 ++++++++++++++++++---
 arch/powerpc/kernel/kvm_emul.S        |   75 +++++++++--
 arch/powerpc/kvm/44x.c                |    3 +-
 arch/powerpc/kvm/44x_tlb.c            |    1 +
 arch/powerpc/kvm/book3s.c             |   54 ++++----
 arch/powerpc/kvm/book3s_32_mmu.c      |   83 +++++++------
 arch/powerpc/kvm/book3s_32_mmu_host.c |   67 ++++++----
 arch/powerpc/kvm/book3s_64_mmu_host.c |   59 +++------
 arch/powerpc/kvm/book3s_emulate.c     |   48 +++-----
 arch/powerpc/kvm/book3s_mmu_hpte.c    |   38 ++----
 arch/powerpc/kvm/booke.c              |    8 +-
 arch/powerpc/kvm/powerpc.c            |    5 +-
 arch/powerpc/kvm/trace.h              |  230 +++++++++++++++++++++++++++++++++
 16 files changed, 614 insertions(+), 230 deletions(-)



* [PATCH 01/26] KVM: PPC: Move EXIT_DEBUG partially to tracepoints
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

We have a debug printk on every exit that is usually #ifdef'ed out. Using
tracepoints makes a lot more sense here though, as they can be dynamically
enabled.

This patch converts the most commonly used debug printks of EXIT_DEBUG to
tracepoints.
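
Since the events can be toggled at runtime, no rebuild is needed to get the
old debug output back. As a minimal sketch (assuming debugfs is mounted at
/sys/kernel/debug and the events end up in the "kvm" trace system), a
userspace helper to enable the new exit tracepoint could look like this:

    #include <fcntl.h>
    #include <unistd.h>

    /* Enable the kvm_book3s_exit tracepoint; returns 0 on success. */
    static int enable_exit_tracing(void)
    {
            int fd = open("/sys/kernel/debug/tracing/events/kvm/"
                          "kvm_book3s_exit/enable", O_WRONLY);

            if (fd < 0)
                    return -1;
            if (write(fd, "1", 1) != 1) {
                    close(fd);
                    return -1;
            }
            return close(fd);
    }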

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   26 ++++----------------------
 arch/powerpc/kvm/trace.h  |   42 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index eee97b5..f8b9aab 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -17,6 +17,7 @@
 #include <linux/kvm_host.h>
 #include <linux/err.h>
 #include <linux/slab.h>
+#include "trace.h"
 
 #include <asm/reg.h>
 #include <asm/cputable.h>
@@ -35,7 +36,6 @@
 #define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
 
 /* #define EXIT_DEBUG */
-/* #define EXIT_DEBUG_SIMPLE */
 /* #define DEBUG_EXT */
 
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
@@ -105,14 +105,6 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 	kvmppc_giveup_ext(vcpu, MSR_VSX);
 }
 
-#if defined(EXIT_DEBUG)
-static u32 kvmppc_get_dec(struct kvm_vcpu *vcpu)
-{
-	u64 jd = mftb() - vcpu->arch.dec_jiffies;
-	return vcpu->arch.dec - jd;
-}
-#endif
-
 static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 {
 	ulong smsr = vcpu->arch.shared->msr;
@@ -848,16 +840,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	run->ready_for_interrupt_injection = 1;
-#ifdef EXIT_DEBUG
-	printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | dec=0x%x | msr=0x%lx\n",
-		exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
-		kvmppc_get_dec(vcpu), to_svcpu(vcpu)->shadow_srr1);
-#elif defined (EXIT_DEBUG_SIMPLE)
-	if ((exit_nr != 0x900) && (exit_nr != 0x500))
-		printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | msr=0x%lx\n",
-			exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
-			vcpu->arch.shared->msr);
-#endif
+
+	trace_kvm_book3s_exit(exit_nr, vcpu);
 	kvm_resched(vcpu);
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
@@ -1089,9 +1073,7 @@ program_interrupt:
 		}
 	}
 
-#ifdef EXIT_DEBUG
-	printk(KERN_EMERG "KVM exit: vcpu=0x%p pc=0x%lx r=0x%x\n", vcpu, kvmppc_get_pc(vcpu), r);
-#endif
+	trace_kvm_book3s_reenter(r, vcpu);
 
 	return r;
 }
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index a8e8400..56cd162 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -98,6 +98,48 @@ TRACE_EVENT(kvm_gtlb_write,
 		__entry->word1, __entry->word2)
 );
 
+TRACE_EVENT(kvm_book3s_exit,
+	TP_PROTO(unsigned int exit_nr, struct kvm_vcpu *vcpu),
+	TP_ARGS(exit_nr, vcpu),
+
+	TP_STRUCT__entry(
+		__field(	unsigned int,	exit_nr		)
+		__field(	unsigned long,	pc		)
+		__field(	unsigned long,	msr		)
+		__field(	unsigned long,	dar		)
+		__field(	unsigned long,	srr1		)
+	),
+
+	TP_fast_assign(
+		__entry->exit_nr	= exit_nr;
+		__entry->pc		= kvmppc_get_pc(vcpu);
+		__entry->dar		= kvmppc_get_fault_dar(vcpu);
+		__entry->msr		= vcpu->arch.shared->msr;
+		__entry->srr1		= to_svcpu(vcpu)->shadow_srr1;
+	),
+
+	TP_printk("exit=0x%x | pc=0x%lx | msr=0x%lx | dar=0x%lx | srr1=0x%lx",
+		  __entry->exit_nr, __entry->pc, __entry->msr, __entry->dar,
+		  __entry->srr1)
+);
+
+TRACE_EVENT(kvm_book3s_reenter,
+	TP_PROTO(int r, struct kvm_vcpu *vcpu),
+	TP_ARGS(r, vcpu),
+
+	TP_STRUCT__entry(
+		__field(	unsigned int,	r		)
+		__field(	unsigned long,	pc		)
+	),
+
+	TP_fast_assign(
+		__entry->r		= r;
+		__entry->pc		= kvmppc_get_pc(vcpu);
+	),
+
+	TP_printk("reentry r=%d | pc=0x%lx", __entry->r, __entry->pc)
+);
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2


* [PATCH 02/26] KVM: PPC: Move book3s_64 mmu map debug print to trace point
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

This patch moves Book3s MMU debugging over to tracepoints.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   13 +----------
 arch/powerpc/kvm/trace.h              |   34 +++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 672b149..aa516ad 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -28,19 +28,13 @@
 #include <asm/machdep.h>
 #include <asm/mmu_context.h>
 #include <asm/hw_irq.h>
+#include "trace.h"
 
 #define PTE_SIZE 12
 #define VSID_ALL 0
 
-/* #define DEBUG_MMU */
 /* #define DEBUG_SLB */
 
-#ifdef DEBUG_MMU
-#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_mmu(a, ...) do { } while(0)
-#endif
-
 #ifdef DEBUG_SLB
 #define dprintk_slb(a, ...) printk(KERN_INFO a, __VA_ARGS__)
 #else
@@ -156,10 +150,7 @@ map_again:
 	} else {
 		struct hpte_cache *pte = kvmppc_mmu_hpte_cache_next(vcpu);
 
-		dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
-			    ((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
-			    (rflags & HPTE_R_N) ? '-' : 'x',
-			    orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr);
+		trace_kvm_book3s_64_mmu_map(rflags, hpteg, va, hpaddr, orig_pte);
 
 		/* The ppc_md code may give us a secondary entry even though we
 		   asked for a primary. Fix up. */
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 56cd162..3b9169c 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -140,6 +140,40 @@ TRACE_EVENT(kvm_book3s_reenter,
 	TP_printk("reentry r=%d | pc=0x%lx", __entry->r, __entry->pc)
 );
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+TRACE_EVENT(kvm_book3s_64_mmu_map,
+	TP_PROTO(int rflags, ulong hpteg, ulong va, pfn_t hpaddr,
+		 struct kvmppc_pte *orig_pte),
+	TP_ARGS(rflags, hpteg, va, hpaddr, orig_pte),
+
+	TP_STRUCT__entry(
+		__field(	unsigned char,		flag_w		)
+		__field(	unsigned char,		flag_x		)
+		__field(	unsigned long,		eaddr		)
+		__field(	unsigned long,		hpteg		)
+		__field(	unsigned long,		va		)
+		__field(	unsigned long long,	vpage		)
+		__field(	unsigned long,		hpaddr		)
+	),
+
+	TP_fast_assign(
+		__entry->flag_w	= ((rflags & HPTE_R_PP) == 3) ? '-' : 'w';
+		__entry->flag_x	= (rflags & HPTE_R_N) ? '-' : 'x';
+		__entry->eaddr	= orig_pte->eaddr;
+		__entry->hpteg	= hpteg;
+		__entry->va	= va;
+		__entry->vpage	= orig_pte->vpage;
+		__entry->hpaddr	= hpaddr;
+	),
+
+	TP_printk("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx",
+		  __entry->flag_w, __entry->flag_x, __entry->eaddr,
+		  __entry->hpteg, __entry->va, __entry->vpage, __entry->hpaddr)
+);
+
+#endif
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2


* [PATCH 03/26] KVM: PPC: Add tracepoint for generic mmu map
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

This patch moves the generic mmu map debugging over to tracepoints.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |    3 +++
 arch/powerpc/kvm/trace.h           |   29 +++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 02c64ab..ac94bd9 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -21,6 +21,7 @@
 #include <linux/kvm_host.h>
 #include <linux/hash.h>
 #include <linux/slab.h>
+#include "trace.h"
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -66,6 +67,8 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	u64 index;
 
+	trace_kvm_book3s_mmu_map(pte);
+
 	spin_lock(&vcpu->arch.mmu_lock);
 
 	/* Add to ePTE list */
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 3b9169c..ee6ac88 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -174,6 +174,35 @@ TRACE_EVENT(kvm_book3s_64_mmu_map,
 
 #endif
 
+TRACE_EVENT(kvm_book3s_mmu_map,
+	TP_PROTO(struct hpte_cache *pte),
+	TP_ARGS(pte),
+
+	TP_STRUCT__entry(
+		__field(	u64,		host_va		)
+		__field(	u64,		pfn		)
+		__field(	ulong,		eaddr		)
+		__field(	u64,		vpage		)
+		__field(	ulong,		raddr		)
+		__field(	int,		flags		)
+	),
+
+	TP_fast_assign(
+		__entry->host_va	= pte->host_va;
+		__entry->pfn		= pte->pfn;
+		__entry->eaddr		= pte->pte.eaddr;
+		__entry->vpage		= pte->pte.vpage;
+		__entry->raddr		= pte->pte.raddr;
+		__entry->flags		= (pte->pte.may_read ? 0x4 : 0) |
+					  (pte->pte.may_write ? 0x2 : 0) |
+					  (pte->pte.may_execute ? 0x1 : 0);
+	),
+
+	TP_printk("Map: hva=%llx pfn=%llx ea=%lx vp=%llx ra=%lx [%x]",
+		  __entry->host_va, __entry->pfn, __entry->eaddr,
+		  __entry->vpage, __entry->raddr, __entry->flags)
+);
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2


* [PATCH 04/26] KVM: PPC: Move pte invalidate debug code to tracepoint
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

This patch moves the SPTE flush debug printk over to a tracepoint.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |    3 +--
 arch/powerpc/kvm/trace.h           |   29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index ac94bd9..3397152 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -104,8 +104,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 	if (hlist_unhashed(&pte->list_pte))
 		return;
 
-	dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
-		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
+	trace_kvm_book3s_mmu_invalidate(pte);
 
 	/* Different for 32 and 64 bit */
 	kvmppc_mmu_invalidate_pte(vcpu, pte);
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index ee6ac88..4ab1c72 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -203,6 +203,35 @@ TRACE_EVENT(kvm_book3s_mmu_map,
 		  __entry->vpage, __entry->raddr, __entry->flags)
 );
 
+TRACE_EVENT(kvm_book3s_mmu_invalidate,
+	TP_PROTO(struct hpte_cache *pte),
+	TP_ARGS(pte),
+
+	TP_STRUCT__entry(
+		__field(	u64,		host_va		)
+		__field(	u64,		pfn		)
+		__field(	ulong,		eaddr		)
+		__field(	u64,		vpage		)
+		__field(	ulong,		raddr		)
+		__field(	int,		flags		)
+	),
+
+	TP_fast_assign(
+		__entry->host_va	= pte->host_va;
+		__entry->pfn		= pte->pfn;
+		__entry->eaddr		= pte->pte.eaddr;
+		__entry->vpage		= pte->pte.vpage;
+		__entry->raddr		= pte->pte.raddr;
+		__entry->flags		= (pte->pte.may_read ? 0x4 : 0) |
+					  (pte->pte.may_write ? 0x2 : 0) |
+					  (pte->pte.may_execute ? 0x1 : 0);
+	),
+
+	TP_printk("Flush: hva=%llx pfn=%llx ea=%lx vp=%llx ra=%lx [%x]",
+		  __entry->host_va, __entry->pfn, __entry->eaddr,
+		  __entry->vpage, __entry->raddr, __entry->flags)
+);
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2



* [PATCH 05/26] KVM: PPC: Fix sid map search after flush
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

After a flush, the sid map contained lots of entries with 0 for both their
gvsid and hvsid values. Unfortunately, 0 can be a real value the guest
searches for when looking up a vsid, so such a lookup would incorrectly match
the host's 0 hvsid mapping, which doesn't belong to our sid space.

So let's also check the valid bit, which indicates that the entry we're
looking at actually contains useful data.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index aa516ad..ebb1b5d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -65,14 +65,14 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 
 	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
 	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
-	if (map->guest_vsid == gvsid) {
+	if (map->valid && (map->guest_vsid == gvsid)) {
 		dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
 			    gvsid, map->host_vsid);
 		return map;
 	}
 
 	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
-	if (map->guest_vsid == gvsid) {
+	if (map->valid && (map->guest_vsid == gvsid)) {
 		dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
 			    gvsid, map->host_vsid);
 		return map;
-- 
1.6.0.2


* [PATCH 06/26] KVM: PPC: Add tracepoints for generic spte flushes
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

The different ways of flushing shadow PTEs each have their own debug prints
which use stupid old printk.

Let's move them to tracepoints, making them more easily available, faster and
possible to activate on demand.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |   18 +++---------------
 arch/powerpc/kvm/trace.h           |   23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 3397152..bd6a767 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -31,14 +31,6 @@
 
 #define PTE_SIZE	12
 
-/* #define DEBUG_MMU */
-
-#ifdef DEBUG_MMU
-#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_mmu(a, ...) do { } while(0)
-#endif
-
 static struct kmem_cache *hpte_cache;
 
 static inline u64 kvmppc_mmu_hash_pte(u64 eaddr)
@@ -186,9 +178,7 @@ static void kvmppc_mmu_pte_flush_long(struct kvm_vcpu *vcpu, ulong guest_ea)
 
 void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
 {
-	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
-		    vcpu->arch.hpte_cache_count, guest_ea, ea_mask);
-
+	trace_kvm_book3s_mmu_flush("", vcpu, guest_ea, ea_mask);
 	guest_ea &= ea_mask;
 
 	switch (ea_mask) {
@@ -251,8 +241,7 @@ static void kvmppc_mmu_pte_vflush_long(struct kvm_vcpu *vcpu, u64 guest_vp)
 
 void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
 {
-	dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
-		    vcpu->arch.hpte_cache_count, guest_vp, vp_mask);
+	trace_kvm_book3s_mmu_flush("v", vcpu, guest_vp, vp_mask);
 	guest_vp &= vp_mask;
 
 	switch(vp_mask) {
@@ -274,8 +263,7 @@ void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
 	struct hpte_cache *pte;
 	int i;
 
-	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx - 0x%lx\n",
-		    vcpu->arch.hpte_cache_count, pa_start, pa_end);
+	trace_kvm_book3s_mmu_flush("p", vcpu, pa_start, pa_end);
 
 	rcu_read_lock();
 
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 4ab1c72..df15d02 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -232,6 +232,29 @@ TRACE_EVENT(kvm_book3s_mmu_invalidate,
 		  __entry->vpage, __entry->raddr, __entry->flags)
 );
 
+TRACE_EVENT(kvm_book3s_mmu_flush,
+	TP_PROTO(const char *type, struct kvm_vcpu *vcpu, unsigned long long p1,
+		 unsigned long long p2),
+	TP_ARGS(type, vcpu, p1, p2),
+
+	TP_STRUCT__entry(
+		__field(	int,			count		)
+		__field(	unsigned long long,	p1		)
+		__field(	unsigned long long,	p2		)
+		__field(	const char *,		type		)
+	),
+
+	TP_fast_assign(
+		__entry->count		= vcpu->arch.hpte_cache_count;
+		__entry->p1		= p1;
+		__entry->p2		= p2;
+		__entry->type		= type;
+	),
+
+	TP_printk("Flush %d %sPTEs: %llx - %llx",
+		  __entry->count, __entry->type, __entry->p1, __entry->p2)
+);
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2


* [PATCH 07/26] KVM: PPC: Preload magic page when in kernel mode
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

When the guest jumps into kernel mode and has the magic page mapped, there's a
very high chance that it will also use it. So let's detect that scenario and
map the segment accordingly.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index f8b9aab..b3c1dde 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -145,6 +145,16 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 		   (old_msr & (MSR_PR|MSR_IR|MSR_DR))) {
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+
+		/* Preload magic page segment when in kernel mode */
+		if (!(msr & MSR_PR) && vcpu->arch.magic_page_pa) {
+			struct kvm_vcpu_arch *a = &vcpu->arch;
+
+			if (msr & MSR_DR)
+				kvmppc_mmu_map_segment(vcpu, a->magic_page_ea);
+			else
+				kvmppc_mmu_map_segment(vcpu, a->magic_page_pa);
+		}
 	}
 
 	/* Preload FPU if it's enabled */
-- 
1.6.0.2


* [PATCH 08/26] KVM: PPC: Don't flush PTEs on NX/RO hit
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

When hitting a no-execute or read-only data/instruction storage interrupt, we
were flushing the respective PTE to make sure it gets properly overwritten
next time.

According to the spec, this is unnecessary though. The guest issues a tlbie
anyway, so we're safe to just keep the PTE around and let the guest remove it
manually, saving us a flush.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b3c1dde..3e017da 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -885,7 +885,6 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			vcpu->arch.shared->msr |=
 				to_svcpu(vcpu)->shadow_srr1 & 0x58000000;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
@@ -911,7 +910,6 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			vcpu->arch.shared->dar = dar;
 			vcpu->arch.shared->dsisr = to_svcpu(vcpu)->fault_dsisr;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, dar, ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
-- 
1.6.0.2


* [PATCH 09/26] KVM: PPC: Make invalidation code more reliable
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

There is a race condition in the pte invalidation code path: we can't be sure
whether a pte was already invalidated, because the check happened outside the
lock. So let's move the spin lock around to get rid of the race.
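
To illustrate the window this closes, consider two CPUs invalidating the same
pte; a sketch (not code from this patch), with a double pfn release as one
possible symptom:

    /*
     *  CPU0                             CPU1
     *  invalidate_pte(pte)              invalidate_pte(pte)
     *    hlist_unhashed()? -> false       hlist_unhashed()? -> false
     *    lock; unhash; unlock             lock; unhash; unlock
     *    kvm_release_pfn_*(pte->pfn)      kvm_release_pfn_*(pte->pfn)
     *                                       ^ releases the same pfn twice
     *
     *  With the check done under mmu_lock, the second caller sees the
     *  pte already unhashed and bails out before touching it again.
     */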

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index bd6a767..79751d8 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -92,10 +92,6 @@ static void free_pte_rcu(struct rcu_head *head)
 
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
-	/* pte already invalidated? */
-	if (hlist_unhashed(&pte->list_pte))
-		return;
-
 	trace_kvm_book3s_mmu_invalidate(pte);
 
 	/* Different for 32 and 64 bit */
@@ -103,18 +99,24 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 
 	spin_lock(&vcpu->arch.mmu_lock);
 
+	/* pte already invalidated in between? */
+	if (hlist_unhashed(&pte->list_pte)) {
+		spin_unlock(&vcpu->arch.mmu_lock);
+		return;
+	}
+
 	hlist_del_init_rcu(&pte->list_pte);
 	hlist_del_init_rcu(&pte->list_pte_long);
 	hlist_del_init_rcu(&pte->list_vpte);
 	hlist_del_init_rcu(&pte->list_vpte_long);
 
-	spin_unlock(&vcpu->arch.mmu_lock);
-
 	if (pte->pte.may_write)
 		kvm_release_pfn_dirty(pte->pfn);
 	else
 		kvm_release_pfn_clean(pte->pfn);
 
+	spin_unlock(&vcpu->arch.mmu_lock);
+
 	vcpu->arch.hpte_cache_count--;
 	call_rcu(&pte->rcu_head, free_pte_rcu);
 }
-- 
1.6.0.2



* [PATCH 10/26] KVM: PPC: Move slb debugging to tracepoints
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

This patch moves the shadow SLB debugging printks over to tracepoints.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   22 ++--------
 arch/powerpc/kvm/trace.h              |   73 +++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index ebb1b5d..321c931 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -33,14 +33,6 @@
 #define PTE_SIZE 12
 #define VSID_ALL 0
 
-/* #define DEBUG_SLB */
-
-#ifdef DEBUG_SLB
-#define dprintk_slb(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_slb(a, ...) do { } while(0)
-#endif
-
 void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	ppc_md.hpte_invalidate(pte->slot, pte->host_va,
@@ -66,20 +58,17 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
 	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
 	if (map->valid && (map->guest_vsid == gvsid)) {
-		dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
-			    gvsid, map->host_vsid);
+		trace_kvm_book3s_slb_found(gvsid, map->host_vsid);
 		return map;
 	}
 
 	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
 	if (map->valid && (map->guest_vsid == gvsid)) {
-		dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
-			    gvsid, map->host_vsid);
+		trace_kvm_book3s_slb_found(gvsid, map->host_vsid);
 		return map;
 	}
 
-	dprintk_slb("SLB: Searching %d/%d: 0x%llx -> not found\n",
-		    sid_map_mask, SID_MAP_MASK - sid_map_mask, gvsid);
+	trace_kvm_book3s_slb_fail(sid_map_mask, gvsid);
 	return NULL;
 }
 
@@ -205,8 +194,7 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
 	map->guest_vsid = gvsid;
 	map->valid = true;
 
-	dprintk_slb("SLB: New mapping at %d: 0x%llx -> 0x%llx\n",
-		    sid_map_mask, gvsid, map->host_vsid);
+	trace_kvm_book3s_slb_map(sid_map_mask, gvsid, map->host_vsid);
 
 	return map;
 }
@@ -278,7 +266,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
 	to_svcpu(vcpu)->slb[slb_index].esid = slb_esid;
 	to_svcpu(vcpu)->slb[slb_index].vsid = slb_vsid;
 
-	dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid);
+	trace_kvm_book3s_slbmte(slb_vsid, slb_esid);
 
 	return 0;
 }
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index df15d02..705c63d 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -255,6 +255,79 @@ TRACE_EVENT(kvm_book3s_mmu_flush,
 		  __entry->count, __entry->type, __entry->p1, __entry->p2)
 );
 
+TRACE_EVENT(kvm_book3s_slb_found,
+	TP_PROTO(unsigned long long gvsid, unsigned long long hvsid),
+	TP_ARGS(gvsid, hvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned long long,	gvsid		)
+		__field(	unsigned long long,	hvsid		)
+	),
+
+	TP_fast_assign(
+		__entry->gvsid		= gvsid;
+		__entry->hvsid		= hvsid;
+	),
+
+	TP_printk("%llx -> %llx", __entry->gvsid, __entry->hvsid)
+);
+
+TRACE_EVENT(kvm_book3s_slb_fail,
+	TP_PROTO(u16 sid_map_mask, unsigned long long gvsid),
+	TP_ARGS(sid_map_mask, gvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned short,		sid_map_mask	)
+		__field(	unsigned long long,	gvsid		)
+	),
+
+	TP_fast_assign(
+		__entry->sid_map_mask	= sid_map_mask;
+		__entry->gvsid		= gvsid;
+	),
+
+	TP_printk("%x/%x: %llx", __entry->sid_map_mask,
+		  SID_MAP_MASK - __entry->sid_map_mask, __entry->gvsid)
+);
+
+TRACE_EVENT(kvm_book3s_slb_map,
+	TP_PROTO(u16 sid_map_mask, unsigned long long gvsid,
+		 unsigned long long hvsid),
+	TP_ARGS(sid_map_mask, gvsid, hvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned short,		sid_map_mask	)
+		__field(	unsigned long long,	guest_vsid	)
+		__field(	unsigned long long,	host_vsid	)
+	),
+
+	TP_fast_assign(
+		__entry->sid_map_mask	= sid_map_mask;
+		__entry->guest_vsid	= gvsid;
+		__entry->host_vsid	= hvsid;
+	),
+
+	TP_printk("%x: %llx -> %llx", __entry->sid_map_mask,
+		  __entry->guest_vsid, __entry->host_vsid)
+);
+
+TRACE_EVENT(kvm_book3s_slbmte,
+	TP_PROTO(u64 slb_vsid, u64 slb_esid),
+	TP_ARGS(slb_vsid, slb_esid),
+
+	TP_STRUCT__entry(
+		__field(	u64,	slb_vsid	)
+		__field(	u64,	slb_esid	)
+	),
+
+	TP_fast_assign(
+		__entry->slb_vsid	= slb_vsid;
+		__entry->slb_esid	= slb_esid;
+	),
+
+	TP_printk("%llx, %llx", __entry->slb_vsid, __entry->slb_esid)
+);
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2


* [PATCH 11/26] KVM: PPC: Revert "KVM: PPC: Use kernel hash function"
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

It turns out the in-kernel hash function is suboptimal for our hash inputs,
where every bit is significant. So let's revert to the original hash
functions.

This reverts commit 05340ab4f9a6626f7a2e8f9fe5397c61d494f445.
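
The reverted-to hash is just an unrolled XOR fold. As a sketch, the
equivalent loop form (assuming the SID_MAP_BITS and SID_MAP_MASK definitions
from kvm_book3s.h) would be:

    static u16 sid_hash_fold(u64 gvsid)
    {
            u16 hash = 0;
            int i;

            /* XOR together SID_MAP_BITS-wide chunks of the input, so
             * every input bit influences the resulting bucket index. */
            for (i = 0; i < 8; i++)
                    hash ^= (gvsid >> (SID_MAP_BITS * i)) & SID_MAP_MASK;

            return hash;
    }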

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_32_mmu_host.c |   10 ++++++++--
 arch/powerpc/kvm/book3s_64_mmu_host.c |   11 +++++++++--
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 343452c..57dddeb 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -19,7 +19,6 @@
  */
 
 #include <linux/kvm_host.h>
-#include <linux/hash.h>
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -77,7 +76,14 @@ void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
  * a hash, so we don't waste cycles on looping */
 static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
 {
-	return hash_64(gvsid, SID_MAP_BITS);
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
 }
 
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 321c931..e7c4d00 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -20,7 +20,6 @@
  */
 
 #include <linux/kvm_host.h>
-#include <linux/hash.h>
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -44,9 +43,17 @@ void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
  * a hash, so we don't waste cycles on looping */
 static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
 {
-	return hash_64(gvsid, SID_MAP_BITS);
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
 }
 
+
 static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 {
 	struct kvmppc_sid_map *map;
-- 
1.6.0.2


* [PATCH 12/26] KVM: PPC: Remove unused define
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

The define VSID_ALL is unused. Let's remove it.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index e7c4d00..4040c8d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -30,7 +30,6 @@
 #include "trace.h"
 
 #define PTE_SIZE 12
-#define VSID_ALL 0
 
 void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
-- 
1.6.0.2


* [PATCH 13/26] KVM: PPC: Add feature bitmap for magic page
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

We will soon add SR PV support to the shared page, so we need some
infrastructure that allows the guest to query which features KVM exports.

This patch adds a second return value to the magic page mapping hypercall
that indicates to the guest which features are available.
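
On the guest side this is consumed roughly as follows (a sketch of the
kvm_use_magic_page() flow from this patch, not a complete function; the
KVM_MAGIC_FEAT_SR check is only an example of a later consumer):

    u32 features;

    /* kvm_map_magic_page() stores the hypercall's second return
     * value (r4) into *features on each CPU. */
    on_each_cpu(kvm_map_magic_page, &features, 1);

    if (features & KVM_MAGIC_FEAT_SR) {
            /* host exposes SRs through the shared page */
    }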

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_para.h |    2 ++
 arch/powerpc/kernel/kvm.c           |   21 +++++++++++++++------
 arch/powerpc/kvm/powerpc.c          |    5 ++++-
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 7438ab3..43c1b22 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -47,6 +47,8 @@ struct kvm_vcpu_arch_shared {
 
 #define KVM_FEATURE_MAGIC_PAGE	1
 
+#define KVM_MAGIC_FEAT_SR	(1 << 0)
+
 #ifdef __KERNEL__
 
 #ifdef CONFIG_KVM_GUEST
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index e936817..f48144f 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -267,12 +267,20 @@ static void kvm_patch_ins_wrteei(u32 *inst)
 
 static void kvm_map_magic_page(void *data)
 {
-	kvm_hypercall2(KVM_HC_PPC_MAP_MAGIC_PAGE,
-		       KVM_MAGIC_PAGE,  /* Physical Address */
-		       KVM_MAGIC_PAGE); /* Effective Address */
+	u32 *features = data;
+
+	ulong in[8];
+	ulong out[8];
+
+	in[0] = KVM_MAGIC_PAGE;
+	in[1] = KVM_MAGIC_PAGE;
+
+	kvm_hypercall(in, out, HC_VENDOR_KVM | KVM_HC_PPC_MAP_MAGIC_PAGE);
+
+	*features = out[0];
 }
 
-static void kvm_check_ins(u32 *inst)
+static void kvm_check_ins(u32 *inst, u32 features)
 {
 	u32 _inst = *inst;
 	u32 inst_no_rt = _inst & ~KVM_MASK_RT;
@@ -368,9 +376,10 @@ static void kvm_use_magic_page(void)
 	u32 *p;
 	u32 *start, *end;
 	u32 tmp;
+	u32 features;
 
 	/* Tell the host to map the magic page to -4096 on all CPUs */
-	on_each_cpu(kvm_map_magic_page, NULL, 1);
+	on_each_cpu(kvm_map_magic_page, &features, 1);
 
 	/* Quick self-test to see if the mapping works */
 	if (__get_user(tmp, (u32*)KVM_MAGIC_PAGE)) {
@@ -383,7 +392,7 @@ static void kvm_use_magic_page(void)
 	end = (void*)_etext;
 
 	for (p = start; p < end; p++)
-		kvm_check_ins(p);
+		kvm_check_ins(p, features);
 
 	printk(KERN_INFO "KVM: Live patching for a fast VM %s\n",
 			 kvm_patching_worked ? "worked" : "failed");
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6a53a3f..496d7a5 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -66,6 +66,8 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 		vcpu->arch.magic_page_pa = param1;
 		vcpu->arch.magic_page_ea = param2;
 
+		r2 = 0;
+
 		r = HC_EV_SUCCESS;
 		break;
 	}
@@ -76,13 +78,14 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 #endif
 
 		/* Second return value is in r4 */
-		kvmppc_set_gpr(vcpu, 4, r2);
 		break;
 	default:
 		r = HC_EV_UNIMPLEMENTED;
 		break;
 	}
 
+	kvmppc_set_gpr(vcpu, 4, r2);
+
 	return r;
 }
 
-- 
1.6.0.2


* [PATCH 14/26] KVM: PPC: Move BAT handling code into spr handler
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

The current approach duplicates the SPR-to-BAT lookup logic and makes it
harder to reuse the variables it actually needs. So let's move everything
down into the SPR handlers.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_emulate.c |   48 ++++++++++++------------------------
 1 files changed, 16 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index f333cb4..4668465 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -264,7 +264,7 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
 	}
 }
 
-static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+static struct kvmppc_bat *kvmppc_find_bat(struct kvm_vcpu *vcpu, int sprn)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
 	struct kvmppc_bat *bat;
@@ -286,35 +286,7 @@ static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
 		BUG();
 	}
 
-	if (sprn % 2)
-		return bat->raw >> 32;
-	else
-		return bat->raw;
-}
-
-static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
+	return bat;
 }
 
 int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
@@ -339,12 +311,16 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 	case SPRN_IBAT4U ... SPRN_IBAT7L:
 	case SPRN_DBAT0U ... SPRN_DBAT3L:
 	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
+	{
+		struct kvmppc_bat *bat = kvmppc_find_bat(vcpu, sprn);
+
+		kvmppc_set_bat(vcpu, bat, !(sprn % 2), (u32)spr_val);
 		/* BAT writes happen so rarely that we're ok to flush
 		 * everything here */
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
 		kvmppc_mmu_flush_segments(vcpu);
 		break;
+	}
 	case SPRN_HID0:
 		to_book3s(vcpu)->hid[0] = spr_val;
 		break;
@@ -434,8 +410,16 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	case SPRN_IBAT4U ... SPRN_IBAT7L:
 	case SPRN_DBAT0U ... SPRN_DBAT3L:
 	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+	{
+		struct kvmppc_bat *bat = kvmppc_find_bat(vcpu, sprn);
+
+		if (sprn % 2)
+			kvmppc_set_gpr(vcpu, rt, bat->raw >> 32);
+		else
+			kvmppc_set_gpr(vcpu, rt, bat->raw);
+
 		break;
+	}
 	case SPRN_SDR1:
 		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
 		break;
-- 
1.6.0.2



* [PATCH 15/26] KVM: PPC: Interpret SR registers on demand
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

Right now we're examining the contents of Book3s_32's segment registers when
the register is written and putting the interpreted contents into a struct.

There are two reasons this is bad. For starters, the struct has worse runtime
performance, as it occupies more RAM. But more importantly, when segment
registers are interpreted from their raw values on demand, we can put them in
the shared page, allowing guests to mess with them directly.

This patch makes the internal representation of SRs plain u32s.
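
For reference, the raw bit layout the new helpers decode is the usual 32-bit
segment register format for T=0 segments (a set T bit is simply treated as
"invalid" here):

    /*
     *  0x80000000   T     direct-store segment (treated as !valid)
     *  0x40000000   Ks    supervisor-state protection key
     *  0x20000000   Kp    problem-state protection key
     *  0x10000000   N     no-execute
     *  0x0fffffff   VSID
     */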

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |   11 +----
 arch/powerpc/kvm/book3s.c             |    4 +-
 arch/powerpc/kvm/book3s_32_mmu.c      |   79 ++++++++++++++++++---------------
 3 files changed, 46 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index f04f516..0884652 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -38,15 +38,6 @@ struct kvmppc_slb {
 	bool class	: 1;
 };
 
-struct kvmppc_sr {
-	u32 raw;
-	u32 vsid;
-	bool Ks		: 1;
-	bool Kp		: 1;
-	bool nx		: 1;
-	bool valid	: 1;
-};
-
 struct kvmppc_bat {
 	u64 raw;
 	u32 bepi;
@@ -79,7 +70,7 @@ struct kvmppc_vcpu_book3s {
 		u64 vsid;
 	} slb_shadow[64];
 	u8 slb_shadow_max;
-	struct kvmppc_sr sr[16];
+	u32 sr[16];
 	struct kvmppc_bat ibat[8];
 	struct kvmppc_bat dbat[8];
 	u64 hid[6];
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 3e017da..082ec62 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1160,8 +1160,8 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		for (i = 0; i < 16; i++) {
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i].raw;
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i].raw;
+			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
+			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
 		}
 		for (i = 0; i < 8; i++) {
 			sregs->u.s.ppc32.ibat[i] = vcpu3s->ibat[i].raw;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 5bf4bf8..d4ff76f 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -58,14 +58,39 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 #endif
 }
 
+static inline u32 sr_vsid(u32 sr_raw)
+{
+	return sr_raw & 0x0fffffff;
+}
+
+static inline bool sr_valid(u32 sr_raw)
+{
+	return (sr_raw & 0x80000000) ? false : true;
+}
+
+static inline bool sr_ks(u32 sr_raw)
+{
+	return (sr_raw & 0x40000000) ? true: false;
+}
+
+static inline bool sr_kp(u32 sr_raw)
+{
+	return (sr_raw & 0x20000000) ? true: false;
+}
+
+static inline bool sr_nx(u32 sr_raw)
+{
+	return (sr_raw & 0x10000000) ? true: false;
+}
+
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid);
 
-static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
+static u32 find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
 {
-	return &vcpu_book3s->sr[(eaddr >> 28) & 0xf];
+	return vcpu_book3s->sr[(eaddr >> 28) & 0xf];
 }
 
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
@@ -87,7 +112,7 @@ static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
 }
 
 static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3s,
-				      struct kvmppc_sr *sre, gva_t eaddr,
+				      u32 sre, gva_t eaddr,
 				      bool primary)
 {
 	u32 page, hash, pteg, htabmask;
@@ -96,7 +121,7 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 	page = (eaddr & 0x0FFFFFFF) >> 12;
 	htabmask = ((vcpu_book3s->sdr1 & 0x1FF) << 16) | 0xFFC0;
 
-	hash = ((sre->vsid ^ page) << 6);
+	hash = ((sr_vsid(sre) ^ page) << 6);
 	if (!primary)
 		hash = ~hash;
 	hash &= htabmask;
@@ -105,7 +130,7 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 
 	dprintk("MMU: pc=0x%lx eaddr=0x%lx sdr1=0x%llx pteg=0x%x vsid=0x%x\n",
 		kvmppc_get_pc(&vcpu_book3s->vcpu), eaddr, vcpu_book3s->sdr1, pteg,
-		sre->vsid);
+		sr_vsid(sre));
 
 	r = gfn_to_hva(vcpu_book3s->vcpu.kvm, pteg >> PAGE_SHIFT);
 	if (kvm_is_error_hva(r))
@@ -113,10 +138,9 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 	return r | (pteg & ~PAGE_MASK);
 }
 
-static u32 kvmppc_mmu_book3s_32_get_ptem(struct kvmppc_sr *sre, gva_t eaddr,
-				    bool primary)
+static u32 kvmppc_mmu_book3s_32_get_ptem(u32 sre, gva_t eaddr, bool primary)
 {
-	return ((eaddr & 0x0fffffff) >> 22) | (sre->vsid << 7) |
+	return ((eaddr & 0x0fffffff) >> 22) | (sr_vsid(sre) << 7) |
 	       (primary ? 0 : 0x40) | 0x80000000;
 }
 
@@ -180,7 +204,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 				     bool primary)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_sr *sre;
+	u32 sre;
 	hva_t ptegp;
 	u32 pteg[16];
 	u32 ptem = 0;
@@ -190,7 +214,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 	sre = find_sr(vcpu_book3s, eaddr);
 
 	dprintk_pte("SR 0x%lx: vsid=0x%x, raw=0x%x\n", eaddr >> 28,
-		    sre->vsid, sre->raw);
+		    sr_vsid(sre), sre);
 
 	pte->vpage = kvmppc_mmu_book3s_32_ea_to_vp(vcpu, eaddr, data);
 
@@ -214,8 +238,8 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 			pte->raddr = (pteg[i+1] & ~(0xFFFULL)) | (eaddr & 0xFFF);
 			pp = pteg[i+1] & 3;
 
-			if ((sre->Kp &&  (vcpu->arch.shared->msr & MSR_PR)) ||
-			    (sre->Ks && !(vcpu->arch.shared->msr & MSR_PR)))
+			if ((sr_kp(sre) &&  (vcpu->arch.shared->msr & MSR_PR)) ||
+			    (sr_ks(sre) && !(vcpu->arch.shared->msr & MSR_PR)))
 				pp |= 4;
 
 			pte->may_write = false;
@@ -311,30 +335,13 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 
 static u32 kvmppc_mmu_book3s_32_mfsrin(struct kvm_vcpu *vcpu, u32 srnum)
 {
-	return to_book3s(vcpu)->sr[srnum].raw;
+	return to_book3s(vcpu)->sr[srnum];
 }
 
 static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 					ulong value)
 {
-	struct kvmppc_sr *sre;
-
-	sre = &to_book3s(vcpu)->sr[srnum];
-
-	/* Flush any left-over shadows from the previous SR */
-
-	/* XXX Not necessary? */
-	/* kvmppc_mmu_pte_flush(vcpu, ((u64)sre->vsid) << 28, 0xf0000000ULL); */
-
-	/* And then put in the new SR */
-	sre->raw = value;
-	sre->vsid = (value & 0x0fffffff);
-	sre->valid = (value & 0x80000000) ? false : true;
-	sre->Ks = (value & 0x40000000) ? true : false;
-	sre->Kp = (value & 0x20000000) ? true : false;
-	sre->nx = (value & 0x10000000) ? true : false;
-
-	/* Map the new segment */
+	to_book3s(vcpu)->sr[srnum] = value;
 	kvmppc_mmu_map_segment(vcpu, srnum << SID_SHIFT);
 }
 
@@ -347,13 +354,13 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
 	ulong ea = esid << SID_SHIFT;
-	struct kvmppc_sr *sr;
+	u32 sr;
 	u64 gvsid = esid;
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
 		sr = find_sr(to_book3s(vcpu), ea);
-		if (sr->valid)
-			gvsid = sr->vsid;
+		if (sr_valid(sr))
+			gvsid = sr_vsid(sr);
 	}
 
 	/* In case we only have one of MSR_IR or MSR_DR set, let's put
@@ -370,8 +377,8 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 		*vsid = VSID_REAL_DR | gvsid;
 		break;
 	case MSR_DR|MSR_IR:
-		if (sr->valid)
-			*vsid = sr->vsid;
+		if (sr_valid(sr))
+			*vsid = sr_vsid(sr);
 		else
 			*vsid = VSID_BAT | gvsid;
 		break;
-- 
1.6.0.2


* [PATCH 16/26] KVM: PPC: Put segment registers in shared page
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

Now that mtsr itself no longer interprets anything, we can move the SR
contents over to the shared page, so a guest can directly read and write
its SR contents from guest context.
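
From the guest's point of view, reading an SR then becomes a plain load from
the magic page; a hypothetical sketch, assuming the magic page mapping from
the earlier patches:

    struct kvm_vcpu_arch_shared *shared =
            (struct kvm_vcpu_arch_shared *)KVM_MAGIC_PAGE;

    /* Read SR 3 without trapping into the host. Writes still need the
     * host to remap the segment; the mtsrin PV code in a later patch
     * takes care of that. */
    u32 sr3 = shared->sr[3];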

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 -
 arch/powerpc/include/asm/kvm_para.h   |    1 +
 arch/powerpc/kvm/book3s.c             |    7 +++----
 arch/powerpc/kvm/book3s_32_mmu.c      |   12 ++++++------
 arch/powerpc/kvm/powerpc.c            |    2 +-
 5 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0884652..be8aac2 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -70,7 +70,6 @@ struct kvmppc_vcpu_book3s {
 		u64 vsid;
 	} slb_shadow[64];
 	u8 slb_shadow_max;
-	u32 sr[16];
 	struct kvmppc_bat ibat[8];
 	struct kvmppc_bat dbat[8];
 	u64 hid[6];
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 43c1b22..d79fd09 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -38,6 +38,7 @@ struct kvm_vcpu_arch_shared {
 	__u64 msr;
 	__u32 dsisr;
 	__u32 int_pending;	/* Tells the guest if we have an interrupt */
+	__u32 sr[16];
 };
 
 #define KVM_SC_MAGIC_R0		0x4b564d21 /* "KVM!" */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 082ec62..5fbe949 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1159,10 +1159,9 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
 			sregs->u.s.ppc64.slb[i].slbv = vcpu3s->slb[i].origv;
 		}
 	} else {
-		for (i = 0; i < 16; i++) {
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
-		}
+		for (i = 0; i < 16; i++)
+			sregs->u.s.ppc32.sr[i] = vcpu->arch.shared->sr[i];
+
 		for (i = 0; i < 8; i++) {
 			sregs->u.s.ppc32.ibat[i] = vcpu3s->ibat[i].raw;
 			sregs->u.s.ppc32.dbat[i] = vcpu3s->dbat[i].raw;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index d4ff76f..c8cefdd 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -88,9 +88,9 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid);
 
-static u32 find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
+static u32 find_sr(struct kvm_vcpu *vcpu, gva_t eaddr)
 {
-	return vcpu_book3s->sr[(eaddr >> 28) & 0xf];
+	return vcpu->arch.shared->sr[(eaddr >> 28) & 0xf];
 }
 
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
@@ -211,7 +211,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 	int i;
 	int found = 0;
 
-	sre = find_sr(vcpu_book3s, eaddr);
+	sre = find_sr(vcpu, eaddr);
 
 	dprintk_pte("SR 0x%lx: vsid=0x%x, raw=0x%x\n", eaddr >> 28,
 		    sr_vsid(sre), sre);
@@ -335,13 +335,13 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 
 static u32 kvmppc_mmu_book3s_32_mfsrin(struct kvm_vcpu *vcpu, u32 srnum)
 {
-	return to_book3s(vcpu)->sr[srnum];
+	return vcpu->arch.shared->sr[srnum];
 }
 
 static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 					ulong value)
 {
-	to_book3s(vcpu)->sr[srnum] = value;
+	vcpu->arch.shared->sr[srnum] = value;
 	kvmppc_mmu_map_segment(vcpu, srnum << SID_SHIFT);
 }
 
@@ -358,7 +358,7 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 	u64 gvsid = esid;
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
-		sr = find_sr(to_book3s(vcpu), ea);
+		sr = find_sr(vcpu, ea);
 		if (sr_valid(sr))
 			gvsid = sr_vsid(sr);
 	}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 496d7a5..028891c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -66,7 +66,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 		vcpu->arch.magic_page_pa = param1;
 		vcpu->arch.magic_page_ea = param2;
 
-		r2 = 0;
+		r2 = KVM_MAGIC_FEAT_SR;
 
 		r = HC_EV_SUCCESS;
 		break;
-- 
1.6.0.2


* [PATCH 17/26] KVM: PPC: Add mtsrin PV code
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (4 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 14/26] KVM: PPC: Move BAT handling code into spr handler Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  2010-08-17 13:57 ` [PATCH 19/26] KVM: PPC: Update int_pending also on dequeue Alexander Graf
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

This is the guest side of the mtsr acceleration. Using this, a guest can
now execute mtsrin with almost no overhead, as long as it ensures that it
only uses it with (MSR_IR|MSR_DR) == 0. Linux does that, so we're good.
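
In C terms, the patched fast path boils down to something like the
following sketch; the real code is the assembly chunk in kvm_emul.S
below:

	/* Sketch: with (MSR_IR|MSR_DR) == 0, mtsrin rS,rB becomes a
	 * plain store into the shared SR array, indexed by the top
	 * four bits of rB -- the rlwinm/stw pair in kvm_emulate_mtsrin. */
	static void mtsrin_fast(u32 value, u32 rb)
	{
		u32 *sr = (u32 *)(KVM_MAGIC_PAGE + KVM_MAGIC_SR);

		sr[rb >> 28] = value;
	}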

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/asm-offsets.c |    1 +
 arch/powerpc/kernel/kvm.c         |   59 ++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/kvm_emul.S    |   50 ++++++++++++++++++++++++++++++
 3 files changed, 110 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index e3e740b..5e54d0f 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -478,6 +478,7 @@ int main(void)
 	DEFINE(KVM_MAGIC_MSR, offsetof(struct kvm_vcpu_arch_shared, msr));
 	DEFINE(KVM_MAGIC_CRITICAL, offsetof(struct kvm_vcpu_arch_shared,
 					    critical));
+	DEFINE(KVM_MAGIC_SR, offsetof(struct kvm_vcpu_arch_shared, sr));
 #endif
 
 #ifdef CONFIG_44x
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index f48144f..43ec78a 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -43,6 +43,7 @@
 #define KVM_INST_B_MAX		0x01ffffff
 
 #define KVM_MASK_RT		0x03e00000
+#define KVM_MASK_RB		0x0000f800
 #define KVM_INST_MFMSR		0x7c0000a6
 #define KVM_INST_MFSPR_SPRG0	0x7c1042a6
 #define KVM_INST_MFSPR_SPRG1	0x7c1142a6
@@ -70,6 +71,8 @@
 #define KVM_INST_WRTEEI_0	0x7c000146
 #define KVM_INST_WRTEEI_1	0x7c008146
 
+#define KVM_INST_MTSRIN		0x7c0001e4
+
 static bool kvm_patching_worked = true;
 static char kvm_tmp[1024 * 1024];
 static int kvm_tmp_index;
@@ -265,6 +268,51 @@ static void kvm_patch_ins_wrteei(u32 *inst)
 
 #endif
 
+#ifdef CONFIG_PPC_BOOK3S_32
+
+extern u32 kvm_emulate_mtsrin_branch_offs;
+extern u32 kvm_emulate_mtsrin_reg1_offs;
+extern u32 kvm_emulate_mtsrin_reg2_offs;
+extern u32 kvm_emulate_mtsrin_orig_ins_offs;
+extern u32 kvm_emulate_mtsrin_len;
+extern u32 kvm_emulate_mtsrin[];
+
+static void kvm_patch_ins_mtsrin(u32 *inst, u32 rt, u32 rb)
+{
+	u32 *p;
+	int distance_start;
+	int distance_end;
+	ulong next_inst;
+
+	p = kvm_alloc(kvm_emulate_mtsrin_len * 4);
+	if (!p)
+		return;
+
+	/* Find out where we are and put everything there */
+	distance_start = (ulong)p - (ulong)inst;
+	next_inst = ((ulong)inst + 4);
+	distance_end = next_inst - (ulong)&p[kvm_emulate_mtsrin_branch_offs];
+
+	/* Make sure we only write valid b instructions */
+	if (distance_start > KVM_INST_B_MAX) {
+		kvm_patching_worked = false;
+		return;
+	}
+
+	/* Modify the chunk to fit the invocation */
+	memcpy(p, kvm_emulate_mtsrin, kvm_emulate_mtsrin_len * 4);
+	p[kvm_emulate_mtsrin_branch_offs] |= distance_end & KVM_INST_B_MASK;
+	p[kvm_emulate_mtsrin_reg1_offs] |= (rb << 10);
+	p[kvm_emulate_mtsrin_reg2_offs] |= rt;
+	p[kvm_emulate_mtsrin_orig_ins_offs] = *inst;
+	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtsrin_len * 4);
+
+	/* Patch the invocation */
+	kvm_patch_ins_b(inst, distance_start);
+}
+
+#endif
+
 static void kvm_map_magic_page(void *data)
 {
 	u32 *features = data;
@@ -361,6 +409,17 @@ static void kvm_check_ins(u32 *inst, u32 features)
 		break;
 	}
 
+	switch (inst_no_rt & ~KVM_MASK_RB) {
+#ifdef CONFIG_PPC_BOOK3S_32
+	case KVM_INST_MTSRIN:
+		if (features & KVM_MAGIC_FEAT_SR) {
+			u32 inst_rb = _inst & KVM_MASK_RB;
+			kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb);
+		}
+		break;
+#endif
+	}
+
 	switch (_inst) {
 #ifdef CONFIG_BOOKE
 	case KVM_INST_WRTEEI_0:
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index 3199f65..a6e97e7 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -245,3 +245,53 @@ kvm_emulate_wrteei_ee_offs:
 .global kvm_emulate_wrteei_len
 kvm_emulate_wrteei_len:
 	.long (kvm_emulate_wrteei_end - kvm_emulate_wrteei) / 4
+
+
+.global kvm_emulate_mtsrin
+kvm_emulate_mtsrin:
+
+	SCRATCH_SAVE
+
+	LL64(r31, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
+	andi.	r31, r31, MSR_DR | MSR_IR
+	beq	kvm_emulate_mtsrin_reg1
+
+	SCRATCH_RESTORE
+
+kvm_emulate_mtsrin_orig_ins:
+	nop
+	b	kvm_emulate_mtsrin_branch
+
+kvm_emulate_mtsrin_reg1:
+	/* r30 = (rX >> 28) << 2: SR number scaled to a byte offset into sr[] */
+	rlwinm  r30,r0,6,26,29
+
+kvm_emulate_mtsrin_reg2:
+	stw	r0, (KVM_MAGIC_PAGE + KVM_MAGIC_SR)(r30)
+
+	SCRATCH_RESTORE
+
+	/* Go back to caller */
+kvm_emulate_mtsrin_branch:
+	b	.
+kvm_emulate_mtsrin_end:
+
+.global kvm_emulate_mtsrin_branch_offs
+kvm_emulate_mtsrin_branch_offs:
+	.long (kvm_emulate_mtsrin_branch - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_reg1_offs
+kvm_emulate_mtsrin_reg1_offs:
+	.long (kvm_emulate_mtsrin_reg1 - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_reg2_offs
+kvm_emulate_mtsrin_reg2_offs:
+	.long (kvm_emulate_mtsrin_reg2 - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_orig_ins_offs
+kvm_emulate_mtsrin_orig_ins_offs:
+	.long (kvm_emulate_mtsrin_orig_ins - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_len
+kvm_emulate_mtsrin_len:
+	.long (kvm_emulate_mtsrin_end - kvm_emulate_mtsrin) / 4
-- 
1.6.0.2



* [PATCH 18/26] KVM: PPC: Make PV mtmsr work with r30 and r31
       [not found] ` <1282053481-18787-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (11 preceding siblings ...)
  2010-08-17 13:57   ` [PATCH 16/26] KVM: PPC: Put segment registers in shared page Alexander Graf
@ 2010-08-17 13:57   ` Alexander Graf
  2010-08-17 13:57   ` [PATCH 24/26] KVM: PPC: initialize IVORs in addition to IVPR Alexander Graf
                     ` (3 subsequent siblings)
  16 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: linuxppc-dev, KVM list

So far we've been restricting ourselves to r0-r29 as registers an mtmsr
instruction could use. This was bad, as there are some code paths in
Linux actually using r30.

So let's instead handle all registers gracefully and get rid of that
stupid limitation.
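
For reference, the get_rt() helper relied on below extracts the register
number from an instruction's RT/RS field; a minimal sketch:

	/* Sketch: RT/RS lives in bits 6-10 of a PPC instruction word
	 * (IBM bit numbering), i.e. bits 21-25 counting from the LSB. */
	static u32 get_rt(u32 inst)
	{
		return (inst >> 21) & 0x1f;
	}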

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kernel/kvm.c      |   39 ++++++++++++++++++++++++++++++++-------
 arch/powerpc/kernel/kvm_emul.S |   17 ++++++++---------
 2 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 43ec78a..517967d 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -43,6 +43,7 @@
 #define KVM_INST_B_MAX		0x01ffffff
 
 #define KVM_MASK_RT		0x03e00000
+#define KVM_RT_30		0x03c00000
 #define KVM_MASK_RB		0x0000f800
 #define KVM_INST_MFMSR		0x7c0000a6
 #define KVM_INST_MFSPR_SPRG0	0x7c1042a6
@@ -83,6 +84,15 @@ static inline void kvm_patch_ins(u32 *inst, u32 new_inst)
 	flush_icache_range((ulong)inst, (ulong)inst + 4);
 }
 
+static void kvm_patch_ins_ll(u32 *inst, long addr, u32 rt)
+{
+#ifdef CONFIG_64BIT
+	kvm_patch_ins(inst, KVM_INST_LD | rt | (addr & 0x0000fffc));
+#else
+	kvm_patch_ins(inst, KVM_INST_LWZ | rt | (addr & 0x0000fffc));
+#endif
+}
+
 static void kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
 {
 #ifdef CONFIG_64BIT
@@ -187,7 +197,6 @@ static void kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
 extern u32 kvm_emulate_mtmsr_branch_offs;
 extern u32 kvm_emulate_mtmsr_reg1_offs;
 extern u32 kvm_emulate_mtmsr_reg2_offs;
-extern u32 kvm_emulate_mtmsr_reg3_offs;
 extern u32 kvm_emulate_mtmsr_orig_ins_offs;
 extern u32 kvm_emulate_mtmsr_len;
 extern u32 kvm_emulate_mtmsr[];
@@ -217,9 +226,27 @@ static void kvm_patch_ins_mtmsr(u32 *inst, u32 rt)
 	/* Modify the chunk to fit the invocation */
 	memcpy(p, kvm_emulate_mtmsr, kvm_emulate_mtmsr_len * 4);
 	p[kvm_emulate_mtmsr_branch_offs] |= distance_end & KVM_INST_B_MASK;
-	p[kvm_emulate_mtmsr_reg1_offs] |= rt;
-	p[kvm_emulate_mtmsr_reg2_offs] |= rt;
-	p[kvm_emulate_mtmsr_reg3_offs] |= rt;
+
+	/* Make clobbered registers work too */
+	switch (get_rt(rt)) {
+	case 30:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg1_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg2_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		break;
+	case 31:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg1_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg2_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		break;
+	default:
+		p[kvm_emulate_mtmsr_reg1_offs] |= rt;
+		p[kvm_emulate_mtmsr_reg2_offs] |= rt;
+		break;
+	}
+
 	p[kvm_emulate_mtmsr_orig_ins_offs] = *inst;
 	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtmsr_len * 4);
 
@@ -403,9 +430,7 @@ static void kvm_check_ins(u32 *inst, u32 features)
 		break;
 	case KVM_INST_MTMSR:
 	case KVM_INST_MTMSRD_L0:
-		/* We use r30 and r31 during the hook */
-		if (get_rt(inst_rt) < 30)
-			kvm_patch_ins_mtmsr(inst, inst_rt);
+		kvm_patch_ins_mtmsr(inst, inst_rt);
 		break;
 	}
 
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index a6e97e7..6530532 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -135,7 +135,8 @@ kvm_emulate_mtmsr:
 
 	/* Find the changed bits between old and new MSR */
 kvm_emulate_mtmsr_reg1:
-	xor	r31, r0, r31
+	ori	r30, r0, 0
+	xor	r31, r30, r31
 
 	/* Check if we need to really do mtmsr */
 	LOAD_REG_IMMEDIATE(r30, MSR_CRITICAL_BITS)
@@ -156,14 +157,17 @@ kvm_emulate_mtmsr_orig_ins:
 
 maybe_stay_in_guest:
 
+	/* Get the target register in r30 */
+kvm_emulate_mtmsr_reg2:
+	ori	r30, r0, 0
+
 	/* Check if we have to fetch an interrupt */
 	lwz	r31, (KVM_MAGIC_PAGE + KVM_MAGIC_INT)(0)
 	cmpwi	r31, 0
 	beq+	no_mtmsr
 
 	/* Check if we may trigger an interrupt */
-kvm_emulate_mtmsr_reg2:
-	andi.	r31, r0, MSR_EE
+	andi.	r31, r30, MSR_EE
 	beq	no_mtmsr
 
 	b	do_mtmsr
@@ -171,8 +175,7 @@ kvm_emulate_mtmsr_reg2:
 no_mtmsr:
 
 	/* Put MSR into magic page because we don't call mtmsr */
-kvm_emulate_mtmsr_reg3:
-	STL64(r0, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
+	STL64(r30, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
 
 	SCRATCH_RESTORE
 
@@ -193,10 +196,6 @@ kvm_emulate_mtmsr_reg1_offs:
 kvm_emulate_mtmsr_reg2_offs:
 	.long (kvm_emulate_mtmsr_reg2 - kvm_emulate_mtmsr) / 4
 
-.global kvm_emulate_mtmsr_reg3_offs
-kvm_emulate_mtmsr_reg3_offs:
-	.long (kvm_emulate_mtmsr_reg3 - kvm_emulate_mtmsr) / 4
-
 .global kvm_emulate_mtmsr_orig_ins_offs
 kvm_emulate_mtmsr_orig_ins_offs:
 	.long (kvm_emulate_mtmsr_orig_ins - kvm_emulate_mtmsr) / 4
-- 
1.6.0.2


* [PATCH 19/26] KVM: PPC: Update int_pending also on dequeue
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (5 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 17/26] KVM: PPC: Add mtsrin PV code Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  2010-08-17 13:57 ` [PATCH 20/26] KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31 Alexander Graf
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

When a decrementer interrupt is pending, the dequeuing happens manually
through an mtdec instruction. This instruction simply calls dequeue on that
interrupt, so the int_pending hint doesn't get updated.

This patch updates the int_pending hint on dequeue as well, thus
correctly enabling guests to stay in guest context more often.
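
The invariant this restores can be summed up in a hypothetical helper
(a sketch, not literal kernel code):

	/* Sketch: keep the guest-visible hint in sync with the
	 * pending-exception bitmap on both queue and dequeue. */
	static void update_int_pending(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.shared->int_pending =
			!!vcpu->arch.pending_exceptions;
	}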

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 5fbe949..8138d31 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -201,6 +201,9 @@ static void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu,
 {
 	clear_bit(kvmppc_book3s_vec2irqprio(vec),
 		  &vcpu->arch.pending_exceptions);
+
+	if (!vcpu->arch.pending_exceptions)
+		vcpu->arch.shared->int_pending = 0;
 }
 
 void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec)
-- 
1.6.0.2



* [PATCH 20/26] KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (6 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 19/26] KVM: PPC: Update int_pending also on dequeue Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  2010-08-17 13:57 ` [PATCH 21/26] KVM: PPC: Force enable nap on KVM Alexander Graf
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

We had an arbitrary limitation in mtmsrd L=1 that kept us from using r30 and
r31 as input registers. Let's get rid of that and get more potential speedups!

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/kvm.c      |   21 +++++++++++++++++----
 arch/powerpc/kernel/kvm_emul.S |    8 +++++++-
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 517967d..517da39 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -159,6 +159,7 @@ static u32 *kvm_alloc(int len)
 
 extern u32 kvm_emulate_mtmsrd_branch_offs;
 extern u32 kvm_emulate_mtmsrd_reg_offs;
+extern u32 kvm_emulate_mtmsrd_orig_ins_offs;
 extern u32 kvm_emulate_mtmsrd_len;
 extern u32 kvm_emulate_mtmsrd[];
 
@@ -187,7 +188,21 @@ static void kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
 	/* Modify the chunk to fit the invocation */
 	memcpy(p, kvm_emulate_mtmsrd, kvm_emulate_mtmsrd_len * 4);
 	p[kvm_emulate_mtmsrd_branch_offs] |= distance_end & KVM_INST_B_MASK;
-	p[kvm_emulate_mtmsrd_reg_offs] |= rt;
+	switch (get_rt(rt)) {
+	case 30:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsrd_reg_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		break;
+	case 31:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsrd_reg_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		break;
+	default:
+		p[kvm_emulate_mtmsrd_reg_offs] |= rt;
+		break;
+	}
+
+	p[kvm_emulate_mtmsrd_orig_ins_offs] = *inst;
 	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtmsrd_len * 4);
 
 	/* Patch the invocation */
@@ -424,9 +439,7 @@ static void kvm_check_ins(u32 *inst, u32 features)
 
 	/* Rewrites */
 	case KVM_INST_MTMSRD_L1:
-		/* We use r30 and r31 during the hook */
-		if (get_rt(inst_rt) < 30)
-			kvm_patch_ins_mtmsrd(inst, inst_rt);
+		kvm_patch_ins_mtmsrd(inst, inst_rt);
 		break;
 	case KVM_INST_MTMSR:
 	case KVM_INST_MTMSRD_L0:
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index 6530532..f2b1b25 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -78,7 +78,8 @@ kvm_emulate_mtmsrd:
 
 	/* OR the register's (MSR_EE|MSR_RI) on MSR */
 kvm_emulate_mtmsrd_reg:
-	andi.	r30, r0, (MSR_EE|MSR_RI)
+	ori	r30, r0, 0
+	andi.	r30, r30, (MSR_EE|MSR_RI)
 	or	r31, r31, r30
 
 	/* Put MSR back into magic page */
@@ -96,6 +97,7 @@ kvm_emulate_mtmsrd_reg:
 	SCRATCH_RESTORE
 
 	/* Nag hypervisor */
+kvm_emulate_mtmsrd_orig_ins:
 	tlbsync
 
 	b	kvm_emulate_mtmsrd_branch
@@ -117,6 +119,10 @@ kvm_emulate_mtmsrd_branch_offs:
 kvm_emulate_mtmsrd_reg_offs:
 	.long (kvm_emulate_mtmsrd_reg - kvm_emulate_mtmsrd) / 4
 
+.global kvm_emulate_mtmsrd_orig_ins_offs
+kvm_emulate_mtmsrd_orig_ins_offs:
+	.long (kvm_emulate_mtmsrd_orig_ins - kvm_emulate_mtmsrd) / 4
+
 .global kvm_emulate_mtmsrd_len
 kvm_emulate_mtmsrd_len:
 	.long (kvm_emulate_mtmsrd_end - kvm_emulate_mtmsrd) / 4
-- 
1.6.0.2



* [PATCH 21/26] KVM: PPC: Force enable nap on KVM
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (7 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 20/26] KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31 Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  2010-08-17 18:28   ` Scott Wood
  2010-08-17 13:57 ` [PATCH 22/26] KVM: PPC: Implement correct SID mapping on Book3s_32 Alexander Graf
  2010-08-17 13:57 ` [PATCH 23/26] KVM: PPC: Don't put MSR_POW in MSR Alexander Graf
  10 siblings, 1 reply; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

There are some heuristics in the PPC power management code that try to find
out if the particular hardware we're running on supports proper power management
or just hangs the machine when going into nap mode.

Since we know that KVM is safe with nap, let's force-enable it in the PV
code once we're certain that we are on a KVM VM.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/kvm.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 517da39..95aed6b 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -583,6 +583,9 @@ static int __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
 		kvm_use_magic_page();
 
+	/* Enable napping */
+	powersave_nap = 1;
+
 free_tmp:
 	kvm_free_tmp();
 
-- 
1.6.0.2



* [PATCH 22/26] KVM: PPC: Implement correct SID mapping on Book3s_32
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (8 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 21/26] KVM: PPC: Force enable nap on KVM Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  2010-08-17 13:57 ` [PATCH 23/26] KVM: PPC: Don't put MSR_POW in MSR Alexander Graf
  10 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

Up until now we were doing segment mappings wrong on Book3s_32. For Book3s_64
we were using a trick where we know that a single mmu_context gives us 16 bits
of context ids.

The mm system on Book3s_32 instead uses a clever algorithm to distribute VSIDs
across the available range, so a context id really only gives us 16 available
VSIDs.

To keep at least a few guest processes in the SID shadow, let's map a number of
contexts that we can use as a VSID pool. This makes the code actually correct
and shouldn't hurt performance too much.
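
As a worked example of the pool this builds, using the CTX_TO_VSID()
formula from the patch below:

	#define CTX_TO_VSID(c, id) ((((c) * (897 * 16)) + ((id) * 0x111)) & 0xffffff)

	/* e.g. for context id 5: CTX_TO_VSID(5, 0) = 0x11850,
	 * CTX_TO_VSID(5, 1) = 0x11961, CTX_TO_VSID(5, 3) = 0x11b83 --
	 * 16 VSIDs per context, spaced 0x111 apart, with 128 contexts
	 * feeding the pool. */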

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++-
 arch/powerpc/kvm/book3s_32_mmu_host.c |   57 ++++++++++++++++++---------------
 arch/powerpc/kvm/book3s_64_mmu_host.c |    8 ++--
 3 files changed, 48 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index be8aac2..d62e703 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -60,6 +60,13 @@ struct kvmppc_sid_map {
 #define SID_MAP_NUM     (1 << SID_MAP_BITS)
 #define SID_MAP_MASK    (SID_MAP_NUM - 1)
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define SID_CONTEXTS	1
+#else
+#define SID_CONTEXTS	128
+#define VSID_POOL_SIZE	(SID_CONTEXTS * 16)
+#endif
+
 struct kvmppc_vcpu_book3s {
 	struct kvm_vcpu vcpu;
 	struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
@@ -78,10 +85,14 @@ struct kvmppc_vcpu_book3s {
 	u64 sdr1;
 	u64 hior;
 	u64 msr_mask;
-	u64 vsid_first;
 	u64 vsid_next;
+#ifdef CONFIG_PPC_BOOK3S_32
+	u32 vsid_pool[VSID_POOL_SIZE];
+#else
+	u64 vsid_first;
 	u64 vsid_max;
-	int context_id;
+#endif
+	int context_id[SID_CONTEXTS];
 	ulong prog_flags; /* flags to inject when giving a 700 trap */
 };
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 57dddeb..9fecbfb 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -275,18 +275,15 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
 	backwards_map = !backwards_map;
 
 	/* Uh-oh ... out of mappings. Let's flush! */
-	if (vcpu_book3s->vsid_next >= vcpu_book3s->vsid_max) {
-		vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
+	if (vcpu_book3s->vsid_next >= VSID_POOL_SIZE) {
+		vcpu_book3s->vsid_next = 0;
 		memset(vcpu_book3s->sid_map, 0,
 		       sizeof(struct kvmppc_sid_map) * SID_MAP_NUM);
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
 		kvmppc_mmu_flush_segments(vcpu);
 	}
-	map->host_vsid = vcpu_book3s->vsid_next;
-
-	/* Would have to be 111 to be completely aligned with the rest of
-	   Linux, but that is just way too little space! */
-	vcpu_book3s->vsid_next+=1;
+	map->host_vsid = vcpu_book3s->vsid_pool[vcpu_book3s->vsid_next];
+	vcpu_book3s->vsid_next++;
 
 	map->guest_vsid = gvsid;
 	map->valid = true;
@@ -333,40 +330,38 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 {
+	int i;
+
 	kvmppc_mmu_hpte_destroy(vcpu);
 	preempt_disable();
-	__destroy_context(to_book3s(vcpu)->context_id);
+	for (i = 0; i < SID_CONTEXTS; i++)
+		__destroy_context(to_book3s(vcpu)->context_id[i]);
 	preempt_enable();
 }
 
 /* From mm/mmu_context_hash32.c */
-#define CTX_TO_VSID(ctx) (((ctx) * (897 * 16)) & 0xffffff)
+#define CTX_TO_VSID(c, id)	((((c) * (897 * 16)) + ((id) * 0x111)) & 0xffffff)
 
 int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
 	int err;
 	ulong sdr1;
+	int i;
+	int j;
 
-	err = __init_new_context();
-	if (err < 0)
-		return -1;
-	vcpu3s->context_id = err;
-
-	vcpu3s->vsid_max = CTX_TO_VSID(vcpu3s->context_id + 1) - 1;
-	vcpu3s->vsid_first = CTX_TO_VSID(vcpu3s->context_id);
-
-#if 0 /* XXX still doesn't guarantee uniqueness */
-	/* We could collide with the Linux vsid space because the vsid
-	 * wraps around at 24 bits. We're safe if we do our own space
-	 * though, so let's always set the highest bit. */
+	for (i = 0; i < SID_CONTEXTS; i++) {
+		err = __init_new_context();
+		if (err < 0)
+			goto init_fail;
+		vcpu3s->context_id[i] = err;
 
-	vcpu3s->vsid_max |= 0x00800000;
-	vcpu3s->vsid_first |= 0x00800000;
-#endif
-	BUG_ON(vcpu3s->vsid_max < vcpu3s->vsid_first);
+		/* Remember context id for this combination */
+		for (j = 0; j < 16; j++)
+			vcpu3s->vsid_pool[(i * 16) + j] = CTX_TO_VSID(err, j);
+	}
 
-	vcpu3s->vsid_next = vcpu3s->vsid_first;
+	vcpu3s->vsid_next = 0;
 
 	/* Remember where the HTAB is */
 	asm ( "mfsdr1 %0" : "=r"(sdr1) );
@@ -376,4 +371,14 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 	kvmppc_mmu_hpte_init(vcpu);
 
 	return 0;
+
+init_fail:
+	for (j = 0; j < i; j++) {
+		if (!vcpu3s->context_id[j])
+			continue;
+
+		__destroy_context(to_book3s(vcpu)->context_id[j]);
+	}
+
+	return -1;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 4040c8d..fa2f084 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -286,7 +286,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvmppc_mmu_hpte_destroy(vcpu);
-	__destroy_context(to_book3s(vcpu)->context_id);
+	__destroy_context(to_book3s(vcpu)->context_id[0]);
 }
 
 int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
@@ -297,10 +297,10 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 	err = __init_new_context();
 	if (err < 0)
 		return -1;
-	vcpu3s->context_id = err;
+	vcpu3s->context_id[0] = err;
 
-	vcpu3s->vsid_max = ((vcpu3s->context_id + 1) << USER_ESID_BITS) - 1;
-	vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
+	vcpu3s->vsid_max = ((vcpu3s->context_id[0] + 1) << USER_ESID_BITS) - 1;
+	vcpu3s->vsid_first = vcpu3s->context_id[0] << USER_ESID_BITS;
 	vcpu3s->vsid_next = vcpu3s->vsid_first;
 
 	kvmppc_mmu_hpte_init(vcpu);
-- 
1.6.0.2



* [PATCH 23/26] KVM: PPC: Don't put MSR_POW in MSR
  2010-08-17 13:57 [PATCH 00/26] KVM: PPC: Mid-August patch queue Alexander Graf
                   ` (9 preceding siblings ...)
  2010-08-17 13:57 ` [PATCH 22/26] KVM: PPC: Implement correct SID mapping on Book3s_32 Alexander Graf
@ 2010-08-17 13:57 ` Alexander Graf
  10 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc; +Cc: linuxppc-dev, KVM list

On Book3S an mtmsr with the MSR_POW bit set indicates that the OS is
idle and only needs to be woken up on the next interrupt.

Now, unfortunately we let that bit slip into the stored MSR value which
is not what the real CPU does, so that we ended up executing code like
this:

	r = mfmsr();
	/* r contains MSR_POW */
	mtmsr(r | MSR_EE);

This obviously breaks, as we're going into idle mode in code sections that
don't expect to be idling.

This patch masks MSR_POW out of the stored MSR value on wakeup, making
guests happy again.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    6 +++++-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 8138d31..35f9199 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -134,10 +134,14 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 	vcpu->arch.shared->msr = msr;
 	kvmppc_recalc_shadow_msr(vcpu);
 
-	if (msr & (MSR_WE|MSR_POW)) {
+	if (msr & MSR_POW) {
 		if (!vcpu->arch.pending_exceptions) {
 			kvm_vcpu_block(vcpu);
 			vcpu->stat.halt_wakeup++;
+
+			/* Unset POW bit after we woke up */
+			msr &= ~MSR_POW;
+			vcpu->arch.shared->msr = msr;
 		}
 	}
 
-- 
1.6.0.2



* [PATCH 24/26] KVM: PPC: initialize IVORs in addition to IVPR
       [not found] ` <1282053481-18787-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (12 preceding siblings ...)
  2010-08-17 13:57   ` [PATCH 18/26] KVM: PPC: Make PV mtmsr work with r30 and r31 Alexander Graf
@ 2010-08-17 13:57   ` Alexander Graf
  2010-08-17 13:58   ` [PATCH 25/26] KVM: PPC: fix compilation of "dump tlbs" debug function Alexander Graf
                     ` (2 subsequent siblings)
  16 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:57 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: linuxppc-dev, KVM list, Hollis Blanchard

From: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>

Developers can now tell at a glance the exact type of the premature interrupt,
instead of just knowing that there was some premature interrupt.
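
To spell out the arithmetic of the eye-catchers:

	/* vcpu->arch.ivor[i] = 0x7700 | i * 4, so ivor[0] = 0x7700,
	 * ivor[1] = 0x7704, ivor[5] = 0x7714, ... -- the vector offset
	 * directly encodes which interrupt priority fired. */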

Signed-off-by: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/booke.c |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index c604277..835f6d0 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -497,15 +497,19 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 /* Initial guest state: 16MB mapping 0 -> 0, PC = 0, MSR = 0, R1 = 16MB */
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
+	int i;
+
 	vcpu->arch.pc = 0;
 	vcpu->arch.shared->msr = 0;
 	kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
 
 	vcpu->arch.shadow_pid = 1;
 
-	/* Eye-catching number so we know if the guest takes an interrupt
-	 * before it's programmed its own IVPR. */
+	/* Eye-catching numbers so we know if the guest takes an interrupt
+	 * before it's programmed its own IVPR/IVORs. */
 	vcpu->arch.ivpr = 0x55550000;
+	for (i = 0; i < BOOKE_IRQPRIO_MAX; i++)
+		vcpu->arch.ivor[i] = 0x7700 | i * 4;
 
 	kvmppc_init_timing_stats(vcpu);
 
-- 
1.6.0.2


* [PATCH 25/26] KVM: PPC: fix compilation of "dump tlbs" debug function
       [not found] ` <1282053481-18787-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (13 preceding siblings ...)
  2010-08-17 13:57   ` [PATCH 24/26] KVM: PPC: initialize IVORs in addition to IVPR Alexander Graf
@ 2010-08-17 13:58   ` Alexander Graf
  2010-08-17 13:58   ` [PATCH 26/26] KVM: PPC: allow ppc440gp to pass the compatibility check Alexander Graf
  2010-08-22 16:46   ` [PATCH 00/26] KVM: PPC: Mid-August patch queue Avi Kivity
  16 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:58 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: linuxppc-dev, KVM list, Hollis Blanchard

From: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>

Missing local variable.
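
The function body (not visible in the context lines below) presumably
walks the per-vcpu TLB array; a hypothetical reconstruction, with the
field name being an assumption:

	/* Hypothetical sketch of the surrounding loop: */
	for (i = 0; i < ARRAY_SIZE(vcpu_44x->guest_tlb); i++) {
		tlbe = &vcpu_44x->guest_tlb[i];
		/* ... print the fields of *tlbe ... */
	}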

Signed-off-by: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/44x_tlb.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
index 9f71b8d..5f3cff8 100644
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -47,6 +47,7 @@
 #ifdef DEBUG
 void kvmppc_dump_tlbs(struct kvm_vcpu *vcpu)
 {
+	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
 	struct kvmppc_44x_tlbe *tlbe;
 	int i;
 
-- 
1.6.0.2


* [PATCH 26/26] KVM: PPC: allow ppc440gp to pass the compatibility check
       [not found] ` <1282053481-18787-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (14 preceding siblings ...)
  2010-08-17 13:58   ` [PATCH 25/26] KVM: PPC: fix compilation of "dump tlbs" debug function Alexander Graf
@ 2010-08-17 13:58   ` Alexander Graf
  2010-08-22 16:46   ` [PATCH 00/26] KVM: PPC: Mid-August patch queue Avi Kivity
  16 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 13:58 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: linuxppc-dev, KVM list, Hollis Blanchard

From: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>

Match only the first part of cur_cpu_spec->platform.

440GP (the first 440 processor) is identified by the string "ppc440gp", while
all later 440 processors use simply "ppc440".
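
To spell the check out:

	/* Old: strcmp("ppc440gp", "ppc440") != 0 -> 440GP was rejected.
	 * New: strncmp("ppc440gp", "ppc440", 6) == 0 -> 440GP passes,
	 * and plain "ppc440" platforms still match as before. */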

Signed-off-by: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/44x.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/44x.c b/arch/powerpc/kvm/44x.c
index e7b1f3f..74d0e74 100644
--- a/arch/powerpc/kvm/44x.c
+++ b/arch/powerpc/kvm/44x.c
@@ -43,7 +43,7 @@ int kvmppc_core_check_processor_compat(void)
 {
 	int r;
 
-	if (strcmp(cur_cpu_spec->platform, "ppc440") == 0)
+	if (strncmp(cur_cpu_spec->platform, "ppc440", 6) == 0)
 		r = 0;
 	else
 		r = -ENOTSUPP;
@@ -72,6 +72,7 @@ int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu)
 	/* Since the guest can directly access the timebase, it must know the
 	 * real timebase frequency. Accordingly, it must see the state of
 	 * CCR1[TCS]. */
+	/* XXX CCR1 doesn't exist on all 440 SoCs. */
 	vcpu->arch.ccr1 = mfspr(SPRN_CCR1);
 
 	for (i = 0; i < ARRAY_SIZE(vcpu_44x->shadow_refs); i++)
-- 
1.6.0.2


* Re: [PATCH 21/26] KVM: PPC: Force enable nap on KVM
  2010-08-17 13:57 ` [PATCH 21/26] KVM: PPC: Force enable nap on KVM Alexander Graf
@ 2010-08-17 18:28   ` Scott Wood
  2010-08-17 20:07     ` Alexander Graf
  0 siblings, 1 reply; 33+ messages in thread
From: Scott Wood @ 2010-08-17 18:28 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, linuxppc-dev, KVM list

On Tue, 17 Aug 2010 15:57:56 +0200
Alexander Graf <agraf@suse.de> wrote:

> There are some heuristics in the PPC power management code that try to find
> out if the particular hardware we're running on supports proper power management
> or just hangs the machine when going into nap mode.
> 
> Since we know that KVM is safe with nap, let's force enable it in the PV code
> once we're certain that we are on a KVM VM.

Could this cause the cache to be flushed unnecessarily on e500?  Where
available, doze would probably be better for signalling the hypervisor
that the guest is idle.

> 
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kernel/kvm.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
> index 517da39..95aed6b 100644
> --- a/arch/powerpc/kernel/kvm.c
> +++ b/arch/powerpc/kernel/kvm.c
> @@ -583,6 +583,9 @@ static int __init kvm_guest_init(void)
>  	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
>  		kvm_use_magic_page();
>  
> +	/* Enable napping */
> +	powersave_nap = 1;
> +
>  free_tmp:
>  	kvm_free_tmp();
>  




* Re: [PATCH 21/26] KVM: PPC: Force enable nap on KVM
  2010-08-17 18:28   ` Scott Wood
@ 2010-08-17 20:07     ` Alexander Graf
  0 siblings, 0 replies; 33+ messages in thread
From: Alexander Graf @ 2010-08-17 20:07 UTC (permalink / raw)
  To: Scott Wood; +Cc: kvm-ppc, linuxppc-dev, KVM list


On 17.08.2010, at 20:28, Scott Wood wrote:

> On Tue, 17 Aug 2010 15:57:56 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> There are some heuristics in the PPC power management code that try to find
>> out if the particular hardware we're running on supports proper power management
>> or just hangs the machine when going into nap mode.
>> 
>> Since we know that KVM is safe with nap, let's force enable it in the PV code
>> once we're certain that we are on a KVM VM.
> 
> Could this cause the cache to be flushed unnecessarily on e500?  Where
> available, doze would probably be better for signalling the hypervisor
> that the guest is idle.

You're right - this should be #ifdef'ed on Book3s. I'll add that as a patch on top, as Qemu on BookE doesn't expose KVM capabilities yet.
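
Roughly, that follow-up would look like this (a sketch; the actual patch
may differ):

	#ifdef CONFIG_PPC_BOOK3S
		/* Nap is known-safe under KVM on Book3S only. */
		powersave_nap = 1;
	#endif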


Alex



* Re: [PATCH 13/26] KVM: PPC: Add feature bitmap for magic page
  2010-08-17 13:57   ` [PATCH 13/26] KVM: PPC: Add feature bitmap for magic page Alexander Graf
@ 2010-08-22 16:42     ` Avi Kivity
       [not found]       ` <4C715382.6050809-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 33+ messages in thread
From: Avi Kivity @ 2010-08-22 16:42 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, linuxppc-dev, KVM list

  On 08/17/2010 04:57 PM, Alexander Graf wrote:
> We will soon add SR PV support to the shared page, so we need some
> infrastructure that allows the guest to query for features KVM exports.
>
> This patch adds a second return value to the magic mapping that
> indicated to the guest which features are available.
>

You need to make that feature controllable from userspace, to allow 
new->old save/restore.

-- 
error compiling committee.c: too many arguments to function



* Re: [PATCH 00/26] KVM: PPC: Mid-August patch queue
       [not found] ` <1282053481-18787-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (15 preceding siblings ...)
  2010-08-17 13:58   ` [PATCH 26/26] KVM: PPC: allow ppc440gp to pass the compatibility check Alexander Graf
@ 2010-08-22 16:46   ` Avi Kivity
  16 siblings, 0 replies; 33+ messages in thread
From: Avi Kivity @ 2010-08-22 16:46 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, linuxppc-dev, KVM list

  On 08/17/2010 04:57 PM, Alexander Graf wrote:
> Howdy,
>
> This is my local patch queue with stuff that has accumulated over the last
> weeks on KVM for PPC with some last minute fixes, speedups and debugging help
> that I needed for the KVM Forum ;-).
>
> The highlights of this set are:
>
>    - Converted most important debug points to tracepoints
>    - Flush less PTEs (speedup)
>    - Go back to our own hash (less duplicates)
>    - Make SRs guest settable (speedup for 32 bit guests)
>    - Remove r30/r31 restrictions from PV hooks (speedup!)
>    - Fix random breakages
>    - Fix random guest stalls
>    - 440GP host support (Thanks Hollis!)
>
> Keep in mind that this is the first version that is stable on PPC32 hosts.
> All versions prior to this could occupy otherwise used segment entries and
> thus crash your machine :-).
>
> After finally meeting Avi again, we also agreed to give pulls a try. So
> here we go - this is my tree online:
>
>   arch/powerpc/include/asm/kvm_book3s.h |   25 ++--
>   arch/powerpc/include/asm/kvm_para.h   |    3 +
>   arch/powerpc/kernel/asm-offsets.c     |    1 +
>   arch/powerpc/kernel/kvm.c             |  144 ++++++++++++++++++---
>   arch/powerpc/kernel/kvm_emul.S        |   75 +++++++++--
>   arch/powerpc/kvm/44x.c                |    3 +-
>   arch/powerpc/kvm/44x_tlb.c            |    1 +
>   arch/powerpc/kvm/book3s.c             |   54 ++++----
>   arch/powerpc/kvm/book3s_32_mmu.c      |   83 +++++++------
>   arch/powerpc/kvm/book3s_32_mmu_host.c |   67 ++++++----
>   arch/powerpc/kvm/book3s_64_mmu_host.c |   59 +++------
>   arch/powerpc/kvm/book3s_emulate.c     |   48 +++-----
>   arch/powerpc/kvm/book3s_mmu_hpte.c    |   38 ++----
>   arch/powerpc/kvm/booke.c              |    8 +-
>   arch/powerpc/kvm/powerpc.c            |    5 +-
>   arch/powerpc/kvm/trace.h              |  230 +++++++++++++++++++++++++++++++++
>   16 files changed, 614 insertions(+), 230 deletions(-)

   Documentation/kvm/ppc-pv.txt +++++++++++++++++++++++++++

?

-- 
error compiling committee.c: too many arguments to function


* Re: [PATCH 13/26] KVM: PPC: Add feature bitmap for magic page
       [not found]       ` <4C715382.6050809-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-08-31  0:56         ` Alexander Graf
  2010-08-31  6:28           ` Avi Kivity
  0 siblings, 1 reply; 33+ messages in thread
From: Alexander Graf @ 2010-08-31  0:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, linuxppc-dev, KVM list


On 22.08.2010, at 18:42, Avi Kivity wrote:

> On 08/17/2010 04:57 PM, Alexander Graf wrote:
>> We will soon add SR PV support to the shared page, so we need some
>> infrastructure that allows the guest to query for features KVM exports.
>> 
>> This patch adds a second return value to the magic mapping that
>> indicated to the guest which features are available.
>> 
> 
> You need to make that feature controllable from userspace, to allow new->old save/restore.

Good one :). We're still missing too much stuff to even run without losing interrupts yet and you're thinking about new->old save/restore. Who'd want to migrate onto a system that's broken anyways? Besides - we're missing too many register values from the kernel side to even be able to perform a migration.

I'm planning to add migration, probably after SMP. But that will be another CAP and anything before that won't be able to save/restore.


Alex


* Re: [PATCH 13/26] KVM: PPC: Add feature bitmap for magic page
  2010-08-31  0:56         ` Alexander Graf
@ 2010-08-31  6:28           ` Avi Kivity
  0 siblings, 0 replies; 33+ messages in thread
From: Avi Kivity @ 2010-08-31  6:28 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, linuxppc-dev, KVM list

  On 08/31/2010 03:56 AM, Alexander Graf wrote:
> On 22.08.2010, at 18:42, Avi Kivity wrote:
>
>> On 08/17/2010 04:57 PM, Alexander Graf wrote:
>>> We will soon add SR PV support to the shared page, so we need some
>>> infrastructure that allows the guest to query for features KVM exports.
>>>
>>> This patch adds a second return value to the magic mapping that
>>> indicated to the guest which features are available.
>>>
>> You need to make that feature controllable from userspace, to allow new->old save/restore.
> Good one :). We're still missing too much stuff to even run without losing interrupts yet and you're thinking about new->old save/restore. Who'd want to migrate onto a system that's broken anyways? Besides - we're missing too many register values from the kernel side to even be able to perform a migration.
>
> I'm planning to add migration, probably after SMP. But that will be another CAP and anything before that won't be able to save/restore.

I'm thinking stability and basic functionality, and you're running
around adding features (or new archs, depending on mood).  But I agree,
this can wait until after SMP.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

