kvm.vger.kernel.org archive mirror
* [PULL 00/35] KVM: PPC: End-August patch queue
@ 2010-08-31  2:31 Alexander Graf
  2010-08-31  2:31 ` [PATCH 01/35] KVM: PPC: Move EXIT_DEBUG partially to tracepoints Alexander Graf
                   ` (22 more replies)
  0 siblings, 23 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

Howdy,

This is my local patch queue with the KVM for PPC work that has accumulated
over the last few weeks, plus some last-minute fixes, speedups and debugging
help that I needed for the KVM Forum ;-).

The highlights of this set are:

  - Converted the most important debug points to tracepoints
  - Flush fewer PTEs (speedup)
  - Go back to our own hash (fewer duplicates)
  - Make SRs guest-settable (speedup for 32-bit guests)
  - Remove r30/r31 restrictions from PV hooks (speedup!)
  - Fix random breakages
  - Fix random guest stalls
  - 440GP host support (Thanks Hollis!)
  - Reliable interrupt injection

Keep in mind that this is the first version that is stable on PPC32 hosts.
All versions prior to this one could occupy segment entries that were already
in use and thus crash your machine :-).

It is also the first version that is stable with PPC64 guests, because they
need more sophisticated interrupt injection logic, for which matching qemu
patches are required.

Please pull this tree from:

    git://github.com/agraf/linux-2.6.git kvm-ppc-next

Have fun with more accurate, faster and less buggy KVM on PowerPC!

Alexander Graf (31):
  KVM: PPC: Move EXIT_DEBUG partially to tracepoints
  KVM: PPC: Move book3s_64 mmu map debug print to trace point
  KVM: PPC: Add tracepoint for generic mmu map
  KVM: PPC: Move pte invalidate debug code to tracepoint
  KVM: PPC: Fix sid map search after flush
  KVM: PPC: Add tracepoints for generic spte flushes
  KVM: PPC: Preload magic page when in kernel mode
  KVM: PPC: Don't flush PTEs on NX/RO hit
  KVM: PPC: Make invalidation code more reliable
  KVM: PPC: Move slb debugging to tracepoints
  KVM: PPC: Revert "KVM: PPC: Use kernel hash function"
  KVM: PPC: Remove unused define
  KVM: PPC: Add feature bitmap for magic page
  KVM: PPC: Move BAT handling code into spr handler
  KVM: PPC: Interpret SR registers on demand
  KVM: PPC: Put segment registers in shared page
  KVM: PPC: Add mtsrin PV code
  KVM: PPC: Make PV mtmsr work with r30 and r31
  KVM: PPC: Update int_pending also on dequeue
  KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31
  KVM: PPC: Force enable nap on KVM
  KVM: PPC: Implement correct SID mapping on Book3s_32
  KVM: PPC: Don't put MSR_POW in MSR
  KVM: PPC: Enable napping only for Book3s_64
  KVM: PPC: Implement Level interrupts on Book3S
  KVM: PPC: Fix CONFIG_KVM_GUEST && !CONFIG_KVM case
  KVM: PPC: Expose level based interrupt cap
  KVM: PPC: Implement level interrupts for BookE
  KVM: PPC: Document KVM_INTERRUPT ioctl
  KVM: PPC: Fix compile error in e500_tlb.c
  KVM: PPC: Add documentation for magic page enhancements

Hollis Blanchard (3):
  KVM: PPC: initialize IVORs in addition to IVPR
  KVM: PPC: fix compilation of "dump tlbs" debug function
  KVM: PPC: allow ppc440gp to pass the compatibility check

Kyle Moffett (1):
  KVM: PPC: e500_tlb: Fix a minor copy-paste tracing bug

 Documentation/kvm/api.txt             |   33 +++++-
 Documentation/kvm/ppc-pv.txt          |   17 +++
 arch/powerpc/include/asm/kvm.h        |    1 +
 arch/powerpc/include/asm/kvm_asm.h    |    4 +-
 arch/powerpc/include/asm/kvm_book3s.h |   25 ++--
 arch/powerpc/include/asm/kvm_para.h   |    3 +
 arch/powerpc/kernel/asm-offsets.c     |    7 +-
 arch/powerpc/kernel/kvm.c             |  147 ++++++++++++++++++---
 arch/powerpc/kernel/kvm_emul.S        |   75 +++++++++--
 arch/powerpc/kvm/44x.c                |    3 +-
 arch/powerpc/kvm/44x_tlb.c            |    1 +
 arch/powerpc/kvm/book3s.c             |   84 +++++++-----
 arch/powerpc/kvm/book3s_32_mmu.c      |   83 ++++++-----
 arch/powerpc/kvm/book3s_32_mmu_host.c |   67 ++++++----
 arch/powerpc/kvm/book3s_64_mmu_host.c |   59 +++-----
 arch/powerpc/kvm/book3s_emulate.c     |   48 +++-----
 arch/powerpc/kvm/book3s_mmu_hpte.c    |   38 ++---
 arch/powerpc/kvm/booke.c              |   25 +++-
 arch/powerpc/kvm/booke.h              |    4 +-
 arch/powerpc/kvm/e500_tlb.c           |    9 +-
 arch/powerpc/kvm/powerpc.c            |    6 +-
 arch/powerpc/kvm/trace.h              |  239 +++++++++++++++++++++++++++++++++
 include/linux/kvm.h                   |    1 +
 23 files changed, 730 insertions(+), 249 deletions(-)


* [PATCH 01/35] KVM: PPC: Move EXIT_DEBUG partially to tracepoints
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 02/35] KVM: PPC: Move book3s_64 mmu map debug print to trace point Alexander Graf
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

We have a debug printk on every exit that is usually #ifdef'ed out. Using
tracepoints makes a lot more sense here though, as they can be dynamically
enabled.

This patch converts the most commonly used debug printks of EXIT_DEBUG to
tracepoints.
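
[Editor's note: for readers who have not worked with tracepoints, a rough
sketch of the difference -- illustrative only, not code from this patch;
the debugfs path assumes the usual tracing mount point.]

    /* Before: the diagnostic only exists when EXIT_DEBUG is defined at
     * build time, so turning it on means recompiling. */
    #ifdef EXIT_DEBUG
    	printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx\n",
    	       exit_nr, kvmppc_get_pc(vcpu));
    #endif

    /* After: the call is always compiled in but records nothing until the
     * event is switched on at runtime, e.g. by writing 1 to
     * /sys/kernel/debug/tracing/events/kvm/kvm_book3s_exit/enable. */
    	trace_kvm_book3s_exit(exit_nr, vcpu);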

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   26 +++-------------------
 arch/powerpc/kvm/trace.h  |   51 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 7656b6d..37db61d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -17,6 +17,7 @@
 #include <linux/kvm_host.h>
 #include <linux/err.h>
 #include <linux/slab.h>
+#include "trace.h"
 
 #include <asm/reg.h>
 #include <asm/cputable.h>
@@ -35,7 +36,6 @@
 #define VCPU_STAT(x) offsetof(struct kvm_vcpu, stat.x), KVM_STAT_VCPU
 
 /* #define EXIT_DEBUG */
-/* #define EXIT_DEBUG_SIMPLE */
 /* #define DEBUG_EXT */
 
 static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
@@ -105,14 +105,6 @@ void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 	kvmppc_giveup_ext(vcpu, MSR_VSX);
 }
 
-#if defined(EXIT_DEBUG)
-static u32 kvmppc_get_dec(struct kvm_vcpu *vcpu)
-{
-	u64 jd = mftb() - vcpu->arch.dec_jiffies;
-	return vcpu->arch.dec - jd;
-}
-#endif
-
 static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
 {
 	ulong smsr = vcpu->arch.shared->msr;
@@ -850,16 +842,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	run->ready_for_interrupt_injection = 1;
-#ifdef EXIT_DEBUG
-	printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | dec=0x%x | msr=0x%lx\n",
-		exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
-		kvmppc_get_dec(vcpu), to_svcpu(vcpu)->shadow_srr1);
-#elif defined (EXIT_DEBUG_SIMPLE)
-	if ((exit_nr != 0x900) && (exit_nr != 0x500))
-		printk(KERN_EMERG "exit_nr=0x%x | pc=0x%lx | dar=0x%lx | msr=0x%lx\n",
-			exit_nr, kvmppc_get_pc(vcpu), kvmppc_get_fault_dar(vcpu),
-			vcpu->arch.shared->msr);
-#endif
+
+	trace_kvm_book3s_exit(exit_nr, vcpu);
 	kvm_resched(vcpu);
 	switch (exit_nr) {
 	case BOOK3S_INTERRUPT_INST_STORAGE:
@@ -1091,9 +1075,7 @@ program_interrupt:
 		}
 	}
 
-#ifdef EXIT_DEBUG
-	printk(KERN_EMERG "KVM exit: vcpu=0x%p pc=0x%lx r=0x%x\n", vcpu, kvmppc_get_pc(vcpu), r);
-#endif
+	trace_kvm_book3s_reenter(r, vcpu);
 
 	return r;
 }
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index a8e8400..b5e9d81 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -98,6 +98,57 @@ TRACE_EVENT(kvm_gtlb_write,
 		__entry->word1, __entry->word2)
 );
 
+
+/*************************************************************************
+ *                         Book3S trace points                           *
+ *************************************************************************/
+
+#ifdef CONFIG_PPC_BOOK3S
+
+TRACE_EVENT(kvm_book3s_exit,
+	TP_PROTO(unsigned int exit_nr, struct kvm_vcpu *vcpu),
+	TP_ARGS(exit_nr, vcpu),
+
+	TP_STRUCT__entry(
+		__field(	unsigned int,	exit_nr		)
+		__field(	unsigned long,	pc		)
+		__field(	unsigned long,	msr		)
+		__field(	unsigned long,	dar		)
+		__field(	unsigned long,	srr1		)
+	),
+
+	TP_fast_assign(
+		__entry->exit_nr	= exit_nr;
+		__entry->pc		= kvmppc_get_pc(vcpu);
+		__entry->dar		= kvmppc_get_fault_dar(vcpu);
+		__entry->msr		= vcpu->arch.shared->msr;
+		__entry->srr1		= to_svcpu(vcpu)->shadow_srr1;
+	),
+
+	TP_printk("exit=0x%x | pc=0x%lx | msr=0x%lx | dar=0x%lx | srr1=0x%lx",
+		  __entry->exit_nr, __entry->pc, __entry->msr, __entry->dar,
+		  __entry->srr1)
+);
+
+TRACE_EVENT(kvm_book3s_reenter,
+	TP_PROTO(int r, struct kvm_vcpu *vcpu),
+	TP_ARGS(r, vcpu),
+
+	TP_STRUCT__entry(
+		__field(	unsigned int,	r		)
+		__field(	unsigned long,	pc		)
+	),
+
+	TP_fast_assign(
+		__entry->r		= r;
+		__entry->pc		= kvmppc_get_pc(vcpu);
+	),
+
+	TP_printk("reentry r=%d | pc=0x%lx", __entry->r, __entry->pc)
+);
+
+#endif /* CONFIG_PPC_BOOK3S */
+
 #endif /* _TRACE_KVM_H */
 
 /* This part must be outside protection */
-- 
1.6.0.2



* [PATCH 02/35] KVM: PPC: Move book3s_64 mmu map debug print to trace point
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
  2010-08-31  2:31 ` [PATCH 01/35] KVM: PPC: Move EXIT_DEBUG partially to tracepoints Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 07/35] KVM: PPC: Preload magic page when in kernel mode Alexander Graf
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

This patch moves Book3s MMU debugging over to tracepoints.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   13 +----------
 arch/powerpc/kvm/trace.h              |   34 +++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 672b149..aa516ad 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -28,19 +28,13 @@
 #include <asm/machdep.h>
 #include <asm/mmu_context.h>
 #include <asm/hw_irq.h>
+#include "trace.h"
 
 #define PTE_SIZE 12
 #define VSID_ALL 0
 
-/* #define DEBUG_MMU */
 /* #define DEBUG_SLB */
 
-#ifdef DEBUG_MMU
-#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_mmu(a, ...) do { } while(0)
-#endif
-
 #ifdef DEBUG_SLB
 #define dprintk_slb(a, ...) printk(KERN_INFO a, __VA_ARGS__)
 #else
@@ -156,10 +150,7 @@ map_again:
 	} else {
 		struct hpte_cache *pte = kvmppc_mmu_hpte_cache_next(vcpu);
 
-		dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
-			    ((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
-			    (rflags & HPTE_R_N) ? '-' : 'x',
-			    orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr);
+		trace_kvm_book3s_64_mmu_map(rflags, hpteg, va, hpaddr, orig_pte);
 
 		/* The ppc_md code may give us a secondary entry even though we
 		   asked for a primary. Fix up. */
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index b5e9d81..8ed6f1c 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -147,6 +147,40 @@ TRACE_EVENT(kvm_book3s_reenter,
 	TP_printk("reentry r=%d | pc=0x%lx", __entry->r, __entry->pc)
 );
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+TRACE_EVENT(kvm_book3s_64_mmu_map,
+	TP_PROTO(int rflags, ulong hpteg, ulong va, pfn_t hpaddr,
+		 struct kvmppc_pte *orig_pte),
+	TP_ARGS(rflags, hpteg, va, hpaddr, orig_pte),
+
+	TP_STRUCT__entry(
+		__field(	unsigned char,		flag_w		)
+		__field(	unsigned char,		flag_x		)
+		__field(	unsigned long,		eaddr		)
+		__field(	unsigned long,		hpteg		)
+		__field(	unsigned long,		va		)
+		__field(	unsigned long long,	vpage		)
+		__field(	unsigned long,		hpaddr		)
+	),
+
+	TP_fast_assign(
+		__entry->flag_w	= ((rflags & HPTE_R_PP) == 3) ? '-' : 'w';
+		__entry->flag_x	= (rflags & HPTE_R_N) ? '-' : 'x';
+		__entry->eaddr	= orig_pte->eaddr;
+		__entry->hpteg	= hpteg;
+		__entry->va	= va;
+		__entry->vpage	= orig_pte->vpage;
+		__entry->hpaddr	= hpaddr;
+	),
+
+	TP_printk("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx",
+		  __entry->flag_w, __entry->flag_x, __entry->eaddr,
+		  __entry->hpteg, __entry->va, __entry->vpage, __entry->hpaddr)
+);
+
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 #endif /* CONFIG_PPC_BOOK3S */
 
 #endif /* _TRACE_KVM_H */
-- 
1.6.0.2



* [PATCH 03/35] KVM: PPC: Add tracepoint for generic mmu map
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 04/35] KVM: PPC: Move pte invalidate debug code to tracepoint Alexander Graf
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

This patch moves the generic mmu map debugging over to tracepoints.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |    3 +++
 arch/powerpc/kvm/trace.h           |   29 +++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 02c64ab..ac94bd9 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -21,6 +21,7 @@
 #include <linux/kvm_host.h>
 #include <linux/hash.h>
 #include <linux/slab.h>
+#include "trace.h"
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -66,6 +67,8 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	u64 index;
 
+	trace_kvm_book3s_mmu_map(pte);
+
 	spin_lock(&vcpu->arch.mmu_lock);
 
 	/* Add to ePTE list */
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 8ed6f1c..68a8444 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -181,6 +181,35 @@ TRACE_EVENT(kvm_book3s_64_mmu_map,
 
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
+TRACE_EVENT(kvm_book3s_mmu_map,
+	TP_PROTO(struct hpte_cache *pte),
+	TP_ARGS(pte),
+
+	TP_STRUCT__entry(
+		__field(	u64,		host_va		)
+		__field(	u64,		pfn		)
+		__field(	ulong,		eaddr		)
+		__field(	u64,		vpage		)
+		__field(	ulong,		raddr		)
+		__field(	int,		flags		)
+	),
+
+	TP_fast_assign(
+		__entry->host_va	= pte->host_va;
+		__entry->pfn		= pte->pfn;
+		__entry->eaddr		= pte->pte.eaddr;
+		__entry->vpage		= pte->pte.vpage;
+		__entry->raddr		= pte->pte.raddr;
+		__entry->flags		= (pte->pte.may_read ? 0x4 : 0) |
+					  (pte->pte.may_write ? 0x2 : 0) |
+					  (pte->pte.may_execute ? 0x1 : 0);
+	),
+
+	TP_printk("Map: hva=%llx pfn=%llx ea=%lx vp=%llx ra=%lx [%x]",
+		  __entry->host_va, __entry->pfn, __entry->eaddr,
+		  __entry->vpage, __entry->raddr, __entry->flags)
+);
+
 #endif /* CONFIG_PPC_BOOK3S */
 
 #endif /* _TRACE_KVM_H */
-- 
1.6.0.2


* [PATCH 04/35] KVM: PPC: Move pte invalidate debug code to tracepoint
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
  2010-08-31  2:31   ` [PATCH 03/35] KVM: PPC: Add tracepoint for generic mmu map Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 05/35] KVM: PPC: Fix sid map search after flush Alexander Graf
                     ` (11 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

This patch moves the SPTE flush debug printk over to tracepoints.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |    3 +--
 arch/powerpc/kvm/trace.h           |   29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index ac94bd9..3397152 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -104,8 +104,7 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 	if (hlist_unhashed(&pte->list_pte))
 		return;
 
-	dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
-		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
+	trace_kvm_book3s_mmu_invalidate(pte);
 
 	/* Different for 32 and 64 bit */
 	kvmppc_mmu_invalidate_pte(vcpu, pte);
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 68a8444..06ad93e 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -210,6 +210,35 @@ TRACE_EVENT(kvm_book3s_mmu_map,
 		  __entry->vpage, __entry->raddr, __entry->flags)
 );
 
+TRACE_EVENT(kvm_book3s_mmu_invalidate,
+	TP_PROTO(struct hpte_cache *pte),
+	TP_ARGS(pte),
+
+	TP_STRUCT__entry(
+		__field(	u64,		host_va		)
+		__field(	u64,		pfn		)
+		__field(	ulong,		eaddr		)
+		__field(	u64,		vpage		)
+		__field(	ulong,		raddr		)
+		__field(	int,		flags		)
+	),
+
+	TP_fast_assign(
+		__entry->host_va	= pte->host_va;
+		__entry->pfn		= pte->pfn;
+		__entry->eaddr		= pte->pte.eaddr;
+		__entry->vpage		= pte->pte.vpage;
+		__entry->raddr		= pte->pte.raddr;
+		__entry->flags		= (pte->pte.may_read ? 0x4 : 0) |
+					  (pte->pte.may_write ? 0x2 : 0) |
+					  (pte->pte.may_execute ? 0x1 : 0);
+	),
+
+	TP_printk("Flush: hva=%llx pfn=%llx ea=%lx vp=%llx ra=%lx [%x]",
+		  __entry->host_va, __entry->pfn, __entry->eaddr,
+		  __entry->vpage, __entry->raddr, __entry->flags)
+);
+
 #endif /* CONFIG_PPC_BOOK3S */
 
 #endif /* _TRACE_KVM_H */
-- 
1.6.0.2


* [PATCH 05/35] KVM: PPC: Fix sid map search after flush
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
  2010-08-31  2:31   ` [PATCH 03/35] KVM: PPC: Add tracepoint for generic mmu map Alexander Graf
  2010-08-31  2:31   ` [PATCH 04/35] KVM: PPC: Move pte invalidate debug code to tracepoint Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 06/35] KVM: PPC: Add tracepoints for generic spte flushes Alexander Graf
                     ` (10 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

After a flush the sid map contained lots of entries with 0 for their gvsid and
hvsid values. Unfortunately, 0 can be a real value the guest searches for when
looking up a vsid, so it would incorrectly find the host's 0 hvsid mapping,
which doesn't belong to our sid space.

So let's also check the valid bit, which indicates that the sid map entry
we're looking at actually contains useful data.
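
[Editor's note: an illustration of the failure mode, reconstructed from the
description above -- the exact field values shown are an assumption.]

    /* After a flush, a cleared map entry looks roughly like:
     *   { .guest_vsid = 0, .host_vsid = 0, .valid = false }
     * Without the map->valid check, a guest that legitimately uses
     * gvsid 0 matches such an entry and is handed host_vsid 0, which
     * lies outside our sid space. */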

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index aa516ad..ebb1b5d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -65,14 +65,14 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 
 	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
 	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
-	if (map->guest_vsid == gvsid) {
+	if (map->valid && (map->guest_vsid == gvsid)) {
 		dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
 			    gvsid, map->host_vsid);
 		return map;
 	}
 
 	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
-	if (map->guest_vsid == gvsid) {
+	if (map->valid && (map->guest_vsid == gvsid)) {
 		dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
 			    gvsid, map->host_vsid);
 		return map;
-- 
1.6.0.2


* [PATCH 06/35] KVM: PPC: Add tracepoints for generic spte flushes
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (2 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 05/35] KVM: PPC: Fix sid map search after flush Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 09/35] KVM: PPC: Make invalidation code more reliable Alexander Graf
                     ` (9 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

The different ways of flushing shadow ptes have their own debug prints which
use stupid old printk.

Let's move them to tracepoints, making them more easily available, faster and
possible to activate on demand.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |   18 +++---------------
 arch/powerpc/kvm/trace.h           |   23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 3397152..bd6a767 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -31,14 +31,6 @@
 
 #define PTE_SIZE	12
 
-/* #define DEBUG_MMU */
-
-#ifdef DEBUG_MMU
-#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_mmu(a, ...) do { } while(0)
-#endif
-
 static struct kmem_cache *hpte_cache;
 
 static inline u64 kvmppc_mmu_hash_pte(u64 eaddr)
@@ -186,9 +178,7 @@ static void kvmppc_mmu_pte_flush_long(struct kvm_vcpu *vcpu, ulong guest_ea)
 
 void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
 {
-	dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
-		    vcpu->arch.hpte_cache_count, guest_ea, ea_mask);
-
+	trace_kvm_book3s_mmu_flush("", vcpu, guest_ea, ea_mask);
 	guest_ea &= ea_mask;
 
 	switch (ea_mask) {
@@ -251,8 +241,7 @@ static void kvmppc_mmu_pte_vflush_long(struct kvm_vcpu *vcpu, u64 guest_vp)
 
 void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
 {
-	dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
-		    vcpu->arch.hpte_cache_count, guest_vp, vp_mask);
+	trace_kvm_book3s_mmu_flush("v", vcpu, guest_vp, vp_mask);
 	guest_vp &= vp_mask;
 
 	switch(vp_mask) {
@@ -274,8 +263,7 @@ void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
 	struct hpte_cache *pte;
 	int i;
 
-	dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx - 0x%lx\n",
-		    vcpu->arch.hpte_cache_count, pa_start, pa_end);
+	trace_kvm_book3s_mmu_flush("p", vcpu, pa_start, pa_end);
 
 	rcu_read_lock();
 
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 06ad93e..23f757a 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -239,6 +239,29 @@ TRACE_EVENT(kvm_book3s_mmu_invalidate,
 		  __entry->vpage, __entry->raddr, __entry->flags)
 );
 
+TRACE_EVENT(kvm_book3s_mmu_flush,
+	TP_PROTO(const char *type, struct kvm_vcpu *vcpu, unsigned long long p1,
+		 unsigned long long p2),
+	TP_ARGS(type, vcpu, p1, p2),
+
+	TP_STRUCT__entry(
+		__field(	int,			count		)
+		__field(	unsigned long long,	p1		)
+		__field(	unsigned long long,	p2		)
+		__field(	const char *,		type		)
+	),
+
+	TP_fast_assign(
+		__entry->count		= vcpu->arch.hpte_cache_count;
+		__entry->p1		= p1;
+		__entry->p2		= p2;
+		__entry->type		= type;
+	),
+
+	TP_printk("Flush %d %sPTEs: %llx - %llx",
+		  __entry->count, __entry->type, __entry->p1, __entry->p2)
+);
+
 #endif /* CONFIG_PPC_BOOK3S */
 
 #endif /* _TRACE_KVM_H */
-- 
1.6.0.2


* [PATCH 07/35] KVM: PPC: Preload magic page when in kernel mode
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
  2010-08-31  2:31 ` [PATCH 01/35] KVM: PPC: Move EXIT_DEBUG partially to tracepoints Alexander Graf
  2010-08-31  2:31 ` [PATCH 02/35] KVM: PPC: Move book3s_64 mmu map debug print to trace point Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 08/35] KVM: PPC: Don't flush PTEs on NX/RO hit Alexander Graf
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

When the guest jumps into kernel mode and has the magic page mapped, there's a
very high chance that it will also use it. So let's detect that scenario and
map the segment accordingly.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 37db61d..54ca578 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -145,6 +145,16 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 		   (old_msr & (MSR_PR|MSR_IR|MSR_DR))) {
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+
+		/* Preload magic page segment when in kernel mode */
+		if (!(msr & MSR_PR) && vcpu->arch.magic_page_pa) {
+			struct kvm_vcpu_arch *a = &vcpu->arch;
+
+			if (msr & MSR_DR)
+				kvmppc_mmu_map_segment(vcpu, a->magic_page_ea);
+			else
+				kvmppc_mmu_map_segment(vcpu, a->magic_page_pa);
+		}
 	}
 
 	/* Preload FPU if it's enabled */
-- 
1.6.0.2



* [PATCH 08/35] KVM: PPC: Don't flush PTEs on NX/RO hit
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (2 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 07/35] KVM: PPC: Preload magic page when in kernel mode Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 10/35] KVM: PPC: Move slb debugging to tracepoints Alexander Graf
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

When hitting a no-execute or read-only data/inst storage interrupt we were
flushing the respective PTE to make sure it gets properly overwritten the next
time around.

According to the spec, this is unnecessary though. The guest issues a tlbie
anyway, so we're safe to just keep the PTE around and let the guest remove it
explicitly, saving us a flush.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 54ca578..2fb528f 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -887,7 +887,6 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			vcpu->arch.shared->msr |=
 				to_svcpu(vcpu)->shadow_srr1 & 0x58000000;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, kvmppc_get_pc(vcpu), ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
@@ -913,7 +912,6 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			vcpu->arch.shared->dar = dar;
 			vcpu->arch.shared->dsisr = to_svcpu(vcpu)->fault_dsisr;
 			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-			kvmppc_mmu_pte_flush(vcpu, dar, ~0xFFFUL);
 			r = RESUME_GUEST;
 		}
 		break;
-- 
1.6.0.2


* [PATCH 09/35] KVM: PPC: Make invalidation code more reliable
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (3 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 06/35] KVM: PPC: Add tracepoints for generic spte flushes Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 11/35] KVM: PPC: Revert "KVM: PPC: Use kernel hash function" Alexander Graf
                     ` (8 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

There is a race condition in the pte invalidation code path where we can't be
sure whether a pte has already been invalidated. So let's move the spin lock
around so that the check and the list removal happen atomically, getting rid
of the race.
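
[Editor's note: an illustrative interleaving of the race being closed,
reconstructed from the diff below -- not part of the original mail.]

    /*
     * CPU A: invalidate_pte(pte)         CPU B: invalidate_pte(pte)
     *   hlist_unhashed()? -> no            hlist_unhashed()? -> no
     *   spin_lock(mmu_lock)
     *   hlist_del_init_rcu(...)
     *   spin_unlock(mmu_lock)
     *                                      spin_lock(mmu_lock)
     *                                      hlist_del_init_rcu(...)
     *                                      ... second pfn release
     *
     * With the check moved under mmu_lock, whichever CPU loses the race
     * sees the pte already unhashed and returns without touching it.
     */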

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_mmu_hpte.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index bd6a767..79751d8 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -92,10 +92,6 @@ static void free_pte_rcu(struct rcu_head *head)
 
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
-	/* pte already invalidated? */
-	if (hlist_unhashed(&pte->list_pte))
-		return;
-
 	trace_kvm_book3s_mmu_invalidate(pte);
 
 	/* Different for 32 and 64 bit */
@@ -103,18 +99,24 @@ static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 
 	spin_lock(&vcpu->arch.mmu_lock);
 
+	/* pte already invalidated in between? */
+	if (hlist_unhashed(&pte->list_pte)) {
+		spin_unlock(&vcpu->arch.mmu_lock);
+		return;
+	}
+
 	hlist_del_init_rcu(&pte->list_pte);
 	hlist_del_init_rcu(&pte->list_pte_long);
 	hlist_del_init_rcu(&pte->list_vpte);
 	hlist_del_init_rcu(&pte->list_vpte_long);
 
-	spin_unlock(&vcpu->arch.mmu_lock);
-
 	if (pte->pte.may_write)
 		kvm_release_pfn_dirty(pte->pfn);
 	else
 		kvm_release_pfn_clean(pte->pfn);
 
+	spin_unlock(&vcpu->arch.mmu_lock);
+
 	vcpu->arch.hpte_cache_count--;
 	call_rcu(&pte->rcu_head, free_pte_rcu);
 }
-- 
1.6.0.2


* [PATCH 10/35] KVM: PPC: Move slb debugging to tracepoints
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (3 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 08/35] KVM: PPC: Don't flush PTEs on NX/RO hit Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

This patch moves the shadow SLB debugging printks over to tracepoints.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |   22 ++--------
 arch/powerpc/kvm/trace.h              |   73 +++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index ebb1b5d..321c931 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -33,14 +33,6 @@
 #define PTE_SIZE 12
 #define VSID_ALL 0
 
-/* #define DEBUG_SLB */
-
-#ifdef DEBUG_SLB
-#define dprintk_slb(a, ...) printk(KERN_INFO a, __VA_ARGS__)
-#else
-#define dprintk_slb(a, ...) do { } while(0)
-#endif
-
 void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	ppc_md.hpte_invalidate(pte->slot, pte->host_va,
@@ -66,20 +58,17 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 	sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
 	map = &to_book3s(vcpu)->sid_map[sid_map_mask];
 	if (map->valid && (map->guest_vsid == gvsid)) {
-		dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
-			    gvsid, map->host_vsid);
+		trace_kvm_book3s_slb_found(gvsid, map->host_vsid);
 		return map;
 	}
 
 	map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
 	if (map->valid && (map->guest_vsid == gvsid)) {
-		dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
-			    gvsid, map->host_vsid);
+		trace_kvm_book3s_slb_found(gvsid, map->host_vsid);
 		return map;
 	}
 
-	dprintk_slb("SLB: Searching %d/%d: 0x%llx -> not found\n",
-		    sid_map_mask, SID_MAP_MASK - sid_map_mask, gvsid);
+	trace_kvm_book3s_slb_fail(sid_map_mask, gvsid);
 	return NULL;
 }
 
@@ -205,8 +194,7 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
 	map->guest_vsid = gvsid;
 	map->valid = true;
 
-	dprintk_slb("SLB: New mapping at %d: 0x%llx -> 0x%llx\n",
-		    sid_map_mask, gvsid, map->host_vsid);
+	trace_kvm_book3s_slb_map(sid_map_mask, gvsid, map->host_vsid);
 
 	return map;
 }
@@ -278,7 +266,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
 	to_svcpu(vcpu)->slb[slb_index].esid = slb_esid;
 	to_svcpu(vcpu)->slb[slb_index].vsid = slb_vsid;
 
-	dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid);
+	trace_kvm_book3s_slbmte(slb_vsid, slb_esid);
 
 	return 0;
 }
diff --git a/arch/powerpc/kvm/trace.h b/arch/powerpc/kvm/trace.h
index 23f757a..3aca1b0 100644
--- a/arch/powerpc/kvm/trace.h
+++ b/arch/powerpc/kvm/trace.h
@@ -262,6 +262,79 @@ TRACE_EVENT(kvm_book3s_mmu_flush,
 		  __entry->count, __entry->type, __entry->p1, __entry->p2)
 );
 
+TRACE_EVENT(kvm_book3s_slb_found,
+	TP_PROTO(unsigned long long gvsid, unsigned long long hvsid),
+	TP_ARGS(gvsid, hvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned long long,	gvsid		)
+		__field(	unsigned long long,	hvsid		)
+	),
+
+	TP_fast_assign(
+		__entry->gvsid		= gvsid;
+		__entry->hvsid		= hvsid;
+	),
+
+	TP_printk("%llx -> %llx", __entry->gvsid, __entry->hvsid)
+);
+
+TRACE_EVENT(kvm_book3s_slb_fail,
+	TP_PROTO(u16 sid_map_mask, unsigned long long gvsid),
+	TP_ARGS(sid_map_mask, gvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned short,		sid_map_mask	)
+		__field(	unsigned long long,	gvsid		)
+	),
+
+	TP_fast_assign(
+		__entry->sid_map_mask	= sid_map_mask;
+		__entry->gvsid		= gvsid;
+	),
+
+	TP_printk("%x/%x: %llx", __entry->sid_map_mask,
+		  SID_MAP_MASK - __entry->sid_map_mask, __entry->gvsid)
+);
+
+TRACE_EVENT(kvm_book3s_slb_map,
+	TP_PROTO(u16 sid_map_mask, unsigned long long gvsid,
+		 unsigned long long hvsid),
+	TP_ARGS(sid_map_mask, gvsid, hvsid),
+
+	TP_STRUCT__entry(
+		__field(	unsigned short,		sid_map_mask	)
+		__field(	unsigned long long,	guest_vsid	)
+		__field(	unsigned long long,	host_vsid	)
+	),
+
+	TP_fast_assign(
+		__entry->sid_map_mask	= sid_map_mask;
+		__entry->guest_vsid	= gvsid;
+		__entry->host_vsid	= hvsid;
+	),
+
+	TP_printk("%x: %llx -> %llx", __entry->sid_map_mask,
+		  __entry->guest_vsid, __entry->host_vsid)
+);
+
+TRACE_EVENT(kvm_book3s_slbmte,
+	TP_PROTO(u64 slb_vsid, u64 slb_esid),
+	TP_ARGS(slb_vsid, slb_esid),
+
+	TP_STRUCT__entry(
+		__field(	u64,	slb_vsid	)
+		__field(	u64,	slb_esid	)
+	),
+
+	TP_fast_assign(
+		__entry->slb_vsid	= slb_vsid;
+		__entry->slb_esid	= slb_esid;
+	),
+
+	TP_printk("%llx, %llx", __entry->slb_vsid, __entry->slb_esid)
+);
+
 #endif /* CONFIG_PPC_BOOK3S */
 
 #endif /* _TRACE_KVM_H */
-- 
1.6.0.2



* [PATCH 11/35] KVM: PPC: Revert "KVM: PPC: Use kernel hash function"
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (4 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 09/35] KVM: PPC: Make invalidation code more reliable Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 12/35] KVM: PPC: Remove unused define Alexander Graf
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

It turns out the in-kernel hash function is sub-optimal for our subtle
hash inputs where every bit is significant. So let's revert to the original
hash functions.

This reverts commit 05340ab4f9a6626f7a2e8f9fe5397c61d494f445.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_32_mmu_host.c |   10 ++++++++--
 arch/powerpc/kvm/book3s_64_mmu_host.c |   11 +++++++++--
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 343452c..57dddeb 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -19,7 +19,6 @@
  */
 
 #include <linux/kvm_host.h>
-#include <linux/hash.h>
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -77,7 +76,14 @@ void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
  * a hash, so we don't waste cycles on looping */
 static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
 {
-	return hash_64(gvsid, SID_MAP_BITS);
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
 }
 
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 321c931..e7c4d00 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -20,7 +20,6 @@
  */
 
 #include <linux/kvm_host.h>
-#include <linux/hash.h>
 
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
@@ -44,9 +43,17 @@ void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
  * a hash, so we don't waste cycles on looping */
 static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
 {
-	return hash_64(gvsid, SID_MAP_BITS);
+	return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
+		     ((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
 }
 
+
 static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
 {
 	struct kvmppc_sid_map *map;
-- 
1.6.0.2


* [PATCH 12/35] KVM: PPC: Remove unused define
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (5 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 11/35] KVM: PPC: Revert "KVM: PPC: Use kernel hash function" Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:31   ` [PATCH 16/35] KVM: PPC: Put segment registers in shared page Alexander Graf
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

The define VSID_ALL is unused. Let's remove it.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_64_mmu_host.c |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index e7c4d00..4040c8d 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -30,7 +30,6 @@
 #include "trace.h"
 
 #define PTE_SIZE 12
-#define VSID_ALL 0
 
 void kvmppc_mmu_invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
-- 
1.6.0.2


* [PATCH 13/35] KVM: PPC: Add feature bitmap for magic page
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (5 preceding siblings ...)
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 14/35] KVM: PPC: Move BAT handling code into spr handler Alexander Graf
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

We will soon add SR PV support to the shared page, so we need some
infrastructure that allows the guest to query which features KVM exports.

This patch adds a second return value to the magic page mapping hypercall that
indicates to the guest which features are available.
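
[Editor's note: a minimal sketch of the guest-side consumption of the feature
word -- the KVM_MAGIC_FEAT_SR test is hypothetical here and only becomes
meaningful with the SR patches later in this series.]

    /* kvm_use_magic_page() receives out[0] of the hypercall in "features"
     * and hands it to the instruction patcher: */
    u32 features;

    on_each_cpu(kvm_map_magic_page, &features, 1);

    /* Later patches can then gate individual patchings on it, e.g.: */
    if (features & KVM_MAGIC_FEAT_SR) {
    	/* safe to patch SR accesses to go through the magic page */
    }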

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_para.h |    2 ++
 arch/powerpc/kernel/kvm.c           |   21 +++++++++++++++------
 arch/powerpc/kvm/powerpc.c          |    5 ++++-
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 7438ab3..43c1b22 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -47,6 +47,8 @@ struct kvm_vcpu_arch_shared {
 
 #define KVM_FEATURE_MAGIC_PAGE	1
 
+#define KVM_MAGIC_FEAT_SR	(1 << 0)
+
 #ifdef __KERNEL__
 
 #ifdef CONFIG_KVM_GUEST
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index e936817..f48144f 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -267,12 +267,20 @@ static void kvm_patch_ins_wrteei(u32 *inst)
 
 static void kvm_map_magic_page(void *data)
 {
-	kvm_hypercall2(KVM_HC_PPC_MAP_MAGIC_PAGE,
-		       KVM_MAGIC_PAGE,  /* Physical Address */
-		       KVM_MAGIC_PAGE); /* Effective Address */
+	u32 *features = data;
+
+	ulong in[8];
+	ulong out[8];
+
+	in[0] = KVM_MAGIC_PAGE;
+	in[1] = KVM_MAGIC_PAGE;
+
+	kvm_hypercall(in, out, HC_VENDOR_KVM | KVM_HC_PPC_MAP_MAGIC_PAGE);
+
+	*features = out[0];
 }
 
-static void kvm_check_ins(u32 *inst)
+static void kvm_check_ins(u32 *inst, u32 features)
 {
 	u32 _inst = *inst;
 	u32 inst_no_rt = _inst & ~KVM_MASK_RT;
@@ -368,9 +376,10 @@ static void kvm_use_magic_page(void)
 	u32 *p;
 	u32 *start, *end;
 	u32 tmp;
+	u32 features;
 
 	/* Tell the host to map the magic page to -4096 on all CPUs */
-	on_each_cpu(kvm_map_magic_page, NULL, 1);
+	on_each_cpu(kvm_map_magic_page, &features, 1);
 
 	/* Quick self-test to see if the mapping works */
 	if (__get_user(tmp, (u32*)KVM_MAGIC_PAGE)) {
@@ -383,7 +392,7 @@ static void kvm_use_magic_page(void)
 	end = (void*)_etext;
 
 	for (p = start; p < end; p++)
-		kvm_check_ins(p);
+		kvm_check_ins(p, features);
 
 	printk(KERN_INFO "KVM: Live patching for a fast VM %s\n",
 			 kvm_patching_worked ? "worked" : "failed");
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6a53a3f..496d7a5 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -66,6 +66,8 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 		vcpu->arch.magic_page_pa = param1;
 		vcpu->arch.magic_page_ea = param2;
 
+		r2 = 0;
+
 		r = HC_EV_SUCCESS;
 		break;
 	}
@@ -76,13 +78,14 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 #endif
 
 		/* Second return value is in r4 */
-		kvmppc_set_gpr(vcpu, 4, r2);
 		break;
 	default:
 		r = HC_EV_UNIMPLEMENTED;
 		break;
 	}
 
+	kvmppc_set_gpr(vcpu, 4, r2);
+
 	return r;
 }
 
-- 
1.6.0.2



* [PATCH 14/35] KVM: PPC: Move BAT handling code into spr handler
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (6 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 13/35] KVM: PPC: Add feature bitmap for magic page Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 15/35] KVM: PPC: Interpret SR registers on demand Alexander Graf
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

The current approach duplicates the spr->bat lookup logic and makes it harder
to reuse the bat pointer it finds. So let's move everything down into the spr
handler.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_emulate.c |   48 ++++++++++++------------------------
 1 files changed, 16 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index f333cb4..4668465 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -264,7 +264,7 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
 	}
 }
 
-static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+static struct kvmppc_bat *kvmppc_find_bat(struct kvm_vcpu *vcpu, int sprn)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
 	struct kvmppc_bat *bat;
@@ -286,35 +286,7 @@ static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
 		BUG();
 	}
 
-	if (sprn % 2)
-		return bat->raw >> 32;
-	else
-		return bat->raw;
-}
-
-static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
-{
-	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_bat *bat;
-
-	switch (sprn) {
-	case SPRN_IBAT0U ... SPRN_IBAT3L:
-		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
-		break;
-	case SPRN_IBAT4U ... SPRN_IBAT7L:
-		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
-		break;
-	case SPRN_DBAT0U ... SPRN_DBAT3L:
-		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
-		break;
-	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
-		break;
-	default:
-		BUG();
-	}
-
-	kvmppc_set_bat(vcpu, bat, !(sprn % 2), val);
+	return bat;
 }
 
 int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
@@ -339,12 +311,16 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 	case SPRN_IBAT4U ... SPRN_IBAT7L:
 	case SPRN_DBAT0U ... SPRN_DBAT3L:
 	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_write_bat(vcpu, sprn, (u32)spr_val);
+	{
+		struct kvmppc_bat *bat = kvmppc_find_bat(vcpu, sprn);
+
+		kvmppc_set_bat(vcpu, bat, !(sprn % 2), (u32)spr_val);
 		/* BAT writes happen so rarely that we're ok to flush
 		 * everything here */
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
 		kvmppc_mmu_flush_segments(vcpu);
 		break;
+	}
 	case SPRN_HID0:
 		to_book3s(vcpu)->hid[0] = spr_val;
 		break;
@@ -434,8 +410,16 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	case SPRN_IBAT4U ... SPRN_IBAT7L:
 	case SPRN_DBAT0U ... SPRN_DBAT3L:
 	case SPRN_DBAT4U ... SPRN_DBAT7L:
-		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+	{
+		struct kvmppc_bat *bat = kvmppc_find_bat(vcpu, sprn);
+
+		if (sprn % 2)
+			kvmppc_set_gpr(vcpu, rt, bat->raw >> 32);
+		else
+			kvmppc_set_gpr(vcpu, rt, bat->raw);
+
 		break;
+	}
 	case SPRN_SDR1:
 		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
 		break;
-- 
1.6.0.2



* [PATCH 15/35] KVM: PPC: Interpret SR registers on demand
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (7 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 14/35] KVM: PPC: Move BAT handling code into spr handler Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 17/35] KVM: PPC: Add mtsrin PV code Alexander Graf
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

Right now we're examining the contents of Book3s_32's segment registers when
the register is written and putting the interpreted contents into a struct.

There are two reasons this is bad. For starters, the struct performs worse, as
it occupies more RAM. But the more important part is that if segment registers
are interpreted from their raw values on demand, we can put them in the shared
page, allowing guests to mess with them directly.

This patch makes the internal representation of SRs plain u32s.
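
[Editor's note: for reference, the segment register bit layout that the new
sr_*() helpers decode, reconstructed from the masks in the diff below.]

    /*
     * 32-bit segment register word:
     *   0x80000000  T   - direct-store segment; sr_valid() treats it as invalid
     *   0x40000000  Ks  - supervisor-state protection key
     *   0x20000000  Kp  - problem-state (user) protection key
     *   0x10000000  N   - no-execute
     *   0x0fffffff      - VSID, as masked by sr_vsid()
     */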

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |   11 +----
 arch/powerpc/kvm/book3s.c             |    4 +-
 arch/powerpc/kvm/book3s_32_mmu.c      |   79 ++++++++++++++++++---------------
 3 files changed, 46 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index f04f516..0884652 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -38,15 +38,6 @@ struct kvmppc_slb {
 	bool class	: 1;
 };
 
-struct kvmppc_sr {
-	u32 raw;
-	u32 vsid;
-	bool Ks		: 1;
-	bool Kp		: 1;
-	bool nx		: 1;
-	bool valid	: 1;
-};
-
 struct kvmppc_bat {
 	u64 raw;
 	u32 bepi;
@@ -79,7 +70,7 @@ struct kvmppc_vcpu_book3s {
 		u64 vsid;
 	} slb_shadow[64];
 	u8 slb_shadow_max;
-	struct kvmppc_sr sr[16];
+	u32 sr[16];
 	struct kvmppc_bat ibat[8];
 	struct kvmppc_bat dbat[8];
 	u64 hid[6];
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 2fb528f..34472af 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1162,8 +1162,8 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		for (i = 0; i < 16; i++) {
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i].raw;
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i].raw;
+			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
+			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
 		}
 		for (i = 0; i < 8; i++) {
 			sregs->u.s.ppc32.ibat[i] = vcpu3s->ibat[i].raw;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 5bf4bf8..d4ff76f 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -58,14 +58,39 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 #endif
 }
 
+static inline u32 sr_vsid(u32 sr_raw)
+{
+	return sr_raw & 0x0fffffff;
+}
+
+static inline bool sr_valid(u32 sr_raw)
+{
+	return (sr_raw & 0x80000000) ? false : true;
+}
+
+static inline bool sr_ks(u32 sr_raw)
+{
+	return (sr_raw & 0x40000000) ? true: false;
+}
+
+static inline bool sr_kp(u32 sr_raw)
+{
+	return (sr_raw & 0x20000000) ? true: false;
+}
+
+static inline bool sr_nx(u32 sr_raw)
+{
+	return (sr_raw & 0x10000000) ? true: false;
+}
+
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid);
 
-static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
+static u32 find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
 {
-	return &vcpu_book3s->sr[(eaddr >> 28) & 0xf];
+	return vcpu_book3s->sr[(eaddr >> 28) & 0xf];
 }
 
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
@@ -87,7 +112,7 @@ static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
 }
 
 static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3s,
-				      struct kvmppc_sr *sre, gva_t eaddr,
+				      u32 sre, gva_t eaddr,
 				      bool primary)
 {
 	u32 page, hash, pteg, htabmask;
@@ -96,7 +121,7 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 	page = (eaddr & 0x0FFFFFFF) >> 12;
 	htabmask = ((vcpu_book3s->sdr1 & 0x1FF) << 16) | 0xFFC0;
 
-	hash = ((sre->vsid ^ page) << 6);
+	hash = ((sr_vsid(sre) ^ page) << 6);
 	if (!primary)
 		hash = ~hash;
 	hash &= htabmask;
@@ -105,7 +130,7 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 
 	dprintk("MMU: pc=0x%lx eaddr=0x%lx sdr1=0x%llx pteg=0x%x vsid=0x%x\n",
 		kvmppc_get_pc(&vcpu_book3s->vcpu), eaddr, vcpu_book3s->sdr1, pteg,
-		sre->vsid);
+		sr_vsid(sre));
 
 	r = gfn_to_hva(vcpu_book3s->vcpu.kvm, pteg >> PAGE_SHIFT);
 	if (kvm_is_error_hva(r))
@@ -113,10 +138,9 @@ static hva_t kvmppc_mmu_book3s_32_get_pteg(struct kvmppc_vcpu_book3s *vcpu_book3
 	return r | (pteg & ~PAGE_MASK);
 }
 
-static u32 kvmppc_mmu_book3s_32_get_ptem(struct kvmppc_sr *sre, gva_t eaddr,
-				    bool primary)
+static u32 kvmppc_mmu_book3s_32_get_ptem(u32 sre, gva_t eaddr, bool primary)
 {
-	return ((eaddr & 0x0fffffff) >> 22) | (sre->vsid << 7) |
+	return ((eaddr & 0x0fffffff) >> 22) | (sr_vsid(sre) << 7) |
 	       (primary ? 0 : 0x40) | 0x80000000;
 }
 
@@ -180,7 +204,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 				     bool primary)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
-	struct kvmppc_sr *sre;
+	u32 sre;
 	hva_t ptegp;
 	u32 pteg[16];
 	u32 ptem = 0;
@@ -190,7 +214,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 	sre = find_sr(vcpu_book3s, eaddr);
 
 	dprintk_pte("SR 0x%lx: vsid=0x%x, raw=0x%x\n", eaddr >> 28,
-		    sre->vsid, sre->raw);
+		    sr_vsid(sre), sre);
 
 	pte->vpage = kvmppc_mmu_book3s_32_ea_to_vp(vcpu, eaddr, data);
 
@@ -214,8 +238,8 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 			pte->raddr = (pteg[i+1] & ~(0xFFFULL)) | (eaddr & 0xFFF);
 			pp = pteg[i+1] & 3;
 
-			if ((sre->Kp &&  (vcpu->arch.shared->msr & MSR_PR)) ||
-			    (sre->Ks && !(vcpu->arch.shared->msr & MSR_PR)))
+			if ((sr_kp(sre) &&  (vcpu->arch.shared->msr & MSR_PR)) ||
+			    (sr_ks(sre) && !(vcpu->arch.shared->msr & MSR_PR)))
 				pp |= 4;
 
 			pte->may_write = false;
@@ -311,30 +335,13 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 
 static u32 kvmppc_mmu_book3s_32_mfsrin(struct kvm_vcpu *vcpu, u32 srnum)
 {
-	return to_book3s(vcpu)->sr[srnum].raw;
+	return to_book3s(vcpu)->sr[srnum];
 }
 
 static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 					ulong value)
 {
-	struct kvmppc_sr *sre;
-
-	sre = &to_book3s(vcpu)->sr[srnum];
-
-	/* Flush any left-over shadows from the previous SR */
-
-	/* XXX Not necessary? */
-	/* kvmppc_mmu_pte_flush(vcpu, ((u64)sre->vsid) << 28, 0xf0000000ULL); */
-
-	/* And then put in the new SR */
-	sre->raw = value;
-	sre->vsid = (value & 0x0fffffff);
-	sre->valid = (value & 0x80000000) ? false : true;
-	sre->Ks = (value & 0x40000000) ? true : false;
-	sre->Kp = (value & 0x20000000) ? true : false;
-	sre->nx = (value & 0x10000000) ? true : false;
-
-	/* Map the new segment */
+	to_book3s(vcpu)->sr[srnum] = value;
 	kvmppc_mmu_map_segment(vcpu, srnum << SID_SHIFT);
 }
 
@@ -347,13 +354,13 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid)
 {
 	ulong ea = esid << SID_SHIFT;
-	struct kvmppc_sr *sr;
+	u32 sr;
 	u64 gvsid = esid;
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
 		sr = find_sr(to_book3s(vcpu), ea);
-		if (sr->valid)
-			gvsid = sr->vsid;
+		if (sr_valid(sr))
+			gvsid = sr_vsid(sr);
 	}
 
 	/* In case we only have one of MSR_IR or MSR_DR set, let's put
@@ -370,8 +377,8 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 		*vsid = VSID_REAL_DR | gvsid;
 		break;
 	case MSR_DR|MSR_IR:
-		if (sr->valid)
-			*vsid = sr->vsid;
+		if (sr_valid(sr))
+			*vsid = sr_vsid(sr);
 		else
 			*vsid = VSID_BAT | gvsid;
 		break;
-- 
1.6.0.2



* [PATCH 16/35] KVM: PPC: Put segment registers in shared page
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (6 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 12/35] KVM: PPC: Remove unused define Alexander Graf
@ 2010-08-31  2:31   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 19/35] KVM: PPC: Update int_pending also on dequeue Alexander Graf
                     ` (5 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

Now that mtsr no longer does any interpretation of its own, we can move the sr
contents over to the shared page, so a guest can directly read and write its
sr contents from guest context.
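
[Editor's note: a minimal sketch of what this enables on the guest side --
illustrative only; the magic_page pointer and ea variable are made up, the
sr[] field is the one added to struct kvm_vcpu_arch_shared below.]

    /* A paravirtualized guest can now look up a segment register without
     * trapping into the host: */
    struct kvm_vcpu_arch_shared *magic_page = (void *)KVM_MAGIC_PAGE;
    u32 sr = magic_page->sr[(ea >> 28) & 0xf];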

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 -
 arch/powerpc/include/asm/kvm_para.h   |    1 +
 arch/powerpc/kvm/book3s.c             |    7 +++----
 arch/powerpc/kvm/book3s_32_mmu.c      |   12 ++++++------
 arch/powerpc/kvm/powerpc.c            |    2 +-
 5 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 0884652..be8aac2 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -70,7 +70,6 @@ struct kvmppc_vcpu_book3s {
 		u64 vsid;
 	} slb_shadow[64];
 	u8 slb_shadow_max;
-	u32 sr[16];
 	struct kvmppc_bat ibat[8];
 	struct kvmppc_bat dbat[8];
 	u64 hid[6];
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 43c1b22..d79fd09 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -38,6 +38,7 @@ struct kvm_vcpu_arch_shared {
 	__u64 msr;
 	__u32 dsisr;
 	__u32 int_pending;	/* Tells the guest if we have an interrupt */
+	__u32 sr[16];
 };
 
 #define KVM_SC_MAGIC_R0		0x4b564d21 /* "KVM!" */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 34472af..02a9cb1 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -1161,10 +1161,9 @@ int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
 			sregs->u.s.ppc64.slb[i].slbv = vcpu3s->slb[i].origv;
 		}
 	} else {
-		for (i = 0; i < 16; i++) {
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
-			sregs->u.s.ppc32.sr[i] = vcpu3s->sr[i];
-		}
+		for (i = 0; i < 16; i++)
+			sregs->u.s.ppc32.sr[i] = vcpu->arch.shared->sr[i];
+
 		for (i = 0; i < 8; i++) {
 			sregs->u.s.ppc32.ibat[i] = vcpu3s->ibat[i].raw;
 			sregs->u.s.ppc32.dbat[i] = vcpu3s->dbat[i].raw;
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index d4ff76f..c8cefdd 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -88,9 +88,9 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 					     u64 *vsid);
 
-static u32 find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
+static u32 find_sr(struct kvm_vcpu *vcpu, gva_t eaddr)
 {
-	return vcpu_book3s->sr[(eaddr >> 28) & 0xf];
+	return vcpu->arch.shared->sr[(eaddr >> 28) & 0xf];
 }
 
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
@@ -211,7 +211,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
 	int i;
 	int found = 0;
 
-	sre = find_sr(vcpu_book3s, eaddr);
+	sre = find_sr(vcpu, eaddr);
 
 	dprintk_pte("SR 0x%lx: vsid=0x%x, raw=0x%x\n", eaddr >> 28,
 		    sr_vsid(sre), sre);
@@ -335,13 +335,13 @@ static int kvmppc_mmu_book3s_32_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
 
 static u32 kvmppc_mmu_book3s_32_mfsrin(struct kvm_vcpu *vcpu, u32 srnum)
 {
-	return to_book3s(vcpu)->sr[srnum];
+	return vcpu->arch.shared->sr[srnum];
 }
 
 static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 					ulong value)
 {
-	to_book3s(vcpu)->sr[srnum] = value;
+	vcpu->arch.shared->sr[srnum] = value;
 	kvmppc_mmu_map_segment(vcpu, srnum << SID_SHIFT);
 }
 
@@ -358,7 +358,7 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
 	u64 gvsid = esid;
 
 	if (vcpu->arch.shared->msr & (MSR_DR|MSR_IR)) {
-		sr = find_sr(to_book3s(vcpu), ea);
+		sr = find_sr(vcpu, ea);
 		if (sr_valid(sr))
 			gvsid = sr_vsid(sr);
 	}
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 496d7a5..028891c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -66,7 +66,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
 		vcpu->arch.magic_page_pa = param1;
 		vcpu->arch.magic_page_ea = param2;
 
-		r2 = 0;
+		r2 = KVM_MAGIC_FEAT_SR;
 
 		r = HC_EV_SUCCESS;
 		break;
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 17/35] KVM: PPC: Add mtsrin PV code
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (8 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 15/35] KVM: PPC: Interpret SR registers on demand Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:31 ` [PATCH 18/35] KVM: PPC: Make PV mtmsr work with r30 and r31 Alexander Graf
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

This is the guest side of the mtsr acceleration. Using this, a guest can now
execute mtsrin with almost no overhead, as long as it ensures that it only
does so while (MSR_IR|MSR_DR) == 0. Linux does that, so we're good.
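
In C terms, the patched hook boils down to roughly the following (a hedged
sketch only; the real thing is the kvm_emulate_mtsrin assembly below, and
"shared", "rb" and "value" are illustrative names):

	if (!(shared->msr & (MSR_IR | MSR_DR))) {
		/* Translation off: just record the new SR value in the
		 * magic page; the host maps the segment lazily later. */
		shared->sr[(rb >> 28) & 0xf] = value;
	} else {
		/* Translation on: fall back to the original, trapping mtsrin. */
		asm volatile("mtsrin %0,%1" : : "r"(value), "r"(rb) : "memory");
	}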

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 Documentation/kvm/ppc-pv.txt      |    3 ++
 arch/powerpc/kernel/asm-offsets.c |    1 +
 arch/powerpc/kernel/kvm.c         |   60 +++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/kvm_emul.S    |   50 ++++++++++++++++++++++++++++++
 4 files changed, 114 insertions(+), 0 deletions(-)

diff --git a/Documentation/kvm/ppc-pv.txt b/Documentation/kvm/ppc-pv.txt
index 41ee16d..922cf95 100644
--- a/Documentation/kvm/ppc-pv.txt
+++ b/Documentation/kvm/ppc-pv.txt
@@ -160,6 +160,9 @@ mtmsr	rX		b	<special mtmsr section>
 
 mtmsrd	rX, 1		b	<special mtmsrd section>
 
+[Book3S only]
+mtsrin	rX, rY		b	<special mtsrin section>
+
 [BookE only]
 wrteei	[0|1]		b	<special wrteei section>
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 37486ca..293d2a8 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -478,6 +478,7 @@ int main(void)
 	DEFINE(KVM_MAGIC_MSR, offsetof(struct kvm_vcpu_arch_shared, msr));
 	DEFINE(KVM_MAGIC_CRITICAL, offsetof(struct kvm_vcpu_arch_shared,
 					    critical));
+	DEFINE(KVM_MAGIC_SR, offsetof(struct kvm_vcpu_arch_shared, sr));
 #endif
 
 #ifdef CONFIG_44x
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index f48144f..43ec78a 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -43,6 +43,7 @@
 #define KVM_INST_B_MAX		0x01ffffff
 
 #define KVM_MASK_RT		0x03e00000
+#define KVM_MASK_RB		0x0000f800
 #define KVM_INST_MFMSR		0x7c0000a6
 #define KVM_INST_MFSPR_SPRG0	0x7c1042a6
 #define KVM_INST_MFSPR_SPRG1	0x7c1142a6
@@ -70,6 +71,8 @@
 #define KVM_INST_WRTEEI_0	0x7c000146
 #define KVM_INST_WRTEEI_1	0x7c008146
 
+#define KVM_INST_MTSRIN		0x7c0001e4
+
 static bool kvm_patching_worked = true;
 static char kvm_tmp[1024 * 1024];
 static int kvm_tmp_index;
@@ -265,6 +268,51 @@ static void kvm_patch_ins_wrteei(u32 *inst)
 
 #endif
 
+#ifdef CONFIG_PPC_BOOK3S_32
+
+extern u32 kvm_emulate_mtsrin_branch_offs;
+extern u32 kvm_emulate_mtsrin_reg1_offs;
+extern u32 kvm_emulate_mtsrin_reg2_offs;
+extern u32 kvm_emulate_mtsrin_orig_ins_offs;
+extern u32 kvm_emulate_mtsrin_len;
+extern u32 kvm_emulate_mtsrin[];
+
+static void kvm_patch_ins_mtsrin(u32 *inst, u32 rt, u32 rb)
+{
+	u32 *p;
+	int distance_start;
+	int distance_end;
+	ulong next_inst;
+
+	p = kvm_alloc(kvm_emulate_mtsrin_len * 4);
+	if (!p)
+		return;
+
+	/* Find out where we are and put everything there */
+	distance_start = (ulong)p - (ulong)inst;
+	next_inst = ((ulong)inst + 4);
+	distance_end = next_inst - (ulong)&p[kvm_emulate_mtsrin_branch_offs];
+
+	/* Make sure we only write valid b instructions */
+	if (distance_start > KVM_INST_B_MAX) {
+		kvm_patching_worked = false;
+		return;
+	}
+
+	/* Modify the chunk to fit the invocation */
+	memcpy(p, kvm_emulate_mtsrin, kvm_emulate_mtsrin_len * 4);
+	p[kvm_emulate_mtsrin_branch_offs] |= distance_end & KVM_INST_B_MASK;
+	p[kvm_emulate_mtsrin_reg1_offs] |= (rb << 10);
+	p[kvm_emulate_mtsrin_reg2_offs] |= rt;
+	p[kvm_emulate_mtsrin_orig_ins_offs] = *inst;
+	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtsrin_len * 4);
+
+	/* Patch the invocation */
+	kvm_patch_ins_b(inst, distance_start);
+}
+
+#endif
+
 static void kvm_map_magic_page(void *data)
 {
 	u32 *features = data;
@@ -361,6 +409,18 @@ static void kvm_check_ins(u32 *inst, u32 features)
 		break;
 	}
 
+	switch (inst_no_rt & ~KVM_MASK_RB) {
+#ifdef CONFIG_PPC_BOOK3S_32
+	case KVM_INST_MTSRIN:
+		if (features & KVM_MAGIC_FEAT_SR) {
+			u32 inst_rb = _inst & KVM_MASK_RB;
+			kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb);
+		}
+		break;
+#endif
+	}
+
 	switch (_inst) {
 #ifdef CONFIG_BOOKE
 	case KVM_INST_WRTEEI_0:
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index 3199f65..a6e97e7 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -245,3 +245,53 @@ kvm_emulate_wrteei_ee_offs:
 .global kvm_emulate_wrteei_len
 kvm_emulate_wrteei_len:
 	.long (kvm_emulate_wrteei_end - kvm_emulate_wrteei) / 4
+
+
+.global kvm_emulate_mtsrin
+kvm_emulate_mtsrin:
+
+	SCRATCH_SAVE
+
+	LL64(r31, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
+	andi.	r31, r31, MSR_DR | MSR_IR
+	beq	kvm_emulate_mtsrin_reg1
+
+	SCRATCH_RESTORE
+
+kvm_emulate_mtsrin_orig_ins:
+	nop
+	b	kvm_emulate_mtsrin_branch
+
+kvm_emulate_mtsrin_reg1:
+	/* rX >> 26 */
+	rlwinm  r30,r0,6,26,29
+
+kvm_emulate_mtsrin_reg2:
+	stw	r0, (KVM_MAGIC_PAGE + KVM_MAGIC_SR)(r30)
+
+	SCRATCH_RESTORE
+
+	/* Go back to caller */
+kvm_emulate_mtsrin_branch:
+	b	.
+kvm_emulate_mtsrin_end:
+
+.global kvm_emulate_mtsrin_branch_offs
+kvm_emulate_mtsrin_branch_offs:
+	.long (kvm_emulate_mtsrin_branch - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_reg1_offs
+kvm_emulate_mtsrin_reg1_offs:
+	.long (kvm_emulate_mtsrin_reg1 - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_reg2_offs
+kvm_emulate_mtsrin_reg2_offs:
+	.long (kvm_emulate_mtsrin_reg2 - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_orig_ins_offs
+kvm_emulate_mtsrin_orig_ins_offs:
+	.long (kvm_emulate_mtsrin_orig_ins - kvm_emulate_mtsrin) / 4
+
+.global kvm_emulate_mtsrin_len
+kvm_emulate_mtsrin_len:
+	.long (kvm_emulate_mtsrin_end - kvm_emulate_mtsrin) / 4
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 18/35] KVM: PPC: Make PV mtmsr work with r30 and r31
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (9 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 17/35] KVM: PPC: Add mtsrin PV code Alexander Graf
@ 2010-08-31  2:31 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 20/35] KVM: PPC: Make PV mtmsrd L=1 " Alexander Graf
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:31 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

So far we've been restricting ourselves to r0-r29 as the registers an mtmsr
instruction may use. This was bad, as there are code paths in Linux that
actually use r30.

So let's instead handle all registers gracefully and get rid of that stupid
limitation.
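
As a hedged illustration, the "reg" slots of the copied hook end up looking
like this after patching (lwz becomes ld on 64-bit hosts; scratch1 is the
magic page slot where SCRATCH_SAVE presumably stashed the original r31):

	mtmsr r5   ->   ori  r30, r5, 0        (register still intact)
	mtmsr r31  ->   lwz  r30, scratch1(0)  (reload the saved value)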

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/kvm.c      |   39 ++++++++++++++++++++++++++++++++-------
 arch/powerpc/kernel/kvm_emul.S |   17 ++++++++---------
 2 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 43ec78a..517967d 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -43,6 +43,7 @@
 #define KVM_INST_B_MAX		0x01ffffff
 
 #define KVM_MASK_RT		0x03e00000
+#define KVM_RT_30		0x03c00000
 #define KVM_MASK_RB		0x0000f800
 #define KVM_INST_MFMSR		0x7c0000a6
 #define KVM_INST_MFSPR_SPRG0	0x7c1042a6
@@ -83,6 +84,15 @@ static inline void kvm_patch_ins(u32 *inst, u32 new_inst)
 	flush_icache_range((ulong)inst, (ulong)inst + 4);
 }
 
+static void kvm_patch_ins_ll(u32 *inst, long addr, u32 rt)
+{
+#ifdef CONFIG_64BIT
+	kvm_patch_ins(inst, KVM_INST_LD | rt | (addr & 0x0000fffc));
+#else
+	kvm_patch_ins(inst, KVM_INST_LWZ | rt | (addr & 0x0000fffc));
+#endif
+}
+
 static void kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
 {
 #ifdef CONFIG_64BIT
@@ -187,7 +197,6 @@ static void kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
 extern u32 kvm_emulate_mtmsr_branch_offs;
 extern u32 kvm_emulate_mtmsr_reg1_offs;
 extern u32 kvm_emulate_mtmsr_reg2_offs;
-extern u32 kvm_emulate_mtmsr_reg3_offs;
 extern u32 kvm_emulate_mtmsr_orig_ins_offs;
 extern u32 kvm_emulate_mtmsr_len;
 extern u32 kvm_emulate_mtmsr[];
@@ -217,9 +226,27 @@ static void kvm_patch_ins_mtmsr(u32 *inst, u32 rt)
 	/* Modify the chunk to fit the invocation */
 	memcpy(p, kvm_emulate_mtmsr, kvm_emulate_mtmsr_len * 4);
 	p[kvm_emulate_mtmsr_branch_offs] |= distance_end & KVM_INST_B_MASK;
-	p[kvm_emulate_mtmsr_reg1_offs] |= rt;
-	p[kvm_emulate_mtmsr_reg2_offs] |= rt;
-	p[kvm_emulate_mtmsr_reg3_offs] |= rt;
+
+	/* Make clobbered registers work too */
+	switch (get_rt(rt)) {
+	case 30:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg1_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg2_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		break;
+	case 31:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg1_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsr_reg2_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		break;
+	default:
+		p[kvm_emulate_mtmsr_reg1_offs] |= rt;
+		p[kvm_emulate_mtmsr_reg2_offs] |= rt;
+		break;
+	}
+
 	p[kvm_emulate_mtmsr_orig_ins_offs] = *inst;
 	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtmsr_len * 4);
 
@@ -403,9 +430,7 @@ static void kvm_check_ins(u32 *inst, u32 features)
 		break;
 	case KVM_INST_MTMSR:
 	case KVM_INST_MTMSRD_L0:
-		/* We use r30 and r31 during the hook */
-		if (get_rt(inst_rt) < 30)
-			kvm_patch_ins_mtmsr(inst, inst_rt);
+		kvm_patch_ins_mtmsr(inst, inst_rt);
 		break;
 	}
 
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index a6e97e7..6530532 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -135,7 +135,8 @@ kvm_emulate_mtmsr:
 
 	/* Find the changed bits between old and new MSR */
 kvm_emulate_mtmsr_reg1:
-	xor	r31, r0, r31
+	ori	r30, r0, 0
+	xor	r31, r30, r31
 
 	/* Check if we need to really do mtmsr */
 	LOAD_REG_IMMEDIATE(r30, MSR_CRITICAL_BITS)
@@ -156,14 +157,17 @@ kvm_emulate_mtmsr_orig_ins:
 
 maybe_stay_in_guest:
 
+	/* Get the target register in r30 */
+kvm_emulate_mtmsr_reg2:
+	ori	r30, r0, 0
+
 	/* Check if we have to fetch an interrupt */
 	lwz	r31, (KVM_MAGIC_PAGE + KVM_MAGIC_INT)(0)
 	cmpwi	r31, 0
 	beq+	no_mtmsr
 
 	/* Check if we may trigger an interrupt */
-kvm_emulate_mtmsr_reg2:
-	andi.	r31, r0, MSR_EE
+	andi.	r31, r30, MSR_EE
 	beq	no_mtmsr
 
 	b	do_mtmsr
@@ -171,8 +175,7 @@ kvm_emulate_mtmsr_reg2:
 no_mtmsr:
 
 	/* Put MSR into magic page because we don't call mtmsr */
-kvm_emulate_mtmsr_reg3:
-	STL64(r0, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
+	STL64(r30, KVM_MAGIC_PAGE + KVM_MAGIC_MSR, 0)
 
 	SCRATCH_RESTORE
 
@@ -193,10 +196,6 @@ kvm_emulate_mtmsr_reg1_offs:
 kvm_emulate_mtmsr_reg2_offs:
 	.long (kvm_emulate_mtmsr_reg2 - kvm_emulate_mtmsr) / 4
 
-.global kvm_emulate_mtmsr_reg3_offs
-kvm_emulate_mtmsr_reg3_offs:
-	.long (kvm_emulate_mtmsr_reg3 - kvm_emulate_mtmsr) / 4
-
 .global kvm_emulate_mtmsr_orig_ins_offs
 kvm_emulate_mtmsr_orig_ins_offs:
 	.long (kvm_emulate_mtmsr_orig_ins - kvm_emulate_mtmsr) / 4
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 19/35] KVM: PPC: Update int_pending also on dequeue
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (7 preceding siblings ...)
  2010-08-31  2:31   ` [PATCH 16/35] KVM: PPC: Put segment registers in shared page Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 21/35] KVM: PPC: Force enable nap on KVM Alexander Graf
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

When a decrementer interrupt is pending, dequeuing happens manually through
an mtdec instruction. That path simply calls dequeue on the interrupt, so the
int_pending hint doesn't get updated.

This patch updates the int_pending hint on dequeue as well, thus correctly
allowing guests to stay in guest context more often.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 02a9cb1..7adea63 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -201,6 +201,9 @@ static void kvmppc_book3s_dequeue_irqprio(struct kvm_vcpu *vcpu,
 {
 	clear_bit(kvmppc_book3s_vec2irqprio(vec),
 		  &vcpu->arch.pending_exceptions);
+
+	if (!vcpu->arch.pending_exceptions)
+		vcpu->arch.shared->int_pending = 0;
 }
 
 void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec)
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 20/35] KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (10 preceding siblings ...)
  2010-08-31  2:31 ` [PATCH 18/35] KVM: PPC: Make PV mtmsr work with r30 and r31 Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 22/35] KVM: PPC: Implement correct SID mapping on Book3s_32 Alexander Graf
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

We had an arbitrary limitation in mtmsrd L=1 that kept us from using r30 and
r31 as input registers. Let's get rid of that and get more potential speedups!

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/kvm.c      |   21 +++++++++++++++++----
 arch/powerpc/kernel/kvm_emul.S |    8 +++++++-
 2 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 517967d..517da39 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -159,6 +159,7 @@ static u32 *kvm_alloc(int len)
 
 extern u32 kvm_emulate_mtmsrd_branch_offs;
 extern u32 kvm_emulate_mtmsrd_reg_offs;
+extern u32 kvm_emulate_mtmsrd_orig_ins_offs;
 extern u32 kvm_emulate_mtmsrd_len;
 extern u32 kvm_emulate_mtmsrd[];
 
@@ -187,7 +188,21 @@ static void kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
 	/* Modify the chunk to fit the invocation */
 	memcpy(p, kvm_emulate_mtmsrd, kvm_emulate_mtmsrd_len * 4);
 	p[kvm_emulate_mtmsrd_branch_offs] |= distance_end & KVM_INST_B_MASK;
-	p[kvm_emulate_mtmsrd_reg_offs] |= rt;
+	switch (get_rt(rt)) {
+	case 30:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsrd_reg_offs],
+				 magic_var(scratch2), KVM_RT_30);
+		break;
+	case 31:
+		kvm_patch_ins_ll(&p[kvm_emulate_mtmsrd_reg_offs],
+				 magic_var(scratch1), KVM_RT_30);
+		break;
+	default:
+		p[kvm_emulate_mtmsrd_reg_offs] |= rt;
+		break;
+	}
+
+	p[kvm_emulate_mtmsrd_orig_ins_offs] = *inst;
 	flush_icache_range((ulong)p, (ulong)p + kvm_emulate_mtmsrd_len * 4);
 
 	/* Patch the invocation */
@@ -424,9 +439,7 @@ static void kvm_check_ins(u32 *inst, u32 features)
 
 	/* Rewrites */
 	case KVM_INST_MTMSRD_L1:
-		/* We use r30 and r31 during the hook */
-		if (get_rt(inst_rt) < 30)
-			kvm_patch_ins_mtmsrd(inst, inst_rt);
+		kvm_patch_ins_mtmsrd(inst, inst_rt);
 		break;
 	case KVM_INST_MTMSR:
 	case KVM_INST_MTMSRD_L0:
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index 6530532..f2b1b25 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -78,7 +78,8 @@ kvm_emulate_mtmsrd:
 
 	/* OR the register's (MSR_EE|MSR_RI) on MSR */
 kvm_emulate_mtmsrd_reg:
-	andi.	r30, r0, (MSR_EE|MSR_RI)
+	ori	r30, r0, 0
+	andi.	r30, r30, (MSR_EE|MSR_RI)
 	or	r31, r31, r30
 
 	/* Put MSR back into magic page */
@@ -96,6 +97,7 @@ kvm_emulate_mtmsrd_reg:
 	SCRATCH_RESTORE
 
 	/* Nag hypervisor */
+kvm_emulate_mtmsrd_orig_ins:
 	tlbsync
 
 	b	kvm_emulate_mtmsrd_branch
@@ -117,6 +119,10 @@ kvm_emulate_mtmsrd_branch_offs:
 kvm_emulate_mtmsrd_reg_offs:
 	.long (kvm_emulate_mtmsrd_reg - kvm_emulate_mtmsrd) / 4
 
+.global kvm_emulate_mtmsrd_orig_ins_offs
+kvm_emulate_mtmsrd_orig_ins_offs:
+	.long (kvm_emulate_mtmsrd_orig_ins - kvm_emulate_mtmsrd) / 4
+
 .global kvm_emulate_mtmsrd_len
 kvm_emulate_mtmsrd_len:
 	.long (kvm_emulate_mtmsrd_end - kvm_emulate_mtmsrd) / 4
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 21/35] KVM: PPC: Force enable nap on KVM
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (8 preceding siblings ...)
  2010-08-31  2:32   ` [PATCH 19/35] KVM: PPC: Update int_pending also on dequeue Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 24/35] KVM: PPC: initialize IVORs in addition to IVPR Alexander Graf
                     ` (3 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

There are some heuristics in the PPC power management code that try to find
out if the particular hardware we're running on supports proper power management
or just hangs the machine when going into nap mode.

Since we know that KVM is safe with nap, let's force enable it in the PV code
once we're certain that we are on a KVM VM.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kernel/kvm.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 517da39..95aed6b 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -583,6 +583,9 @@ static int __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
 		kvm_use_magic_page();
 
+	/* Enable napping */
+	powersave_nap = 1;
+
 free_tmp:
 	kvm_free_tmp();
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 22/35] KVM: PPC: Implement correct SID mapping on Book3s_32
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (11 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 20/35] KVM: PPC: Make PV mtmsrd L=1 " Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 23/35] KVM: PPC: Don't put MSR_POW in MSR Alexander Graf
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

Up until now we were doing segment mappings wrong on Book3s_32. For Book3s_64
we were using a trick where we know that a single mmu_context gives us 16 bits
of context ids.

The mm system on Book3s_32 instead uses a clever algorithm to distribute VSIDs
across the available range, so a context id really only gives us 16 available
VSIDs.

To keep at least a few guest processes in the SID shadow, let's map a number of
contexts that we can use as a VSID pool. This makes the code actually correct
and shouldn't hurt performance too much.
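
For a rough feel for the numbers (hedged, just multiplying the constants this
patch introduces): one host MMU context yields 16 VSIDs, one per segment
register, so the shadow pool holds

	VSID_POOL_SIZE = SID_CONTEXTS * 16 = 128 * 16 = 2048

VSIDs to hand out before we have to flush and recycle, instead of the 16 a
single context would give us.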

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |   15 +++++++-
 arch/powerpc/kvm/book3s_32_mmu_host.c |   57 ++++++++++++++++++---------------
 arch/powerpc/kvm/book3s_64_mmu_host.c |    8 ++--
 3 files changed, 48 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index be8aac2..d62e703 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -60,6 +60,13 @@ struct kvmppc_sid_map {
 #define SID_MAP_NUM     (1 << SID_MAP_BITS)
 #define SID_MAP_MASK    (SID_MAP_NUM - 1)
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#define SID_CONTEXTS	1
+#else
+#define SID_CONTEXTS	128
+#define VSID_POOL_SIZE	(SID_CONTEXTS * 16)
+#endif
+
 struct kvmppc_vcpu_book3s {
 	struct kvm_vcpu vcpu;
 	struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
@@ -78,10 +85,14 @@ struct kvmppc_vcpu_book3s {
 	u64 sdr1;
 	u64 hior;
 	u64 msr_mask;
-	u64 vsid_first;
 	u64 vsid_next;
+#ifdef CONFIG_PPC_BOOK3S_32
+	u32 vsid_pool[VSID_POOL_SIZE];
+#else
+	u64 vsid_first;
 	u64 vsid_max;
-	int context_id;
+#endif
+	int context_id[SID_CONTEXTS];
 	ulong prog_flags; /* flags to inject when giving a 700 trap */
 };
 
diff --git a/arch/powerpc/kvm/book3s_32_mmu_host.c b/arch/powerpc/kvm/book3s_32_mmu_host.c
index 57dddeb..9fecbfb 100644
--- a/arch/powerpc/kvm/book3s_32_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_32_mmu_host.c
@@ -275,18 +275,15 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
 	backwards_map = !backwards_map;
 
 	/* Uh-oh ... out of mappings. Let's flush! */
-	if (vcpu_book3s->vsid_next >= vcpu_book3s->vsid_max) {
-		vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
+	if (vcpu_book3s->vsid_next >= VSID_POOL_SIZE) {
+		vcpu_book3s->vsid_next = 0;
 		memset(vcpu_book3s->sid_map, 0,
 		       sizeof(struct kvmppc_sid_map) * SID_MAP_NUM);
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
 		kvmppc_mmu_flush_segments(vcpu);
 	}
-	map->host_vsid = vcpu_book3s->vsid_next;
-
-	/* Would have to be 111 to be completely aligned with the rest of
-	   Linux, but that is just way too little space! */
-	vcpu_book3s->vsid_next+=1;
+	map->host_vsid = vcpu_book3s->vsid_pool[vcpu_book3s->vsid_next];
+	vcpu_book3s->vsid_next++;
 
 	map->guest_vsid = gvsid;
 	map->valid = true;
@@ -333,40 +330,38 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 {
+	int i;
+
 	kvmppc_mmu_hpte_destroy(vcpu);
 	preempt_disable();
-	__destroy_context(to_book3s(vcpu)->context_id);
+	for (i = 0; i < SID_CONTEXTS; i++)
+		__destroy_context(to_book3s(vcpu)->context_id[i]);
 	preempt_enable();
 }
 
 /* From mm/mmu_context_hash32.c */
-#define CTX_TO_VSID(ctx) (((ctx) * (897 * 16)) & 0xffffff)
+#define CTX_TO_VSID(c, id)	((((c) * (897 * 16)) + (id * 0x111)) & 0xffffff)
 
 int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 {
 	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
 	int err;
 	ulong sdr1;
+	int i;
+	int j;
 
-	err = __init_new_context();
-	if (err < 0)
-		return -1;
-	vcpu3s->context_id = err;
-
-	vcpu3s->vsid_max = CTX_TO_VSID(vcpu3s->context_id + 1) - 1;
-	vcpu3s->vsid_first = CTX_TO_VSID(vcpu3s->context_id);
-
-#if 0 /* XXX still doesn't guarantee uniqueness */
-	/* We could collide with the Linux vsid space because the vsid
-	 * wraps around at 24 bits. We're safe if we do our own space
-	 * though, so let's always set the highest bit. */
+	for (i = 0; i < SID_CONTEXTS; i++) {
+		err = __init_new_context();
+		if (err < 0)
+			goto init_fail;
+		vcpu3s->context_id[i] = err;
 
-	vcpu3s->vsid_max |= 0x00800000;
-	vcpu3s->vsid_first |= 0x00800000;
-#endif
-	BUG_ON(vcpu3s->vsid_max < vcpu3s->vsid_first);
+		/* Remember context id for this combination */
+		for (j = 0; j < 16; j++)
+			vcpu3s->vsid_pool[(i * 16) + j] = CTX_TO_VSID(err, j);
+	}
 
-	vcpu3s->vsid_next = vcpu3s->vsid_first;
+	vcpu3s->vsid_next = 0;
 
 	/* Remember where the HTAB is */
 	asm ( "mfsdr1 %0" : "=r"(sdr1) );
@@ -376,4 +371,14 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 	kvmppc_mmu_hpte_init(vcpu);
 
 	return 0;
+
+init_fail:
+	for (j = 0; j < i; j++) {
+		if (!vcpu3s->context_id[j])
+			continue;
+
+		__destroy_context(to_book3s(vcpu)->context_id[j]);
+	}
+
+	return -1;
 }
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index 4040c8d..fa2f084 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -286,7 +286,7 @@ void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvmppc_mmu_hpte_destroy(vcpu);
-	__destroy_context(to_book3s(vcpu)->context_id);
+	__destroy_context(to_book3s(vcpu)->context_id[0]);
 }
 
 int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
@@ -297,10 +297,10 @@ int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
 	err = __init_new_context();
 	if (err < 0)
 		return -1;
-	vcpu3s->context_id = err;
+	vcpu3s->context_id[0] = err;
 
-	vcpu3s->vsid_max = ((vcpu3s->context_id + 1) << USER_ESID_BITS) - 1;
-	vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
+	vcpu3s->vsid_max = ((vcpu3s->context_id[0] + 1) << USER_ESID_BITS) - 1;
+	vcpu3s->vsid_first = vcpu3s->context_id[0] << USER_ESID_BITS;
 	vcpu3s->vsid_next = vcpu3s->vsid_first;
 
 	kvmppc_mmu_hpte_init(vcpu);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 23/35] KVM: PPC: Don't put MSR_POW in MSR
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (12 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 22/35] KVM: PPC: Implement correct SID mapping on Book3s_32 Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 25/35] KVM: PPC: fix compilation of "dump tlbs" debug function Alexander Graf
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

On Book3S an mtmsr with the MSR_POW bit set indicates that the OS is
idle and only needs to be woken up on the next interrupt.

Unfortunately, we let that bit slip into the stored MSR value, which is
not what the real CPU does, so we ended up executing code like this:

	r = mfmsr();
	/* r contains MSR_POW */
	mtmsr(r | MSR_EE);

This obviously breaks, as we're going into idle mode in code sections that
don't expect to be idling.

This patch masks MSR_POW out of the stored MSR value on wakeup, making
guests happy again.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    6 +++++-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 7adea63..5833df7 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -134,10 +134,14 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 	vcpu->arch.shared->msr = msr;
 	kvmppc_recalc_shadow_msr(vcpu);
 
-	if (msr & (MSR_WE|MSR_POW)) {
+	if (msr & MSR_POW) {
 		if (!vcpu->arch.pending_exceptions) {
 			kvm_vcpu_block(vcpu);
 			vcpu->stat.halt_wakeup++;
+
+			/* Unset POW bit after we woke up */
+			msr &= ~MSR_POW;
+			vcpu->arch.shared->msr = msr;
 		}
 	}
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 24/35] KVM: PPC: initialize IVORs in addition to IVPR
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (9 preceding siblings ...)
  2010-08-31  2:32   ` [PATCH 21/35] KVM: PPC: Force enable nap on KVM Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 29/35] KVM: PPC: Fix CONFIG_KVM_GUEST && !CONFIG_KVM case Alexander Graf
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

From: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>

Developers can now tell at a glance the exact type of a premature interrupt,
instead of just knowing that some premature interrupt occurred.

Signed-off-by: Hollis Blanchard <hollis_blanchard-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/booke.c |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index c604277..835f6d0 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -497,15 +497,19 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 /* Initial guest state: 16MB mapping 0 -> 0, PC = 0, MSR = 0, R1 = 16MB */
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
+	int i;
+
 	vcpu->arch.pc = 0;
 	vcpu->arch.shared->msr = 0;
 	kvmppc_set_gpr(vcpu, 1, (16<<20) - 8); /* -8 for the callee-save LR slot */
 
 	vcpu->arch.shadow_pid = 1;
 
-	/* Eye-catching number so we know if the guest takes an interrupt
-	 * before it's programmed its own IVPR. */
+	/* Eye-catching numbers so we know if the guest takes an interrupt
+	 * before it's programmed its own IVPR/IVORs. */
 	vcpu->arch.ivpr = 0x55550000;
+	for (i = 0; i < BOOKE_IRQPRIO_MAX; i++)
+		vcpu->arch.ivor[i] = 0x7700 | i * 4;
 
 	kvmppc_init_timing_stats(vcpu);
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 25/35] KVM: PPC: fix compilation of "dump tlbs" debug function
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (13 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 23/35] KVM: PPC: Don't put MSR_POW in MSR Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 26/35] KVM: PPC: allow ppc440gp to pass the compatibility check Alexander Graf
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

From: Hollis Blanchard <hollis_blanchard@mentor.com>

Missing local variable.

Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/44x_tlb.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
index 9f71b8d..5f3cff8 100644
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -47,6 +47,7 @@
 #ifdef DEBUG
 void kvmppc_dump_tlbs(struct kvm_vcpu *vcpu)
 {
+	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
 	struct kvmppc_44x_tlbe *tlbe;
 	int i;
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 26/35] KVM: PPC: allow ppc440gp to pass the compatibility check
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (14 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 25/35] KVM: PPC: fix compilation of "dump tlbs" debug function Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 27/35] KVM: PPC: Enable napping only for Book3s_64 Alexander Graf
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

From: Hollis Blanchard <hollis_blanchard@mentor.com>

Match only the first part of cur_cpu_spec->platform.

440GP (the first 440 processor) is identified by the string "ppc440gp", while
all later 440 processors use simply "ppc440".

Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/44x.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/44x.c b/arch/powerpc/kvm/44x.c
index e7b1f3f..74d0e74 100644
--- a/arch/powerpc/kvm/44x.c
+++ b/arch/powerpc/kvm/44x.c
@@ -43,7 +43,7 @@ int kvmppc_core_check_processor_compat(void)
 {
 	int r;
 
-	if (strcmp(cur_cpu_spec->platform, "ppc440") == 0)
+	if (strncmp(cur_cpu_spec->platform, "ppc440", 6) == 0)
 		r = 0;
 	else
 		r = -ENOTSUPP;
@@ -72,6 +72,7 @@ int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu)
 	/* Since the guest can directly access the timebase, it must know the
 	 * real timebase frequency. Accordingly, it must see the state of
 	 * CCR1[TCS]. */
+	/* XXX CCR1 doesn't exist on all 440 SoCs. */
 	vcpu->arch.ccr1 = mfspr(SPRN_CCR1);
 
 	for (i = 0; i < ARRAY_SIZE(vcpu_44x->shadow_refs); i++)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 27/35] KVM: PPC: Enable napping only for Book3s_64
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (15 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 26/35] KVM: PPC: allow ppc440gp to pass the compatibility check Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 28/35] KVM: PPC: Implement Level interrupts on Book3S Alexander Graf
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

Previously I incorrectly enabled napping for BookE as well, which would result
in needless dcache flushes. Since we only need to force-enable napping on
Book3s_64, because it doesn't go into MSR_POW otherwise, we can just #ifdef
that code for this particular platform.

Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kernel/kvm.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 95aed6b..293765a 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -583,8 +583,10 @@ static int __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
 		kvm_use_magic_page();
 
+#ifdef CONFIG_PPC_BOOK3S_64
 	/* Enable napping */
 	powersave_nap = 1;
+#endif
 
 free_tmp:
 	kvm_free_tmp();
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 28/35] KVM: PPC: Implement Level interrupts on Book3S
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (16 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 27/35] KVM: PPC: Enable napping only for Book3s_64 Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 30/35] KVM: PPC: Expose level based interrupt cap Alexander Graf
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

The current interrupt logic is just completely broken. We get a notification
from user space telling us that an interrupt is there. But then user space
expects us to simply acknowledge the interrupt once we deliver it to the
guest.

This is not how real hardware works though. On real hardware, the interrupt
controller pulls the external interrupt line until it gets notified that the
interrupt was received.

So in reality we have two events: pulling and letting go of the interrupt line.

To maintain backwards compatibility, I added a new request for the pulling
part. The letting-go part was already implemented earlier.

With this in place, we can now finally start guests that do not stall and
stop working at random times.

This patch implements the above logic for Book3S.
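
From user space, the intended flow then looks roughly like this (hedged
sketch; "vcpu_fd" and the EOI step are illustrative, the constants are the
ones this series adds):

	struct kvm_interrupt irq;

	/* interrupt controller pulls the external line */
	irq.irq = KVM_INTERRUPT_SET_LEVEL;
	ioctl(vcpu_fd, KVM_INTERRUPT, &irq);

	/* ... guest handles the 0x500, device model observes the EOI ... */

	/* interrupt controller lets go of the line again */
	irq.irq = KVM_INTERRUPT_UNSET;
	ioctl(vcpu_fd, KVM_INTERRUPT, &irq);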

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm.h     |    1 +
 arch/powerpc/include/asm/kvm_asm.h |    4 +++-
 arch/powerpc/kvm/book3s.c          |   30 +++++++++++++++++++++++++++---
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
index 6c5547d..18ea696 100644
--- a/arch/powerpc/include/asm/kvm.h
+++ b/arch/powerpc/include/asm/kvm.h
@@ -86,5 +86,6 @@ struct kvm_guest_debug_arch {
 
 #define KVM_INTERRUPT_SET	-1U
 #define KVM_INTERRUPT_UNSET	-2U
+#define KVM_INTERRUPT_SET_LEVEL	-3U
 
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/arch/powerpc/include/asm/kvm_asm.h b/arch/powerpc/include/asm/kvm_asm.h
index c5ea4cd..5b75046 100644
--- a/arch/powerpc/include/asm/kvm_asm.h
+++ b/arch/powerpc/include/asm/kvm_asm.h
@@ -58,6 +58,7 @@
 #define BOOK3S_INTERRUPT_INST_STORAGE	0x400
 #define BOOK3S_INTERRUPT_INST_SEGMENT	0x480
 #define BOOK3S_INTERRUPT_EXTERNAL	0x500
+#define BOOK3S_INTERRUPT_EXTERNAL_LEVEL	0x501
 #define BOOK3S_INTERRUPT_ALIGNMENT	0x600
 #define BOOK3S_INTERRUPT_PROGRAM	0x700
 #define BOOK3S_INTERRUPT_FP_UNAVAIL	0x800
@@ -84,7 +85,8 @@
 #define BOOK3S_IRQPRIO_EXTERNAL			13
 #define BOOK3S_IRQPRIO_DECREMENTER		14
 #define BOOK3S_IRQPRIO_PERFORMANCE_MONITOR	15
-#define BOOK3S_IRQPRIO_MAX			16
+#define BOOK3S_IRQPRIO_EXTERNAL_LEVEL		16
+#define BOOK3S_IRQPRIO_MAX			17
 
 #define BOOK3S_HFLAG_DCBZ32			0x1
 #define BOOK3S_HFLAG_SLB			0x2
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 5833df7..e316847 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -186,6 +186,7 @@ static int kvmppc_book3s_vec2irqprio(unsigned int vec)
 	case 0x400: prio = BOOK3S_IRQPRIO_INST_STORAGE;		break;
 	case 0x480: prio = BOOK3S_IRQPRIO_INST_SEGMENT;		break;
 	case 0x500: prio = BOOK3S_IRQPRIO_EXTERNAL;		break;
+	case 0x501: prio = BOOK3S_IRQPRIO_EXTERNAL_LEVEL;	break;
 	case 0x600: prio = BOOK3S_IRQPRIO_ALIGNMENT;		break;
 	case 0x700: prio = BOOK3S_IRQPRIO_PROGRAM;		break;
 	case 0x800: prio = BOOK3S_IRQPRIO_FP_UNAVAIL;		break;
@@ -246,13 +247,19 @@ void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu)
 void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                 struct kvm_interrupt *irq)
 {
-	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
+	unsigned int vec = BOOK3S_INTERRUPT_EXTERNAL;
+
+	if (irq->irq == KVM_INTERRUPT_SET_LEVEL)
+		vec = BOOK3S_INTERRUPT_EXTERNAL_LEVEL;
+
+	kvmppc_book3s_queue_irqprio(vcpu, vec);
 }
 
 void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
                                   struct kvm_interrupt *irq)
 {
 	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
+	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL_LEVEL);
 }
 
 int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
@@ -281,6 +288,7 @@ int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 		vec = BOOK3S_INTERRUPT_DECREMENTER;
 		break;
 	case BOOK3S_IRQPRIO_EXTERNAL:
+	case BOOK3S_IRQPRIO_EXTERNAL_LEVEL:
 		deliver = (vcpu->arch.shared->msr & MSR_EE) && !crit;
 		vec = BOOK3S_INTERRUPT_EXTERNAL;
 		break;
@@ -343,6 +351,23 @@ int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 	return deliver;
 }
 
+/*
+ * This function determines if an irqprio should be cleared once issued.
+ */
+static bool clear_irqprio(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	switch (priority) {
+		case BOOK3S_IRQPRIO_DECREMENTER:
+			/* DEC interrupts get cleared by mtdec */
+			return false;
+		case BOOK3S_IRQPRIO_EXTERNAL_LEVEL:
+			/* External interrupts get cleared by userspace */
+			return false;
+	}
+
+	return true;
+}
+
 void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu)
 {
 	unsigned long *pending = &vcpu->arch.pending_exceptions;
@@ -356,8 +381,7 @@ void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu)
 	priority = __ffs(*pending);
 	while (priority < BOOK3S_IRQPRIO_MAX) {
 		if (kvmppc_book3s_irqprio_deliver(vcpu, priority) &&
-		    (priority != BOOK3S_IRQPRIO_DECREMENTER)) {
-			/* DEC interrupts get cleared by mtdec */
+		    clear_irqprio(vcpu, priority)) {
 			clear_bit(priority, &vcpu->arch.pending_exceptions);
 			break;
 		}
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 29/35] KVM: PPC: Fix CONFIG_KVM_GUEST && !CONFIG_KVM case
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (10 preceding siblings ...)
  2010-08-31  2:32   ` [PATCH 24/35] KVM: PPC: initialize IVORs in addition to IVPR Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 33/35] KVM: PPC: e500_tlb: Fix a minor copy-paste tracing bug Alexander Graf
  2010-08-31  2:32   ` [PATCH 35/35] KVM: PPC: Add documentation for magic page enhancements Alexander Graf
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

When CONFIG_KVM_GUEST is selected, but CONFIG_KVM is not, we were missing
some defines in asm-offsets.c and included too many headers in other places.

This patch makes the above configuration work.

Reported-by: Stephen Rothwell <sfr-3FnU+UHB4dNDw9hX6IcOSA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kernel/asm-offsets.c |    6 +++---
 arch/powerpc/kernel/kvm.c         |    1 -
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 293d2a8..7f0d6fc 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -48,11 +48,11 @@
 #ifdef CONFIG_PPC_ISERIES
 #include <asm/iseries/alpaca.h>
 #endif
-#ifdef CONFIG_KVM
+#if defined(CONFIG_KVM) || defined(CONFIG_KVM_GUEST)
 #include <linux/kvm_host.h>
-#ifndef CONFIG_BOOKE
-#include <asm/kvm_book3s.h>
 #endif
+#if defined(CONFIG_KVM) && defined(CONFIG_PPC_BOOK3S)
+#include <asm/kvm_book3s.h>
 #endif
 
 #ifdef CONFIG_PPC32
diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index 293765a..428d0e5 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -25,7 +25,6 @@
 #include <linux/of.h>
 
 #include <asm/reg.h>
-#include <asm/kvm_ppc.h>
 #include <asm/sections.h>
 #include <asm/cacheflush.h>
 #include <asm/disassemble.h>
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 30/35] KVM: PPC: Expose level based interrupt cap
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (17 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 28/35] KVM: PPC: Implement Level interrupts on Book3S Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 31/35] KVM: PPC: Implement level interrupts for BookE Alexander Graf
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

Now that we have all the level interrupt magic in place, let's
expose the capability to user space, so it can make use of it!

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/powerpc.c |    1 +
 include/linux/kvm.h        |    1 +
 2 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 028891c..2f87a16 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -192,6 +192,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_PPC_SEGSTATE:
 	case KVM_CAP_PPC_PAIRED_SINGLES:
 	case KVM_CAP_PPC_UNSET_IRQ:
+	case KVM_CAP_PPC_IRQ_LEVEL:
 	case KVM_CAP_ENABLE_CAP:
 	case KVM_CAP_PPC_OSI:
 	case KVM_CAP_PPC_GET_PVINFO:
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 3707704..919ae53 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -539,6 +539,7 @@ struct kvm_ppc_pvinfo {
 #define KVM_CAP_XCRS 56
 #endif
 #define KVM_CAP_PPC_GET_PVINFO 57
+#define KVM_CAP_PPC_IRQ_LEVEL 58
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 31/35] KVM: PPC: Implement level interrupts for BookE
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (18 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 30/35] KVM: PPC: Expose level based interrupt cap Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 32/35] KVM: PPC: Document KVM_INTERRUPT ioctl Alexander Graf
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

BookE also wants to support level-based interrupts, so let's implement
all the necessary logic there. We need to play a small trick here, because
the irqprios are assigned 1:1 to architecture-defined values. But since
there is some space left, we can just pick an unused one and move it later
on; it's internal anyway.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/booke.c |   17 +++++++++++++++--
 arch/powerpc/kvm/booke.h |    4 +++-
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 835f6d0..77575d0 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -131,13 +131,19 @@ void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu)
 void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                 struct kvm_interrupt *irq)
 {
-	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_EXTERNAL);
+	unsigned int prio = BOOKE_IRQPRIO_EXTERNAL;
+
+	if (irq->irq == KVM_INTERRUPT_SET_LEVEL)
+		prio = BOOKE_IRQPRIO_EXTERNAL_LEVEL;
+
+	kvmppc_booke_queue_irqprio(vcpu, prio);
 }
 
 void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
                                   struct kvm_interrupt *irq)
 {
 	clear_bit(BOOKE_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions);
+	clear_bit(BOOKE_IRQPRIO_EXTERNAL_LEVEL, &vcpu->arch.pending_exceptions);
 }
 
 /* Deliver the interrupt of the corresponding priority, if possible. */
@@ -150,6 +156,7 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 	ulong crit_raw = vcpu->arch.shared->critical;
 	ulong crit_r1 = kvmppc_get_gpr(vcpu, 1);
 	bool crit;
+	bool keep_irq = false;
 
 	/* Truncate crit indicators in 32 bit mode */
 	if (!(vcpu->arch.shared->msr & MSR_SF)) {
@@ -162,6 +169,11 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 	/* ... and we're in supervisor mode */
 	crit = crit && !(vcpu->arch.shared->msr & MSR_PR);
 
+	if (priority == BOOKE_IRQPRIO_EXTERNAL_LEVEL) {
+		priority = BOOKE_IRQPRIO_EXTERNAL;
+		keep_irq = true;
+	}
+
 	switch (priority) {
 	case BOOKE_IRQPRIO_DTLB_MISS:
 	case BOOKE_IRQPRIO_DATA_STORAGE:
@@ -214,7 +226,8 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
 			vcpu->arch.shared->dar = vcpu->arch.queued_dear;
 		kvmppc_set_msr(vcpu, vcpu->arch.shared->msr & msr_mask);
 
-		clear_bit(priority, &vcpu->arch.pending_exceptions);
+		if (!keep_irq)
+			clear_bit(priority, &vcpu->arch.pending_exceptions);
 	}
 
 	return allowed;
diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
index 88258ac..492bb70 100644
--- a/arch/powerpc/kvm/booke.h
+++ b/arch/powerpc/kvm/booke.h
@@ -46,7 +46,9 @@
 #define BOOKE_IRQPRIO_FIT 17
 #define BOOKE_IRQPRIO_DECREMENTER 18
 #define BOOKE_IRQPRIO_PERFORMANCE_MONITOR 19
-#define BOOKE_IRQPRIO_MAX 19
+/* Internal pseudo-irqprio for level triggered externals */
+#define BOOKE_IRQPRIO_EXTERNAL_LEVEL 20
+#define BOOKE_IRQPRIO_MAX 20
 
 extern unsigned long kvmppc_booke_handlers;
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 32/35] KVM: PPC: Document KVM_INTERRUPT ioctl
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (19 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 31/35] KVM: PPC: Implement level interrupts for BookE Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-08-31  2:32 ` [PATCH 34/35] KVM: PPC: Fix compile error in e500_tlb.c Alexander Graf
  2010-09-01  7:50 ` [PULL 00/35] KVM: PPC: End-August patch queue Avi Kivity
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

This adds some documentation for the KVM_INTERRUPT special cases that
PowerPC now implements.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 Documentation/kvm/api.txt |   33 +++++++++++++++++++++++++++++++--
 1 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index 44d9893..24d6341 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -320,13 +320,13 @@ struct kvm_translation {
 4.15 KVM_INTERRUPT
 
 Capability: basic
-Architectures: x86
+Architectures: x86, ppc
 Type: vcpu ioctl
 Parameters: struct kvm_interrupt (in)
 Returns: 0 on success, -1 on error
 
 Queues a hardware interrupt vector to be injected.  This is only
-useful if in-kernel local APIC is not used.
+useful if in-kernel local APIC or equivalent is not used.
 
 /* for KVM_INTERRUPT */
 struct kvm_interrupt {
@@ -334,8 +334,37 @@ struct kvm_interrupt {
 	__u32 irq;
 };
 
+X86:
+
 Note 'irq' is an interrupt vector, not an interrupt pin or line.
 
+PPC:
+
+Queues an external interrupt to be injected. This ioctl is overloaded
+with 3 different irq values:
+
+a) KVM_INTERRUPT_SET
+
+  This injects an edge type external interrupt into the guest once it's ready
+  to receive interrupts. When injected, the interrupt is done.
+
+b) KVM_INTERRUPT_UNSET
+
+  This unsets any pending interrupt.
+
+  Only available with KVM_CAP_PPC_UNSET_IRQ.
+
+c) KVM_INTERRUPT_SET_LEVEL
+
+  This injects a level type external interrupt into the guest context. The
+  interrupt stays pending until a specific ioctl with KVM_INTERRUPT_UNSET
+  is triggered.
+
+  Only available with KVM_CAP_PPC_IRQ_LEVEL.
+
+Note that any value for 'irq' other than the ones stated above is invalid
+and incurs unexpected behavior.
+
 4.16 KVM_DEBUG_GUEST
 
 Capability: basic
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 33/35] KVM: PPC: e500_tlb: Fix a minor copy-paste tracing bug
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (11 preceding siblings ...)
  2010-08-31  2:32   ` [PATCH 29/35] KVM: PPC: Fix CONFIG_KVM_GUEST && !CONFIG_KVM case Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  2010-08-31  2:32   ` [PATCH 35/35] KVM: PPC: Add documentation for magic page enhancements Alexander Graf
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

From: Kyle Moffett <Kyle.D.Moffett-X8CqP27nNzzQT0dZR+AlfA@public.gmane.org>

The kvmppc_e500_stlbe_invalidate() function was trying to pass too many
parameters to trace_kvm_stlb_inval().  This appears to be a bad
copy-paste from a call to trace_kvm_stlb_write().

Signed-off-by: Kyle Moffett <Kyle.D.Moffett-X8CqP27nNzzQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/e500_tlb.c |    6 +-----
 1 files changed, 1 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
index 66845a5..a413883 100644
--- a/arch/powerpc/kvm/e500_tlb.c
+++ b/arch/powerpc/kvm/e500_tlb.c
@@ -226,11 +226,7 @@ static void kvmppc_e500_stlbe_invalidate(struct kvmppc_vcpu_e500 *vcpu_e500,
 
 	kvmppc_e500_shadow_release(vcpu_e500, tlbsel, esel);
 	stlbe->mas1 = 0;
-	/* XXX doesn't compile */
-#if 0
-	trace_kvm_stlb_inval(index_of(tlbsel, esel), stlbe->mas1, stlbe->mas2,
-			     stlbe->mas3, stlbe->mas7);
-#endif
+	trace_kvm_stlb_inval(index_of(tlbsel, esel));
 }
 
 static void kvmppc_e500_tlb1_invalidate(struct kvmppc_vcpu_e500 *vcpu_e500,
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 34/35] KVM: PPC: Fix compile error in e500_tlb.c
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (20 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 32/35] KVM: PPC: Document KVM_INTERRUPT ioctl Alexander Graf
@ 2010-08-31  2:32 ` Alexander Graf
  2010-09-01  7:50 ` [PULL 00/35] KVM: PPC: End-August patch queue Avi Kivity
  22 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc; +Cc: Linuxppc-dev, KVM list

The e500_tlb.c file didn't compile for me due to the following error:

arch/powerpc/kvm/e500_tlb.c: In function ‘kvmppc_e500_shadow_map’:
arch/powerpc/kvm/e500_tlb.c:300: error: format ‘%lx’ expects type ‘long unsigned int’, but argument 2 has type ‘gfn_t’

So let's explicitly cast the argument to make printk happy.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/e500_tlb.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
index a413883..d6d6d47 100644
--- a/arch/powerpc/kvm/e500_tlb.c
+++ b/arch/powerpc/kvm/e500_tlb.c
@@ -297,7 +297,8 @@ static inline void kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	/* Get reference to new page. */
 	new_page = gfn_to_page(vcpu_e500->vcpu.kvm, gfn);
 	if (is_error_page(new_page)) {
-		printk(KERN_ERR "Couldn't get guest page for gfn %lx!\n", gfn);
+		printk(KERN_ERR "Couldn't get guest page for gfn %lx!\n",
+				(long)gfn);
 		kvm_release_page_clean(new_page);
 		return;
 	}
-- 
1.6.0.2
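
As a user-space illustration of the format warning being silenced, the sketch
below uses an assumed gfn_t stand-in (in the kernel its width depends on the
configuration, which is the whole problem) and mirrors the cast the patch adds:

#include <stdio.h>
#include <stdint.h>

/* Assumed stand-in; the kernel's gfn_t width varies with the config. */
typedef uint64_t gfn_t;

int main(void)
{
	gfn_t gfn = 0x1234;

	/*
	 * printf("gfn %lx\n", gfn) would draw the same complaint from gcc
	 * on a 32-bit build, because %lx expects a long-sized argument.
	 */
	printf("Couldn't get guest page for gfn %lx!\n", (long)gfn);
	return 0;
}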



* [PATCH 35/35] KVM: PPC: Add documentation for magic page enhancements
       [not found] ` <1283221937-21006-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
                     ` (12 preceding siblings ...)
  2010-08-31  2:32   ` [PATCH 33/35] KVM: PPC: e500_tlb: Fix a minor copy-paste tracing bug Alexander Graf
@ 2010-08-31  2:32   ` Alexander Graf
  13 siblings, 0 replies; 37+ messages in thread
From: Alexander Graf @ 2010-08-31  2:32 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: Linuxppc-dev, KVM list

This documents how to detect additional features inside the magic
page when a guest maps it.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 Documentation/kvm/ppc-pv.txt |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/Documentation/kvm/ppc-pv.txt b/Documentation/kvm/ppc-pv.txt
index 922cf95..a7f2244 100644
--- a/Documentation/kvm/ppc-pv.txt
+++ b/Documentation/kvm/ppc-pv.txt
@@ -102,6 +102,20 @@ struct kvm_vcpu_arch_shared {
 Additions to the page must only occur at the end. Struct fields are always 32
 or 64 bit aligned, depending on them being 32 or 64 bit wide respectively.
 
+Magic page features
+===================
+
+When mapping the magic page using the KVM hypercall KVM_HC_PPC_MAP_MAGIC_PAGE,
+a second return value is passed to the guest. This second return value contains
+a bitmap of available features inside the magic page.
+
+The following enhancements to the magic page are currently available:
+
+  KVM_MAGIC_FEAT_SR		Maps SR registers r/w in the magic page
+
+For enhanced features in the magic page, please check for the existence of each
+feature before using it!
+
 MSR bits
 ========
 
-- 
1.6.0.2
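
To make the documented convention concrete, the stand-alone sketch below shows
the guest-side pattern: map the magic page, then test the returned feature
bitmap before relying on the optional SR fields. The hypercall is stubbed out,
and both the bit value chosen for KVM_MAGIC_FEAT_SR and the address are
assumptions for illustration only:

#include <stdio.h>

#define KVM_MAGIC_FEAT_SR	(1 << 0)	/* assumed bit position */

/* Stub standing in for the KVM_HC_PPC_MAP_MAGIC_PAGE hypercall. */
static int map_magic_page(unsigned long ea, unsigned long *features)
{
	(void)ea;
	*features = KVM_MAGIC_FEAT_SR;	/* pretend the host advertises SRs */
	return 0;
}

int main(void)
{
	unsigned long features;

	/* Address picked only for the example. */
	if (map_magic_page(0xfffff000UL, &features))
		return 1;

	if (features & KVM_MAGIC_FEAT_SR)
		printf("SR registers can be read/written through the magic page\n");
	else
		printf("Fall back to trapping mfsr/mtsr\n");

	return 0;
}

Testing feature bits instead of probing fields directly is what keeps older
hosts working: a guest that never sees KVM_MAGIC_FEAT_SR simply ignores the SR
area, in line with the rule that additions to the page only happen at the end.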


* Re: [PULL 00/35] KVM: PPC: End-August patch queue
  2010-08-31  2:31 [PULL 00/35] KVM: PPC: End-August patch queue Alexander Graf
                   ` (21 preceding siblings ...)
  2010-08-31  2:32 ` [PATCH 34/35] KVM: PPC: Fix compile error in e500_tlb.c Alexander Graf
@ 2010-09-01  7:50 ` Avi Kivity
  22 siblings, 0 replies; 37+ messages in thread
From: Avi Kivity @ 2010-09-01  7:50 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, Linuxppc-dev, KVM list

  On 08/31/2010 05:31 AM, Alexander Graf wrote:
> Howdy,
>
> This is my local patch queue with stuff that has accumulated over the last
> weeks on KVM for PPC with some last minute fixes, speedups and debugging help
> that I needed for the KVM Forum ;-).
>
> The highlights of this set are:
>
>    - Converted most important debug points to tracepoints
>    - Flush less PTEs (speedup)
>    - Go back to our own hash (less duplicates)
>    - Make SRs guest settable (speedup for 32 bit guests)
>    - Remove r30/r31 restrictions from PV hooks (speedup!)
>    - Fix random breakages
>    - Fix random guest stalls
>    - 440GP host support (Thanks Hollis!)
>    - Reliable interrupt injection
>
> Keep in mind that this is the first version that is stable on PPC32 hosts.
> All versions prior to this could occupy otherwise used segment entries and
> thus crash your machine :-).
>
> It is also the first version that is stable with PPC64 guests, because they
> require more sophisticated interrupt injection logic for which qemu patches
> are also required.
>
> Please pull this tree from:
>
>      git://github.com/agraf/linux-2.6.git kvm-ppc-next
>
> Have fun with more accurate, faster and less buggy KVM on PowerPC!

Pulled, thanks.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



