* [PATCH 00/15] KVM: PPC: MOL bringup patches
@ 2010-03-05 16:50 ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Mac-on-Linux has always lacked PPC64 host support. This is going to
change now!

This patchset contains minor patches to enable MOL, but is mostly about
bug fixes that came out of running Mac OS X. With this set and a pretty
small patch to MOL I have 10.4.11 running as a guest on a 970MP host.

I'll send the MOL patches to the respective mailing list in the next few days.

Alexander Graf (15):
  KVM: PPC: Make register read/write wrappers always work
  KVM: PPC: Ensure split mode works
  KVM: PPC: Allow userspace to unset the IRQ line
  KVM: PPC: Make DSISR 32 bits wide
  KVM: PPC: Book3S_32 guest MMU fixes
  KVM: PPC: Split instruction reading out
  KVM: PPC: Don't reload FPU with invalid values
  KVM: PPC: Load VCPU for register fetching
  KVM: PPC: Implement mfsr emulation
  KVM: PPC: Implement BAT reads
  KVM: PPC: Make XER load 32 bit
  KVM: PPC: Implement emulation for lbzux and lhax
  KVM: PPC: Implement alignment interrupt
  KVM: Add support for enabling capabilities per-vcpu
  KVM: PPC: Add OSI hypercall interface

 arch/powerpc/include/asm/kvm.h          |    3 +
 arch/powerpc/include/asm/kvm_book3s.h   |   19 ++++-
 arch/powerpc/include/asm/kvm_host.h     |    4 +-
 arch/powerpc/include/asm/kvm_ppc.h      |   21 ++++-
 arch/powerpc/kvm/book3s.c               |  124 ++++++++++++++++++++++---------
 arch/powerpc/kvm/book3s_32_mmu.c        |   30 ++++++--
 arch/powerpc/kvm/book3s_64_emulate.c    |   88 ++++++++++++++++++++++
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 arch/powerpc/kvm/book3s_64_slb.S        |    2 +-
 arch/powerpc/kvm/emulate.c              |   20 +++++
 arch/powerpc/kvm/powerpc.c              |   40 ++++++++++-
 include/linux/kvm.h                     |   14 ++++
 12 files changed, 310 insertions(+), 57 deletions(-)

* [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have wrappers for GPR and other register read/write accesses,
because the contents of registers can live either in the PACA
or in the VCPU struct.

There's nothing that says we have to have the guest vcpu loaded
when using these wrappers though, so let's introduce a flag that
tells us whether we're inside a vcpu_load context.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/include/asm/kvm_ppc.h    |   19 ++++++++++++++-----
 arch/powerpc/kvm/book3s.c             |    2 ++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index e6ea974..3c7b335 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -78,6 +78,7 @@ struct kvmppc_vcpu_book3s {
 		u64 vsid;
 	} slb_shadow[64];
 	u8 slb_shadow_max;
+	u8 shadow_vcpu_paca;
 	struct kvmppc_sr sr[16];
 	struct kvmppc_bat ibat[8];
 	struct kvmppc_bat dbat[8];
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c7fcdd7..c3912e9 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -151,9 +151,12 @@ static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
-	if ( num < 14 )
-		return get_paca()->shadow_vcpu.gpr[num];
-	else
+	if ( num < 14 ) {
+		if (to_book3s(vcpu)->shadow_vcpu_paca)
+			return get_paca()->shadow_vcpu.gpr[num];
+		else
+			return to_book3s(vcpu)->shadow_vcpu.gpr[num];
+	} else
 		return vcpu->arch.gpr[num];
 }
 
@@ -165,7 +168,10 @@ static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
 
 static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 {
-	return get_paca()->shadow_vcpu.cr;
+	if (to_book3s(vcpu)->shadow_vcpu_paca)
+		return get_paca()->shadow_vcpu.cr;
+	else
+		return to_book3s(vcpu)->shadow_vcpu.cr;
 }
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
@@ -176,7 +182,10 @@ static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
 
 static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
-	return get_paca()->shadow_vcpu.xer;
+	if (to_book3s(vcpu)->shadow_vcpu_paca)
+		return get_paca()->shadow_vcpu.xer;
+	else
+		return to_book3s(vcpu)->shadow_vcpu.xer;
 }
 
 #else
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 94c229d..8a04ec6 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -73,10 +73,12 @@ void kvmppc_core_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	memcpy(&get_paca()->shadow_vcpu, &to_book3s(vcpu)->shadow_vcpu,
 	       sizeof(get_paca()->shadow_vcpu));
 	get_paca()->kvm_slb_max = to_book3s(vcpu)->slb_shadow_max;
+	to_book3s(vcpu)->shadow_vcpu_paca = true;
 }
 
 void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	to_book3s(vcpu)->shadow_vcpu_paca = false;
 	memcpy(to_book3s(vcpu)->slb_shadow, get_paca()->kvm_slb, sizeof(get_paca()->kvm_slb));
 	memcpy(&to_book3s(vcpu)->shadow_vcpu, &get_paca()->shadow_vcpu,
 	       sizeof(get_paca()->shadow_vcpu));
-- 
1.6.0.2



* [PATCH 02/15] KVM: PPC: Ensure split mode works
@ 2010-03-05 16:50     ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

On PowerPC we can go into MMU Split Mode. That means that either
data relocation is on but instruction relocation is off or vice
versa.

That mode didn't work properly: we weren't always flushing entries
when going into a new split mode, potentially mapping different code
or data than we're supposed to.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    9 +++---
 arch/powerpc/kvm/book3s.c             |   46 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 3c7b335..6bc61d7 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -100,10 +100,11 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL	0xfffffffffff00000
-#define VSID_REAL_DR	0xffffffffffe00000
-#define VSID_REAL_IR	0xffffffffffd00000
-#define VSID_BAT	0xffffffffffc00000
+#define VSID_REAL_DR	0x7ffffffffff00000
+#define VSID_REAL_IR	0x7fffffffffe00000
+#define VSID_SPLIT_MASK	0x7fffffffffe00000
+#define VSID_REAL	0x7fffffffffc00000
+#define VSID_BAT	0x7fffffffffb00000
 #define VSID_PR		0x8000000000000000
 
 extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 ea, u64 ea_mask);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 8a04ec6..3f29959 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -135,6 +135,14 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 
 	if (((vcpu->arch.msr & (MSR_IR|MSR_DR)) != (old_msr & (MSR_IR|MSR_DR))) ||
 	    (vcpu->arch.msr & MSR_PR) != (old_msr & MSR_PR)) {
+		bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+		bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
+
+		/* Flush split mode PTEs */
+		if (dr != ir)
+			kvmppc_mmu_pte_vflush(vcpu, VSID_SPLIT_MASK,
+					      VSID_SPLIT_MASK);
+
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
 	}
@@ -397,15 +405,7 @@ static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
 	} else {
 		pte->eaddr = eaddr;
 		pte->raddr = eaddr & 0xffffffff;
-		pte->vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte->vpage |= VSID_REAL;
-		case MSR_DR:
-			pte->vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte->vpage |= VSID_REAL_IR;
-		}
+		pte->vpage = VSID_REAL | eaddr >> 12;
 		pte->may_read = true;
 		pte->may_write = true;
 		pte->may_execute = true;
@@ -514,12 +514,10 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	int page_found = 0;
 	struct kvmppc_pte pte;
 	bool is_mmio = false;
+	bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+	bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
 
-	if ( vec == BOOK3S_INTERRUPT_DATA_STORAGE ) {
-		relocated = (vcpu->arch.msr & MSR_DR);
-	} else {
-		relocated = (vcpu->arch.msr & MSR_IR);
-	}
+	relocated = data ? dr : ir;
 
 	/* Resolve real address if translation turned on */
 	if (relocated) {
@@ -531,14 +529,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		pte.raddr = eaddr & 0xffffffff;
 		pte.eaddr = eaddr;
 		pte.vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte.vpage |= VSID_REAL;
-		case MSR_DR:
-			pte.vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte.vpage |= VSID_REAL_IR;
-		}
+	}
+
+	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
+	case 0:
+		pte.vpage |= VSID_REAL;
+		break;
+	case MSR_DR:
+		pte.vpage |= VSID_REAL_DR;
+		break;
+	case MSR_IR:
+		pte.vpage |= VSID_REAL_IR;
+		break;
 	}
 
 	if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
-- 
1.6.0.2


* [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Userspace can tell us that it wants to trigger an interrupt. But
so far it can't tell us that it wants to stop triggering one.

So let's interpret the parameter that the ioctl takes anyway to tell
us whether we want to raise or lower the interrupt line.
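
For illustration, a minimal userspace sketch of how this could be driven
(the helper below is hypothetical and not part of this patch; it assumes a
vcpu fd obtained via KVM_CREATE_VCPU):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Raise or lower the external interrupt line of a vcpu.
	 * KVM_INTERRUPT_SET/KVM_INTERRUPT_UNSET are the values this
	 * patch adds to asm/kvm.h. */
	static int set_irq_line(int vcpu_fd, int raised)
	{
		struct kvm_interrupt irq = {
			.irq = raised ? KVM_INTERRUPT_SET : KVM_INTERRUPT_UNSET,
		};

		return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
	}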

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm.h     |    3 +++
 arch/powerpc/include/asm/kvm_ppc.h |    2 ++
 arch/powerpc/kvm/book3s.c          |    6 ++++++
 arch/powerpc/kvm/powerpc.c         |    5 ++++-
 4 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
index 19bae31..6c5547d 100644
--- a/arch/powerpc/include/asm/kvm.h
+++ b/arch/powerpc/include/asm/kvm.h
@@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
 #define KVM_REG_QPR		0x0040
 #define KVM_REG_FQPR		0x0060
 
+#define KVM_INTERRUPT_SET	-1U
+#define KVM_INTERRUPT_UNSET	-2U
+
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c3912e9..6f92a70 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -92,6 +92,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                        struct kvm_interrupt *irq);
+extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                         struct kvm_interrupt *irq);
 
 extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                   unsigned int op, int *advance);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 3f29959..1830414 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -232,6 +232,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
 	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
 }
 
+void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                  struct kvm_interrupt *irq)
+{
+	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
+}
+
 int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 {
 	int deliver = 1;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 5a8eb95..a28a512 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -449,7 +449,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 {
-	kvmppc_core_queue_external(vcpu, irq);
+	if (irq->irq == KVM_INTERRUPT_UNSET)
+		kvmppc_core_dequeue_external(vcpu, irq);
+	else
+		kvmppc_core_queue_external(vcpu, irq);
 
 	if (waitqueue_active(&vcpu->wq)) {
 		wake_up_interruptible(&vcpu->wq);
-- 
1.6.0.2



* [PATCH 04/15] KVM: PPC: Make DSISR 32 bits wide
@ 2010-03-05 16:50     ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

DSISR is only defined as 32 bits wide, so let's reflect that in the
structs too. With the field shrunk to a u32, the exit path also has to
store it with stw instead of std, so it doesn't clobber whatever follows
it in the struct.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h   |    2 +-
 arch/powerpc/include/asm/kvm_host.h     |    2 +-
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6bc61d7..997fcc0 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -85,8 +85,8 @@ struct kvmppc_vcpu_book3s {
 	u64 hid[6];
 	u64 gqr[8];
 	int slb_nr;
+	u32 dsisr;
 	u64 sdr1;
-	u64 dsisr;
 	u64 hior;
 	u64 msr_mask;
 	u64 vsid_first;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 119deb4..0ebda67 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -260,7 +260,7 @@ struct kvm_vcpu_arch {
 
 	u32 last_inst;
 #ifdef CONFIG_PPC64
-	ulong fault_dsisr;
+	u32 fault_dsisr;
 #endif
 	ulong fault_dear;
 	ulong fault_esr;
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
index c1584d0..faca876 100644
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ b/arch/powerpc/kvm/book3s_64_interrupts.S
@@ -171,7 +171,7 @@ kvmppc_handler_highmem:
 	std	r3, VCPU_PC(r7)
 	std	r4, VCPU_SHADOW_SRR1(r7)
 	std	r5, VCPU_FAULT_DEAR(r7)
-	std	r6, VCPU_FAULT_DSISR(r7)
+	stw	r6, VCPU_FAULT_DSISR(r7)
 
 	ld	r5, VCPU_HFLAGS(r7)
 	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
-- 
1.6.0.2


* [PATCH 05/15] KVM: PPC: Book3S_32 guest MMU fixes
@ 2010-03-05 16:50     ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

This patch makes the VSID of mapped pages always reflect all the special
cases we have, like split mode.

It also changes the tlbie mask to 0x0ffff000 according to the spec. The mask
we used before was incorrect.
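
To make the new mask a bit more obvious, this is the slice of the effective
address it selects (my annotation, not code from this patch):

	/* 32-bit effective address with 256MB segments and 4K pages:
	 *   ea[31:28]  segment register index    ( 4 bits)
	 *   ea[27:12]  page index in the segment (16 bits) -> 0x0ffff000
	 *   ea[11:0]   byte offset in the page   (12 bits)
	 */
	#define EA_PAGE_MASK	0x0ffff000	/* bits the flush compares on */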

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s_32_mmu.c      |   30 +++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 997fcc0..1b76fa1 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -44,6 +44,7 @@ struct kvmppc_sr {
 	bool Ks;
 	bool Kp;
 	bool nx;
+	bool valid;
 };
 
 struct kvmppc_bat {
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1483a9b..7071e22 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -57,6 +57,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
+static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+					     u64 *vsid);
 
 static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
 {
@@ -66,13 +68,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 					 bool data)
 {
-	struct kvmppc_sr *sre = find_sr(to_book3s(vcpu), eaddr);
+	u64 vsid;
 	struct kvmppc_pte pte;
 
 	if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data))
 		return pte.vpage;
 
-	return (((u64)eaddr >> 12) & 0xffff) | (((u64)sre->vsid) << 16);
+	kvmppc_mmu_book3s_32_esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
+	return (((u64)eaddr >> 12) & 0xffff) | (vsid << 16);
 }
 
 static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
@@ -142,8 +145,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 				    bat->bepi_mask);
 		}
 		if ((eaddr & bat->bepi_mask) == bat->bepi) {
+			u64 vsid;
+			kvmppc_mmu_book3s_32_esid_to_vsid(vcpu,
+				eaddr >> SID_SHIFT, &vsid);
+			vsid <<= 16;
+			pte->vpage = (((u64)eaddr >> 12) & 0xffff) | vsid;
+
 			pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask);
-			pte->vpage = (eaddr >> 12) | VSID_BAT;
 			pte->may_read = bat->pp;
 			pte->may_write = bat->pp > 1;
 			pte->may_execute = true;
@@ -302,6 +310,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 	/* And then put in the new SR */
 	sre->raw = value;
 	sre->vsid = (value & 0x0fffffff);
+	sre->valid = (value & 0x80000000) ? false : true;
 	sre->Ks = (value & 0x40000000) ? true : false;
 	sre->Kp = (value & 0x20000000) ? true : false;
 	sre->nx = (value & 0x10000000) ? true : false;
@@ -312,7 +321,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 
 static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
 {
-	kvmppc_mmu_pte_flush(vcpu, ea, ~0xFFFULL);
+	kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
 }
 
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
@@ -333,15 +342,22 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
 		break;
 	case MSR_DR|MSR_IR:
 	{
-		ulong ea;
-		ea = esid << SID_SHIFT;
-		*vsid = find_sr(to_book3s(vcpu), ea)->vsid;
+		ulong ea = esid << SID_SHIFT;
+		struct kvmppc_sr *sr = find_sr(to_book3s(vcpu), ea);
+
+		if (!sr->valid)
+			return -1;
+
+		*vsid = sr->vsid;
 		break;
 	}
 	default:
 		BUG();
 	}
 
+	if (vcpu->arch.msr & MSR_PR)
+		*vsid |= VSID_PR;
+
 	return 0;
 }
 
-- 
1.6.0.2


* [PATCH 06/15] KVM: PPC: Split instruction reading out
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

The current check_ext function reads the instruction and then does
the checking. Let's split the reading out so we can reuse it for
different functions.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   24 ++++++++++++++++--------
 1 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 1830414..fd6ee5f 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -652,26 +652,34 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 	kvmppc_recalc_shadow_msr(vcpu);
 }
 
-static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 {
 	ulong srr0 = vcpu->arch.pc;
 	int ret;
 
-	/* Need to do paired single emulation? */
-	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
-		return EMULATE_DONE;
-
-	/* Read out the instruction */
 	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &vcpu->arch.last_inst, false);
 	if (ret == -ENOENT) {
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 33, 33, 1);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 34, 36, 0);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 42, 47, 0);
 		kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE);
-	} else if(ret == EMULATE_DONE) {
+		return EMULATE_AGAIN;
+	}
+
+	return EMULATE_DONE;
+}
+
+static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+{
+
+	/* Need to do paired single emulation? */
+	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
+		return EMULATE_DONE;
+
+	/* Read out the instruction */
+	if (kvmppc_read_inst(vcpu) == EMULATE_DONE)
 		/* Need to emulate */
 		return EMULATE_FAIL;
-	}
 
 	return EMULATE_AGAIN;
 }
-- 
1.6.0.2



* [PATCH 07/15] KVM: PPC: Don't reload FPU with invalid values
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When the guest activates the FPU, we load it up. That's fine when it
wasn't active on the host before, but if it was, we end up reloading FPU
values from the last time the FPU was deactivated on the host, without
writing the proper values back to the vcpu struct.

This patch checks if the FPU is enabled already and if so just doesn't
bother activating it, making FPU operations survive guest context switches.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index fd6ee5f..68c615b 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -703,6 +703,11 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 		return RESUME_GUEST;
 	}
 
+	/* We already own the ext */
+	if (vcpu->arch.guest_owned_ext & msr) {
+		return RESUME_GUEST;
+	}
+
 #ifdef DEBUG_EXT
 	printk(KERN_INFO "Loading up ext 0x%lx\n", msr);
 #endif
-- 
1.6.0.2



* [PATCH 08/15] KVM: PPC: Load VCPU for register fetching
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When trying to read or store vcpu register data, we should also make
sure the vcpu is actually loaded, so we're 100% sure we get the correct
values.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 68c615b..46c8954 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -957,6 +957,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	regs->pc = vcpu->arch.pc;
 	regs->cr = kvmppc_get_cr(vcpu);
 	regs->ctr = vcpu->arch.ctr;
@@ -977,6 +979,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
@@ -984,6 +988,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	vcpu->arch.pc = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
 	vcpu->arch.ctr = regs->ctr;
@@ -1003,6 +1009,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
-- 
1.6.0.2



* [PATCH 09/15] KVM: PPC: Implement mfsr emulation
@ 2010-03-05 16:50   ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We already emulate the mfsrin instruction, which passes the SR number
in a register value. But we lacked support for mfsr, which encodes the
SR number in the opcode.

So let's implement it.
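
For reference, mfsr carries the SR number in bits 12..15 (IBM numbering) of
the instruction word, which is what the kvmppc_get_field() call below pulls
out. In plain C that boils down to (illustration only, not part of the patch):

	static inline int mfsr_srnum(u32 inst)
	{
		/* instruction bits 12..15 == bits 19..16 counted from the LSB */
		return (inst >> 16) & 0xf;
	}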

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index c989214..8d7a78d 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -35,6 +35,7 @@
 #define OP_31_XOP_SLBMTE	402
 #define OP_31_XOP_SLBIE		434
 #define OP_31_XOP_SLBIA		498
+#define OP_31_XOP_MFSR		595
 #define OP_31_XOP_MFSRIN	659
 #define OP_31_XOP_SLBMFEV	851
 #define OP_31_XOP_EIOIO		854
@@ -90,6 +91,18 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		case OP_31_XOP_MTMSR:
 			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
 			break;
+		case OP_31_XOP_MFSR:
+		{
+			int srnum;
+
+			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
 		case OP_31_XOP_MFSRIN:
 		{
 			int srnum;
-- 
1.6.0.2



* [PATCH 10/15] KVM: PPC: Implement BAT reads
@ 2010-03-05 16:50     ` Alexander Graf
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

BATs can't only be written to, you can also read them out!
So let's implement emulation for reading BAT values back.

While at it, I also made BAT setting flush the segment cache,
so we're absolutely sure there's no MMU state left when writing
BATs.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   35 ++++++++++++++++++++++++++++++++++
 1 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 8d7a78d..39d5003 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -239,6 +239,34 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
 	}
 }
 
+static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	if (sprn % 2)
+		return bat->raw >> 32;
+	else
+		return bat->raw;
+}
+
 static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
@@ -290,6 +318,7 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 		/* BAT writes happen so rarely that we're ok to flush
 		 * everything here */
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
 		break;
 	case SPRN_HID0:
 		to_book3s(vcpu)->hid[0] = spr_val;
@@ -373,6 +402,12 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	int emulated = EMULATE_DONE;
 
 	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+		break;
 	case SPRN_SDR1:
 		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
 		break;
-- 
1.6.0.2


* [PATCH 11/15] KVM: PPC: Make XER load 32 bit
  2010-03-05 16:50 ` Alexander Graf
@ 2010-03-05 16:50   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have a 32-bit field in the PACA to store XER in, and we store XER
there with an stw. But then we load it back with ld, completely
screwing it up on every entry.

Welcome to the Big Endian world.
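
For illustration, here is a minimal C sketch of why that breaks (the
struct and field names are made up, only the big-endian layout matters):

	/* hypothetical layout, for illustration only */
	struct paca_sketch {
		u32 kvm_xer;	/* stw stores the guest's XER here */
		u32 neighbour;	/* whatever field happens to follow */
	};

	/*
	 * On a big-endian host, an 8-byte 'ld' from &kvm_xer returns
	 *   ((u64)kvm_xer << 32) | neighbour
	 * so the subsequent mtxer picks up the neighbour's bits where
	 * the interesting XER bits live. A 4-byte 'lwz' returns just
	 * kvm_xer, matching the stw that stored it.
	 */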

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 35b7627..0919679 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -145,7 +145,7 @@ slb_do_enter:
 	lwz	r11, (PACA_KVM_CR)(r13)
 	mtcr	r11
 
-	ld	r11, (PACA_KVM_XER)(r13)
+	lwz	r11, (PACA_KVM_XER)(r13)
 	mtxer	r11
 
 	ld	r11, (PACA_KVM_R11)(r13)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 11/15] KVM: PPC: Make XER load 32 bit
@ 2010-03-05 16:50   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have a 32-bit field in the PACA to store XER in, and we store XER
there with an stw. But then we load it back with ld, completely
screwing it up on every entry.

Welcome to the Big Endian world.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 35b7627..0919679 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -145,7 +145,7 @@ slb_do_enter:
 	lwz	r11, (PACA_KVM_CR)(r13)
 	mtcr	r11
 
-	ld	r11, (PACA_KVM_XER)(r13)
+	lwz	r11, (PACA_KVM_XER)(r13)
 	mtxer	r11
 
 	ld	r11, (PACA_KVM_R11)(r13)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 12/15] KVM: PPC: Implement emulation for lbzux and lhax
  2010-03-05 16:50 ` Alexander Graf
@ 2010-03-05 16:50   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We get MMIOs with the weirdest instructions. But every time we do,
we need to improve our emulator to implement them.

So let's do that - this time it's lbzux's and lhax's turn.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2410ec2..dbb5d68 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -38,10 +38,12 @@
 #define OP_31_XOP_LBZX      87
 #define OP_31_XOP_STWX      151
 #define OP_31_XOP_STBX      215
+#define OP_31_XOP_LBZUX     119
 #define OP_31_XOP_STBUX     247
 #define OP_31_XOP_LHZX      279
 #define OP_31_XOP_LHZUX     311
 #define OP_31_XOP_MFSPR     339
+#define OP_31_XOP_LHAX      343
 #define OP_31_XOP_STHX      407
 #define OP_31_XOP_STHUX     439
 #define OP_31_XOP_MTSPR     467
@@ -173,6 +175,19 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
 			break;
 
+		case OP_31_XOP_LBZUX:
+			rt = get_rt(inst);
+			ra = get_ra(inst);
+			rb = get_rb(inst);
+
+			ea = kvmppc_get_gpr(vcpu, rb);
+			if (ra)
+				ea += kvmppc_get_gpr(vcpu, ra);
+
+			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+			kvmppc_set_gpr(vcpu, ra, ea);
+			break;
+
 		case OP_31_XOP_STWX:
 			rs = get_rs(inst);
 			emulated = kvmppc_handle_store(run, vcpu,
@@ -202,6 +217,11 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			kvmppc_set_gpr(vcpu, rs, ea);
 			break;
 
+		case OP_31_XOP_LHAX:
+			rt = get_rt(inst);
+			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+			break;
+
 		case OP_31_XOP_LHZX:
 			rt = get_rt(inst);
 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 12/15] KVM: PPC: Implement emulation for lbzux and lhax
@ 2010-03-05 16:50   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We get MMIOs with the weirdest instructions. But every time we do,
we need to improve our emulator to implement them.

So let's do that - this time it's lbzux's and lhax's turn.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2410ec2..dbb5d68 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -38,10 +38,12 @@
 #define OP_31_XOP_LBZX      87
 #define OP_31_XOP_STWX      151
 #define OP_31_XOP_STBX      215
+#define OP_31_XOP_LBZUX     119
 #define OP_31_XOP_STBUX     247
 #define OP_31_XOP_LHZX      279
 #define OP_31_XOP_LHZUX     311
 #define OP_31_XOP_MFSPR     339
+#define OP_31_XOP_LHAX      343
 #define OP_31_XOP_STHX      407
 #define OP_31_XOP_STHUX     439
 #define OP_31_XOP_MTSPR     467
@@ -173,6 +175,19 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
 			break;
 
+		case OP_31_XOP_LBZUX:
+			rt = get_rt(inst);
+			ra = get_ra(inst);
+			rb = get_rb(inst);
+
+			ea = kvmppc_get_gpr(vcpu, rb);
+			if (ra)
+				ea += kvmppc_get_gpr(vcpu, ra);
+
+			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+			kvmppc_set_gpr(vcpu, ra, ea);
+			break;
+
 		case OP_31_XOP_STWX:
 			rs = get_rs(inst);
 			emulated = kvmppc_handle_store(run, vcpu,
@@ -202,6 +217,11 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			kvmppc_set_gpr(vcpu, rs, ea);
 			break;
 
+		case OP_31_XOP_LHAX:
+			rt = get_rt(inst);
+			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+			break;
+
 		case OP_31_XOP_LHZX:
 			rt = get_rt(inst);
 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 13/15] KVM: PPC: Implement alignment interrupt
       [not found] ` <1267807842-3751-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-05 16:50     ` Alexander Graf
  2010-03-05 16:50     ` Alexander Graf
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac OS X has some applications - namely the Finder - that require alignment
interrupts to work properly. So we need to implement them.

But the 970 and 750 specs also differ here. While the 750 requires the
DSISR fields to reflect some instruction bits, the 970 declares this as an
optional feature. So we need to reconstruct DSISR manually.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s.c             |    9 +++++++
 arch/powerpc/kvm/book3s_64_emulate.c  |   40 +++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 1b76fa1..6476e70 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -132,6 +132,7 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
 extern void kvmppc_load_up_fpu(void);
 extern void kvmppc_load_up_altivec(void);
 extern void kvmppc_load_up_vsx(void);
+extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 46c8954..b2405ab 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -907,6 +907,15 @@ program_interrupt:
 		}
 		break;
 	}
+	case BOOK3S_INTERRUPT_ALIGNMENT:
+		vcpu->arch.dear = vcpu->arch.fault_dear;
+		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) {
+			to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu,
+				vcpu->arch.last_inst);
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+		}
+		r = RESUME_GUEST;
+		break;
 	case BOOK3S_INTERRUPT_MACHINE_CHECK:
 	case BOOK3S_INTERRUPT_TRACE:
 		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 39d5003..c401dd4 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -44,6 +44,8 @@
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
+#define OP_LFS			48
+
 #define SPRN_GQR0		912
 #define SPRN_GQR1		913
 #define SPRN_GQR2		914
@@ -474,3 +476,41 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	return emulated;
 }
 
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 13/15] KVM: PPC: Implement alignment interrupt
@ 2010-03-05 16:50     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac OS X has some applications - namely the Finder - that require alignment
interrupts to work properly. So we need to implement them.

But the 970 and 750 specs also differ here. While the 750 requires the
DSISR fields to reflect some instruction bits, the 970 declares this as an
optional feature. So we need to reconstruct DSISR manually.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s.c             |    9 +++++++
 arch/powerpc/kvm/book3s_64_emulate.c  |   40 +++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 1b76fa1..6476e70 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -132,6 +132,7 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
 extern void kvmppc_load_up_fpu(void);
 extern void kvmppc_load_up_altivec(void);
 extern void kvmppc_load_up_vsx(void);
+extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 46c8954..b2405ab 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -907,6 +907,15 @@ program_interrupt:
 		}
 		break;
 	}
+	case BOOK3S_INTERRUPT_ALIGNMENT:
+		vcpu->arch.dear = vcpu->arch.fault_dear;
+		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) {
+			to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu,
+				vcpu->arch.last_inst);
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+		}
+		r = RESUME_GUEST;
+		break;
 	case BOOK3S_INTERRUPT_MACHINE_CHECK:
 	case BOOK3S_INTERRUPT_TRACE:
 		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 39d5003..c401dd4 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -44,6 +44,8 @@
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
+#define OP_LFS			48
+
 #define SPRN_GQR0		912
 #define SPRN_GQR1		913
 #define SPRN_GQR2		914
@@ -474,3 +476,41 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	return emulated;
 }
 
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-05 16:50 ` Alexander Graf
@ 2010-03-05 16:50   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Sometimes we don't want all capabilities to be available to all
our vcpus. One example of that is the OSI interface, implemented
in the next patch.

In order to have a generic mechanism for enabling capabilities
individually, this patch introduces a new ioctl that can be used
for this purpose. That way features we don't want in all guests or
userspace configurations can simply be left disabled and we're good.
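
For illustration, userspace usage would look roughly like this (a
minimal sketch assuming only what this patch defines, plus a vcpu fd
from KVM_CREATE_VCPU; the OSI cap from the next patch serves as the
example):

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_PPC_OSI,		/* example capability */
	};

	if (ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP");	/* -EINVAL if unsupported */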

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/powerpc.c |   23 +++++++++++++++++++++++
 include/linux/kvm.h        |    8 ++++++++
 2 files changed, 31 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index a28a512..f52752c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -462,6 +462,20 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r;
+
+	switch (cap->cap) {
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
                                     struct kvm_mp_state *mp_state)
 {
@@ -490,6 +504,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
 		break;
 	}
+	case KVM_ENABLE_CAP:
+	{
+		struct kvm_enable_cap cap;
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			goto out;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index ce28767..c7ed3cb 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -400,6 +400,12 @@ struct kvm_ioeventfd {
 	__u8  pad[36];
 };
 
+/* for KVM_ENABLE_CAP */
+struct kvm_enable_cap {
+	/* in */
+	__u32 cap;
+};
+
 #define KVMIO 0xAE
 
 /*
@@ -696,6 +702,8 @@ struct kvm_clock_data {
 /* Available with KVM_CAP_DEBUGREGS */
 #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
 #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
+/* No need for CAP, because then it just always fails */
+#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-05 16:50   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Sometimes we don't want all capabilities to be available to all
our vcpus. One example of that is the OSI interface, implemented
in the next patch.

In order to have a generic mechanism for enabling capabilities
individually, this patch introduces a new ioctl that can be used
for this purpose. That way features we don't want in all guests or
userspace configurations can simply be left disabled and we're good.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/powerpc.c |   23 +++++++++++++++++++++++
 include/linux/kvm.h        |    8 ++++++++
 2 files changed, 31 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index a28a512..f52752c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -462,6 +462,20 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r;
+
+	switch (cap->cap) {
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
                                     struct kvm_mp_state *mp_state)
 {
@@ -490,6 +504,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
 		break;
 	}
+	case KVM_ENABLE_CAP:
+	{
+		struct kvm_enable_cap cap;
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			goto out;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index ce28767..c7ed3cb 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -400,6 +400,12 @@ struct kvm_ioeventfd {
 	__u8  pad[36];
 };
 
+/* for KVM_ENABLE_CAP */
+struct kvm_enable_cap {
+	/* in */
+	__u32 cap;
+};
+
 #define KVMIO 0xAE
 
 /*
@@ -696,6 +702,8 @@ struct kvm_clock_data {
 /* Available with KVM_CAP_DEBUGREGS */
 #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
 #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
+/* No need for CAP, because then it just always fails */
+#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 15/15] KVM: PPC: Add OSI hypercall interface
  2010-03-05 16:50 ` Alexander Graf
@ 2010-03-05 16:50   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

MOL uses its own hypercall interface to call back into userspace when
the guest wants to do something.

So let's implement that as an exit reason, specify it with a CAP and
only really use it when userspace wants us to.

The only user of it so far is MOL.
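
For illustration, the userspace side of the exit would look roughly
like this (a minimal sketch; vcpu_fd, the mmap'ed run structure and
handle_osi_call() are assumed/made up here):

	struct kvm_run *run;	/* mmap'ed from the vcpu fd */

	ioctl(vcpu_fd, KVM_RUN, 0);
	if (run->exit_reason == KVM_EXIT_OSI) {
		/* guest executed 'sc' with the magic values in r3/r4;
		 * all GPRs were snapshotted into run->osi.gprs */
		handle_osi_call(run->osi.gprs);
		/* modified gprs are written back into the vcpu on the
		 * next KVM_RUN, because osi_needed is set */
	}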

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
 arch/powerpc/include/asm/kvm_host.h   |    2 ++
 arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
 arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
 include/linux/kvm.h                   |    6 ++++++
 5 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6476e70..5009cf8 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -148,6 +148,11 @@ static inline ulong dsisr(void)
 
 extern void kvm_return_point(void);
 
+/* Magic register values loaded into r3 and r4 before the 'sc' assembly
+ * instruction for the OSI hypercalls */
+#define OSI_SC_MAGIC_R3			0x113724FA
+#define OSI_SC_MAGIC_R4			0x77810F9B
+
 #define INS_DCBZ			0x7c0007ec
 
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0ebda67..486f1ca 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ struct kvm_vcpu_arch {
 	u8 mmio_sign_extend;
 	u8 dcr_needed;
 	u8 dcr_is_write;
+	u8 osi_needed;
+	u8 osi_enabled;
 
 	u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b2405ab..7c079c0 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -873,12 +873,24 @@ program_interrupt:
 		break;
 	}
 	case BOOK3S_INTERRUPT_SYSCALL:
-#ifdef EXIT_DEBUG
-		printk(KERN_INFO "Syscall Nr %d\n", (int)kvmppc_get_gpr(vcpu, 0));
-#endif
-		vcpu->stat.syscall_exits++;
-		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-		r = RESUME_GUEST;
+		// XXX make user settable
+		if (vcpu->arch.osi_enabled &&
+		    (((u32)kvmppc_get_gpr(vcpu, 3)) == OSI_SC_MAGIC_R3) &&
+		    (((u32)kvmppc_get_gpr(vcpu, 4)) == OSI_SC_MAGIC_R4)) {
+			u64 *gprs = run->osi.gprs;
+			int i;
+
+			run->exit_reason = KVM_EXIT_OSI;
+			for (i = 0; i < 32; i++)
+				gprs[i] = kvmppc_get_gpr(vcpu, i);
+			vcpu->arch.osi_needed = 1;
+			r = RESUME_HOST_NV;
+
+		} else {
+			vcpu->stat.syscall_exits++;
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+			r = RESUME_GUEST;
+		}
 		break;
 	case BOOK3S_INTERRUPT_FP_UNAVAIL:
 	case BOOK3S_INTERRUPT_ALTIVEC:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index f52752c..9a57c02 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -148,6 +148,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	switch (ext) {
 	case KVM_CAP_PPC_SEGSTATE:
 	case KVM_CAP_PPC_PAIRED_SINGLES:
+	case KVM_CAP_PPC_OSI:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -429,6 +430,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (!vcpu->arch.dcr_is_write)
 			kvmppc_complete_dcr_load(vcpu, run);
 		vcpu->arch.dcr_needed = 0;
+	} else if (vcpu->arch.osi_needed) {
+		u64 *gprs = run->osi.gprs;
+		int i;
+
+		for (i = 0; i < 32; i++)
+			kvmppc_set_gpr(vcpu, i, gprs[i]);
+		vcpu->arch.osi_needed = 0;
 	}
 
 	kvmppc_core_deliver_interrupts(vcpu);
@@ -468,6 +476,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	int r;
 
 	switch (cap->cap) {
+	case KVM_CAP_PPC_OSI:
+		r = 0;
+		vcpu->arch.osi_enabled = true;
+		break;
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index c7ed3cb..44291d7 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -160,6 +160,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_DCR              15
 #define KVM_EXIT_NMI              16
 #define KVM_EXIT_INTERNAL_ERROR   17
+#define KVM_EXIT_OSI              18
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
@@ -259,6 +260,10 @@ struct kvm_run {
 			__u32 ndata;
 			__u64 data[16];
 		} internal;
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
@@ -513,6 +518,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_DEBUGREGS 50
 #endif
 #define KVM_CAP_X86_ROBUST_SINGLESTEP 51
+#define KVM_CAP_PPC_OSI 52
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 15/15] KVM: PPC: Add OSI hypercall interface
@ 2010-03-05 16:50   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-05 16:50 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

MOL uses its own hypercall interface to call back into userspace when
the guest wants to do something.

So let's implement that as an exit reason, specify it with a CAP and
only really use it when userspace wants us to.

The only user of it so far is MOL.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
 arch/powerpc/include/asm/kvm_host.h   |    2 ++
 arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
 arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
 include/linux/kvm.h                   |    6 ++++++
 5 files changed, 43 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6476e70..5009cf8 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -148,6 +148,11 @@ static inline ulong dsisr(void)
 
 extern void kvm_return_point(void);
 
+/* Magic register values loaded into r3 and r4 before the 'sc' assembly
+ * instruction for the OSI hypercalls */
+#define OSI_SC_MAGIC_R3			0x113724FA
+#define OSI_SC_MAGIC_R4			0x77810F9B
+
 #define INS_DCBZ			0x7c0007ec
 
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0ebda67..486f1ca 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ struct kvm_vcpu_arch {
 	u8 mmio_sign_extend;
 	u8 dcr_needed;
 	u8 dcr_is_write;
+	u8 osi_needed;
+	u8 osi_enabled;
 
 	u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b2405ab..7c079c0 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -873,12 +873,24 @@ program_interrupt:
 		break;
 	}
 	case BOOK3S_INTERRUPT_SYSCALL:
-#ifdef EXIT_DEBUG
-		printk(KERN_INFO "Syscall Nr %d\n", (int)kvmppc_get_gpr(vcpu, 0));
-#endif
-		vcpu->stat.syscall_exits++;
-		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-		r = RESUME_GUEST;
+		// XXX make user settable
+		if (vcpu->arch.osi_enabled &&
+		    (((u32)kvmppc_get_gpr(vcpu, 3)) == OSI_SC_MAGIC_R3) &&
+		    (((u32)kvmppc_get_gpr(vcpu, 4)) == OSI_SC_MAGIC_R4)) {
+			u64 *gprs = run->osi.gprs;
+			int i;
+
+			run->exit_reason = KVM_EXIT_OSI;
+			for (i = 0; i < 32; i++)
+				gprs[i] = kvmppc_get_gpr(vcpu, i);
+			vcpu->arch.osi_needed = 1;
+			r = RESUME_HOST_NV;
+
+		} else {
+			vcpu->stat.syscall_exits++;
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+			r = RESUME_GUEST;
+		}
 		break;
 	case BOOK3S_INTERRUPT_FP_UNAVAIL:
 	case BOOK3S_INTERRUPT_ALTIVEC:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index f52752c..9a57c02 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -148,6 +148,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	switch (ext) {
 	case KVM_CAP_PPC_SEGSTATE:
 	case KVM_CAP_PPC_PAIRED_SINGLES:
+	case KVM_CAP_PPC_OSI:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -429,6 +430,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (!vcpu->arch.dcr_is_write)
 			kvmppc_complete_dcr_load(vcpu, run);
 		vcpu->arch.dcr_needed = 0;
+	} else if (vcpu->arch.osi_needed) {
+		u64 *gprs = run->osi.gprs;
+		int i;
+
+		for (i = 0; i < 32; i++)
+			kvmppc_set_gpr(vcpu, i, gprs[i]);
+		vcpu->arch.osi_needed = 0;
 	}
 
 	kvmppc_core_deliver_interrupts(vcpu);
@@ -468,6 +476,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	int r;
 
 	switch (cap->cap) {
+	case KVM_CAP_PPC_OSI:
+		r = 0;
+		vcpu->arch.osi_enabled = true;
+		break;
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index c7ed3cb..44291d7 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -160,6 +160,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_DCR              15
 #define KVM_EXIT_NMI              16
 #define KVM_EXIT_INTERNAL_ERROR   17
+#define KVM_EXIT_OSI              18
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
@@ -259,6 +260,10 @@ struct kvm_run {
 			__u32 ndata;
 			__u64 data[16];
 		} internal;
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
@@ -513,6 +518,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_DEBUGREGS 50
 #endif
 #define KVM_CAP_X86_ROBUST_SINGLESTEP 51
+#define KVM_CAP_PPC_OSI 52
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]   ` <1267807842-3751-2-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 13:40       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:40 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
> We have wrappers to do for example gpr read/write accesses with,
> because the contents of registers could be either in the PACA
> or in the VCPU struct.
>
> There's nothing that says we have to have the guest vcpu loaded
> when using these wrappers though, so let's introduce a flag that
> tells us whether we're inside a vcpu_load context.
>
>    

On x86 we always access registers within vcpu_load() context.  That 
simplifies things.  Does this not apply here?

Even so, sometimes guest registers are present on the cpu, and sometimes 
in shadow variables (for example, msrs might be loaded or not).  The 
approach here is to always unload and access the variable data.  See for 
example vmx_set_msr() calling vmx_load_host_state() before accessing msrs.

Seems like this could reduce the if () tree?

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 13:40       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:40 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
> We have wrappers to do for example gpr read/write accesses with,
> because the contents of registers could be either in the PACA
> or in the VCPU struct.
>
> There's nothing that says we have to have the guest vcpu loaded
> when using these wrappers though, so let's introduce a flag that
> tells us whether we're inside a vcpu_load context.
>
>    

On x86 we always access registers within vcpu_load() context.  That 
simplifies things.  Does this not apply here?

Even so, sometimes guest registers are present on the cpu, and sometimes 
in shadow variables (for example, msrs might be loaded or not).  The 
approach here is to always unload and access the variable data.  See for 
example vmx_set_msr() calling vmx_load_host_state() before accessing msrs.

Seems like this could reduce the if () tree?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]   ` <1267807842-3751-4-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 13:44       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:44 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
> Userspace can tell us that it wants to trigger an interrupt. But
> so far it can't tell us that it wants to stop triggering one.
>
> So let's interpret the parameter to the ioctl that we have anyways
> to tell us if we want to raise or lower the interrupt line.
>
> Signed-off-by: Alexander Graf<agraf-l3A5Bk7waGM@public.gmane.org>
> ---
>   arch/powerpc/include/asm/kvm.h     |    3 +++
>   arch/powerpc/include/asm/kvm_ppc.h |    2 ++
>   arch/powerpc/kvm/book3s.c          |    6 ++++++
>   arch/powerpc/kvm/powerpc.c         |    5 ++++-
>   4 files changed, 15 insertions(+), 1 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
> index 19bae31..6c5547d 100644
> --- a/arch/powerpc/include/asm/kvm.h
> +++ b/arch/powerpc/include/asm/kvm.h
> @@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
>   #define KVM_REG_QPR		0x0040
>   #define KVM_REG_FQPR		0x0060
>
> +#define KVM_INTERRUPT_SET	-1U
> +#define KVM_INTERRUPT_UNSET	-2U
>    

Funny choice of numbers.

How does userspace know they exist?

Can you use KVM_IRQ_LINE?



-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 13:44       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:44 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
> Userspace can tell us that it wants to trigger an interrupt. But
> so far it can't tell us that it wants to stop triggering one.
>
> So let's interpret the parameter to the ioctl that we have anyways
> to tell us if we want to raise or lower the interrupt line.
>
> Signed-off-by: Alexander Graf<agraf@suse.de>
> ---
>   arch/powerpc/include/asm/kvm.h     |    3 +++
>   arch/powerpc/include/asm/kvm_ppc.h |    2 ++
>   arch/powerpc/kvm/book3s.c          |    6 ++++++
>   arch/powerpc/kvm/powerpc.c         |    5 ++++-
>   4 files changed, 15 insertions(+), 1 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
> index 19bae31..6c5547d 100644
> --- a/arch/powerpc/include/asm/kvm.h
> +++ b/arch/powerpc/include/asm/kvm.h
> @@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
>   #define KVM_REG_QPR		0x0040
>   #define KVM_REG_FQPR		0x0060
>
> +#define KVM_INTERRUPT_SET	-1U
> +#define KVM_INTERRUPT_UNSET	-2U
>    

Funny choice of numbers.

How does userspace know they exist?

Can you use KVM_IRQ_LINE?



-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]       ` <4B94FE41.1040904-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 13:44           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:44 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>> We have wrappers to do for example gpr read/write accesses with,
>> because the contents of registers could be either in the PACA
>> or in the VCPU struct.
>>
>> There's nothing that says we have to have the guest vcpu loaded
>> when using these wrappers though, so let's introduce a flag that
>> tells us whether we're inside a vcpu_load context.
>>
>>    
>
> On x86 we always access registers within vcpu_load() context.  That
> simplifies things.  Does this not apply here?
>
> Even so, sometimes guest registers are present on the cpu, and
> sometimes in shadow variables (for example, msrs might be loaded or
> not).  The approach here is to always unload and access the variable
> data.  See for example vmx_set_msr() calling vmx_load_host_state()
> before accessing msrs.
>
> Seems like this could reduce the if () tree?

Well - it would probably render this particular patch void. In fact, I
think it is already useless thanks to the other "always do vcpu_load" patch.

As far as the already existing if goes, we can't really get rid of that.
I want to be fast in the instruction emulation. Copying around the
registers won't help there.



Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 13:44           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:44 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>> We have wrappers to do for example gpr read/write accesses with,
>> because the contents of registers could be either in the PACA
>> or in the VCPU struct.
>>
>> There's nothing that says we have to have the guest vcpu loaded
>> when using these wrappers though, so let's introduce a flag that
>> tells us whether we're inside a vcpu_load context.
>>
>>    
>
> On x86 we always access registers within vcpu_load() context.  That
> simplifies things.  Does this not apply here?
>
> Even so, sometimes guest registers are present on the cpu, and
> sometimes in shadow variables (for example, msrs might be loaded or
> not).  The approach here is to always unload and access the variable
> data.  See for example vmx_set_msr() calling vmx_load_host_state()
> before accessing msrs.
>
> Seems like this could reduce the if () tree?

Well - it would probably render this particular patch void. In fact, I
think it is already useless thanks to the other "always do vcpu_load" patch.

As far as the already existing if goes, we can't really get rid of that.
I want to be fast in the instruction emulation. Copying around the
registers won't help there.



Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]       ` <4B94FF27.5010800-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 13:48           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:48 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>> Userspace can tell us that it wants to trigger an interrupt. But
>> so far it can't tell us that it wants to stop triggering one.
>>
>> So let's interpret the parameter to the ioctl that we have anyways
>> to tell us if we want to raise or lower the interrupt line.
>>
>> Signed-off-by: Alexander Graf<agraf-l3A5Bk7waGM@public.gmane.org>
>> ---
>>   arch/powerpc/include/asm/kvm.h     |    3 +++
>>   arch/powerpc/include/asm/kvm_ppc.h |    2 ++
>>   arch/powerpc/kvm/book3s.c          |    6 ++++++
>>   arch/powerpc/kvm/powerpc.c         |    5 ++++-
>>   4 files changed, 15 insertions(+), 1 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm.h
>> b/arch/powerpc/include/asm/kvm.h
>> index 19bae31..6c5547d 100644
>> --- a/arch/powerpc/include/asm/kvm.h
>> +++ b/arch/powerpc/include/asm/kvm.h
>> @@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
>>   #define KVM_REG_QPR        0x0040
>>   #define KVM_REG_FQPR        0x0060
>>
>> +#define KVM_INTERRUPT_SET    -1U
>> +#define KVM_INTERRUPT_UNSET    -2U
>>    
>
> Funny choice of numbers.

Qemu currently explicitly sets -1U and is the only user.

> How does userspace know they exist?

#ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
won't work without the hypervisor call anyways.

> Can you use KVM_IRQ_LINE?

I'd rather like to keep that around for when we get an in-kernel-mpic,
which is what we probably ultimately want for qemu.



Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 13:48           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:48 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>> Userspace can tell us that it wants to trigger an interrupt. But
>> so far it can't tell us that it wants to stop triggering one.
>>
>> So let's interpret the parameter to the ioctl that we have anyways
>> to tell us if we want to raise or lower the interrupt line.
>>
>> Signed-off-by: Alexander Graf<agraf@suse.de>
>> ---
>>   arch/powerpc/include/asm/kvm.h     |    3 +++
>>   arch/powerpc/include/asm/kvm_ppc.h |    2 ++
>>   arch/powerpc/kvm/book3s.c          |    6 ++++++
>>   arch/powerpc/kvm/powerpc.c         |    5 ++++-
>>   4 files changed, 15 insertions(+), 1 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm.h
>> b/arch/powerpc/include/asm/kvm.h
>> index 19bae31..6c5547d 100644
>> --- a/arch/powerpc/include/asm/kvm.h
>> +++ b/arch/powerpc/include/asm/kvm.h
>> @@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
>>   #define KVM_REG_QPR        0x0040
>>   #define KVM_REG_FQPR        0x0060
>>
>> +#define KVM_INTERRUPT_SET    -1U
>> +#define KVM_INTERRUPT_UNSET    -2U
>>    
>
> Funny choice of numbers.

Qemu currently explicitly sets -1U and is the only user.

> How does userspace know they exist?

#ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
won't work without the hypervisor call anyways.

> Can you use KVM_IRQ_LINE?

I'd rather like to keep that around for when we get an in-kernel-mpic,
which is what we probably ultimately want for qemu.



Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]   ` <1267807842-3751-15-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 13:49       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:49 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
>   	}
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index ce28767..c7ed3cb 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>   	__u8  pad[36];
>   };
>
> +/* for KVM_ENABLE_CAP */
> +struct kvm_enable_cap {
> +	/* in */
> +	__u32 cap;
>    

Reserve space here.  Add a flags field and check it for zeros.

Patch Documentation/kvm/api.txt please.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 13:49       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:49 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/05/2010 06:50 PM, Alexander Graf wrote:
>   	}
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index ce28767..c7ed3cb 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>   	__u8  pad[36];
>   };
>
> +/* for KVM_ENABLE_CAP */
> +struct kvm_enable_cap {
> +	/* in */
> +	__u32 cap;
>    

Reserve space here.  Add a flags field and check it for zeros.

Patch Documentation/kvm/api.txt please.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]           ` <4B94FF56.9060200-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 13:50               ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:50 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:44 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>      
>>> We have wrappers to do for example gpr read/write accesses with,
>>> because the contents of registers could be either in the PACA
>>> or in the VCPU struct.
>>>
>>> There's nothing that says we have to have the guest vcpu loaded
>>> when using these wrappers though, so let's introduce a flag that
>>> tells us whether we're inside a vcpu_load context.
>>>
>>>
>>>        
>> On x86 we always access registers within vcpu_load() context.  That
>> simplifies things.  Does this not apply here?
>>
>> Even so, sometimes guest registers are present on the cpu, and
>> sometimes in shadow variables (for example, msrs might be loaded or
>> not).  The approach here is to always unload and access the variable
>> data.  See for example vmx_set_msr() calling vmx_load_host_state()
>> before accessing msrs.
>>
>> Seems like this could reduce the if () tree?
>>      
> Well - it would probably render this particular patch void. In fact, I
> think it is already useless thanks to the other "always do vcpu_load" patch.
>
> As far as the already existing if goes, we can't really get rid of that.
> I want to be fast in the instruction emulation. Copying around the
> registers won't help there.
>    

So do it the other way around.  Always load the registers (of course, do 
nothing if already loaded) and then access them in just one way.  I 
assume during emulation the registers will always be loaded?

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 13:50               ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:50 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:44 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>      
>>> We have wrappers to do for example gpr read/write accesses with,
>>> because the contents of registers could be either in the PACA
>>> or in the VCPU struct.
>>>
>>> There's nothing that says we have to have the guest vcpu loaded
>>> when using these wrappers though, so let's introduce a flag that
>>> tells us whether we're inside a vcpu_load context.
>>>
>>>
>>>        
>> On x86 we always access registers within vcpu_load() context.  That
>> simplifies things.  Does this not apply here?
>>
>> Even so, sometimes guest registers are present on the cpu, and
>> sometimes in shadow variables (for example, msrs might be loaded or
>> not).  The approach here is to always unload and access the variable
>> data.  See for example vmx_set_msr() calling vmx_load_host_state()
>> before accessing msrs.
>>
>> Seems like this could reduce the if () tree?
>>      
> Well - it would probably render this particular patch void. In fact, I
> think it is already useless thanks to the other "always do vcpu_load" patch.
>
> As far as the already existing if goes, we can't really get rid of that.
> I want to be fast in the instruction emulation. Copying around the
> registers won't help there.
>    

So do it the other way around.  Always load the registers (of course, do 
nothing if already loaded) and then access them in just one way.  I 
assume during emulation the registers will always be loaded?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]       ` <4B950057.1090204-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 13:51           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:51 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>       }
>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>> index ce28767..c7ed3cb 100644
>> --- a/include/linux/kvm.h
>> +++ b/include/linux/kvm.h
>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>       __u8  pad[36];
>>   };
>>
>> +/* for KVM_ENABLE_CAP */
>> +struct kvm_enable_cap {
>> +    /* in */
>> +    __u32 cap;
>>    
>
> Reserve space here.  Add a flags field and check it for zeros.

Flags? How about something like

u64 args[4]

That way the capability enabling code could decide what to do with the
arguments. We don't always only need flags, I suppose?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 13:51           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:51 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>       }
>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>> index ce28767..c7ed3cb 100644
>> --- a/include/linux/kvm.h
>> +++ b/include/linux/kvm.h
>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>       __u8  pad[36];
>>   };
>>
>> +/* for KVM_ENABLE_CAP */
>> +struct kvm_enable_cap {
>> +    /* in */
>> +    __u32 cap;
>>    
>
> Reserve space here.  Add a flags field and check it for zeros.

Flags? How about something like

u64 args[4]

That way the capability enabling code could decide what to do with the
arguments. We don't always only need flags, I suppose?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
  2010-03-08 13:48           ` Alexander Graf
@ 2010-03-08 13:52             ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:52 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:48 PM, Alexander Graf wrote:
>
>
>> How does userspace know they exist?
>>      
> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
> won't work without the hypervisor call anyways.
>    

We generally compile on one machine, and run on another.

>> Can you use KVM_IRQ_LINE?
>>      
> I'd rather like to keep that around for when we get an in-kernel-mpic,
> which is what we probably ultimately want for qemu.
>    

Yes.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 13:52             ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:52 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:48 PM, Alexander Graf wrote:
>
>
>> How does userspace know they exist?
>>      
> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
> won't work without the hypervisor call anyways.
>    

We generally compile on one machine, and run on another.

>> Can you use KVM_IRQ_LINE?
>>      
> I'd rather like to keep that around for when we get an in-kernel-mpic,
> which is what we probably ultimately want for qemu.
>    

Yes.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]           ` <4B9500D1.2060008-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 13:52               ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:52 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:51 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>      
>>>        }
>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>> index ce28767..c7ed3cb 100644
>>> --- a/include/linux/kvm.h
>>> +++ b/include/linux/kvm.h
>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>        __u8  pad[36];
>>>    };
>>>
>>> +/* for KVM_ENABLE_CAP */
>>> +struct kvm_enable_cap {
>>> +    /* in */
>>> +    __u32 cap;
>>>
>>>        
>> Reserve space here.  Add a flags field and check it for zeros.
>>      
> Flags? How about something like
>
> u64 args[4]
>
> That way the capability enabling code could decide what to do with the
> arguments. We don't always only need flags I suppose?.
>    

If you interpret these as bit flags anyway, that would be redundant.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 13:52               ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:52 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:51 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>      
>>>        }
>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>> index ce28767..c7ed3cb 100644
>>> --- a/include/linux/kvm.h
>>> +++ b/include/linux/kvm.h
>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>        __u8  pad[36];
>>>    };
>>>
>>> +/* for KVM_ENABLE_CAP */
>>> +struct kvm_enable_cap {
>>> +    /* in */
>>> +    __u32 cap;
>>>
>>>        
>> Reserve space here.  Add a flags field and check it for zeros.
>>      
> Flags? How about something like
>
> u64 args[4]
>
> That way the capability enabling code could decide what to do with the
> arguments. We don't always only need flags I suppose?.
>    

If you interpret these as bit flags anyway, that would be redundant.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
  2010-03-08 13:50               ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
@ 2010-03-08 13:53                 ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:53 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:44 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>     
>>>> We have wrappers to do for example gpr read/write accesses with,
>>>> because the contents of registers could be either in the PACA
>>>> or in the VCPU struct.
>>>>
>>>> There's nothing that says we have to have the guest vcpu loaded
>>>> when using these wrappers though, so let's introduce a flag that
>>>> tells us whether we're inside a vcpu_load context.
>>>>
>>>>
>>>>        
>>> On x86 we always access registers within vcpu_load() context.  That
>>> simplifies things.  Does this not apply here?
>>>
>>> Even so, sometimes guest registers are present on the cpu, and
>>> sometimes in shadow variables (for example, msrs might be loaded or
>>> not).  The approach here is to always unload and access the variable
>>> data.  See for example vmx_set_msr() calling vmx_load_host_state()
>>> before accessing msrs.
>>>
>>> Seems like this could reduce the if () tree?
>>>      
>> Well - it would probably render this particular patch void. In fact, I
>> think it is already useless thanks to the other "always do vcpu_load"
>> patch.
>>
>> As far as the already existing if goes, we can't really get rid of that.
>> I want to be fast in the instruction emulation. Copying around the
>> registers won't help there.
>>    
>
> So do it the other way around.  Always load the registers (of course,
> do nothing if already loaded) and then access them in just one way.  I
> assume during emulation the registers will always be loaded?

During emulation we're always in VCPU_RUN, so the vcpu is loaded.

Do you mean something like:

read_register(num) {
  vcpu_load();
  read register from PACA(num);
  vcpu_put();
}

? Does vcpu_load incur overhead when it doesn't need to do anything?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 13:53                 ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:53 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:44 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>     
>>>> We have wrappers to do for example gpr read/write accesses with,
>>>> because the contents of registers could be either in the PACA
>>>> or in the VCPU struct.
>>>>
>>>> There's nothing that says we have to have the guest vcpu loaded
>>>> when using these wrappers though, so let's introduce a flag that
>>>> tells us whether we're inside a vcpu_load context.
>>>>
>>>>
>>>>        
>>> On x86 we always access registers within vcpu_load() context.  That
>>> simplifies things.  Does this not apply here?
>>>
>>> Even so, sometimes guest registers are present on the cpu, and
>>> sometimes in shadow variables (for example, msrs might be loaded or
>>> not).  The approach here is to always unload and access the variable
>>> data.  See for example vmx_set_msr() calling vmx_load_host_state()
>>> before accessing msrs.
>>>
>>> Seems like this could reduce the if () tree?
>>>      
>> Well - it would probably render this particular patch void. In fact, I
>> think it is already useless thanks to the other "always do vcpu_load"
>> patch.
>>
>> As far as the already existing if goes, we can't really get rid of that.
>> I want to be fast in the instruction emulation. Copying around the
>> registers won't help there.
>>    
>
> So do it the other way around.  Always load the registers (of course,
> do nothing if already loaded) and then access them in just one way.  I
> assume during emulation the registers will always be loaded?

During emulation we're always in VCPU_RUN, so the vcpu is loaded.

Do you mean something like:

read_register(num) {
  vcpu_load();
  read register from PACA(num);
  vcpu_put();
}

? Does vcpu_load incur overhead when it doesn't need to do anything?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
  2010-03-08 13:52             ` Avi Kivity
@ 2010-03-08 13:55               ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:55 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>
>>
>>> How does userspace know they exist?
>>>      
>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
>> won't work without the hypervisor call anyways.
>>    
>
> We generally compile on one machine, and run on another.

So? Then IRQ unsetting doesn't work. Without this series you won't get
much further than booting the kernel anyways because XER is broken, TLB
flushes are broken and FPU loading is broken. So not being able to unset
an IRQ line is the least of your problems :).


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 13:55               ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:55 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>
>>
>>> How does userspace know they exist?
>>>      
>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
>> won't work without the hypervisor call anyways.
>>    
>
> We generally compile on one machine, and run on another.

So? Then IRQ unsetting doesn't work. Without this series you won't get
much further than booting the kernel anyways because XER is broken, TLB
flushes are broken and FPU loading is broken. So not being able to unset
an IRQ line is the least of your problems :).


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]               ` <4B95012B.3030505-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 13:56                   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>     
>>>>        }
>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>> index ce28767..c7ed3cb 100644
>>>> --- a/include/linux/kvm.h
>>>> +++ b/include/linux/kvm.h
>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>        __u8  pad[36];
>>>>    };
>>>>
>>>> +/* for KVM_ENABLE_CAP */
>>>> +struct kvm_enable_cap {
>>>> +    /* in */
>>>> +    __u32 cap;
>>>>
>>>>        
>>> Reserve space here.  Add a flags field and check it for zeros.
>>>      
>> Flags? How about something like
>>
>> u64 args[4]
>>
>> That way the capability enabling code could decide what to do with the
>> arguments. We don't always need only flags, I suppose.
>>    
>
> If you interpret these as bit flags anyway, that would be redundant.
>

I think I just don't understand what you're trying to say with "flags".
For the OSI enabling we don't need any flags. For later additions we
don't know what we'll need.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 13:56                   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 13:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>     
>>>>        }
>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>> index ce28767..c7ed3cb 100644
>>>> --- a/include/linux/kvm.h
>>>> +++ b/include/linux/kvm.h
>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>        __u8  pad[36];
>>>>    };
>>>>
>>>> +/* for KVM_ENABLE_CAP */
>>>> +struct kvm_enable_cap {
>>>> +    /* in */
>>>> +    __u32 cap;
>>>>
>>>>        
>>> Reserve space here.  Add a flags field and check it for zeros.
>>>      
>> Flags? How about something like
>>
>> u64 args[4]
>>
>> That way the capability enabling code could decide what to do with the
>> arguments. We don't always need only flags, I suppose.
>>    
>
> If you interpret these as bit flags anyway, that would be redundant.
>

I think I just don't understand what you're trying to say with "flags".
For the OSI enabling we don't need any flags. For later additions we
don't know what we'll need.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
  2010-03-08 13:55               ` Alexander Graf
@ 2010-03-08 13:58                 ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:58 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:55 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>      
>>>
>>>        
>>>> How does userspace know they exist?
>>>>
>>>>          
>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
>>> won't work without the hypervisor call anyways.
>>>
>>>        
>> We generally compile on one machine, and run on another.
>>      
> So? Then IRQ unsetting doesn't work. Without this series you won't get
> much further than booting the kernel anyways because XER is broken, TLB
> flushes are broken and FPU loading is broken. So not being able to unset
> an IRQ line is the least of your problems :).
>    

There's a difference between an error message telling you to upgrade to 
a kernel with KVM_CAP_BLAH and a failure.  It's the difference between a 
bug report and silence.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 13:58                 ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 13:58 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:55 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>      
>>>
>>>        
>>>> How does userspace know they exist?
>>>>
>>>>          
>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And that
>>> won't work without the hypervisor call anyways.
>>>
>>>        
>> We generally compile on one machine, and run on another.
>>      
> So? Then IRQ unsetting doesn't work. Without this series you won't get
> much further than booting the kernel anyways because XER is broken, TLB
> flushes are broken and FPU loading is broken. So not being able to unset
> an IRQ line is the least of your problems :).
>    

There's a difference between an error message telling you to upgrade to 
a kernel with KVM_CAP_BLAH and a failure.  It's the difference between a 
bug report and silence.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]                 ` <4B95029C.6000800-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 14:01                     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:55 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>>     
>>>>
>>>>       
>>>>> How does userspace know they exist?
>>>>>
>>>>>          
>>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And
>>>> that
>>>> won't work without the hypervisor call anyways.
>>>>
>>>>        
>>> We generally compile on one machine, and run on another.
>>>      
>> So? Then IRQ unsetting doesn't work. Without this series you won't get
>> much further than booting the kernel anyways because XER is broken, TLB
>> flushes are broken and FPU loading is broken. So not being able to unset
>> an IRQ line is the least of your problems :).
>>    
>
> There's a difference between an error message telling you to upgrade
> to a kernel with KVM_CAP_BLAH and a failure.  It's the difference
> between a bug report and silence.

I see. So we can check for KVM_CAP_PPC_OSI and know that it's in the
same patch series, also making KVM_INTERRUPT_XXX work, right? Or do you
really want to have 500 capabilities for every single patch?
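
(For what it's worth, the runtime check on the userspace side would look
roughly like this -- a sketch only, the fd variable name is made up and
KVM_CAP_PPC_OSI is the capability proposed in this series:)

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* kvm_fd is the /dev/kvm file descriptor, opened elsewhere. */
static int have_osi(int kvm_fd)
{
	int ret = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_OSI);

	if (ret <= 0)
		fprintf(stderr, "kernel lacks KVM_CAP_PPC_OSI, please upgrade\n");
	return ret > 0;
}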


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 14:01                     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:55 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>>     
>>>>
>>>>       
>>>>> How does userspace know they exist?
>>>>>
>>>>>          
>>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And
>>>> that
>>>> won't work without the hypervisor call anyways.
>>>>
>>>>        
>>> We generally compile on one machine, and run on another.
>>>      
>> So? Then IRQ unsetting doesn't work. Without this series you won't get
>> much further than booting the kernel anyways because XER is broken, TLB
>> flushes are broken and FPU loading is broken. So not being able to unset
>> an IRQ line is the least of your problems :).
>>    
>
> There's a difference between an error message telling you to upgrade
> to a kernel with KVM_CAP_BLAH and a failure.  It's the difference
> between a bug report and silence.

I see. So we can check for KVM_CAP_PPC_OSI and know that it's in the
same patch series, also making KVM_INTERRUPT_XXX work, right? Or do you
really want to have 500 capabilities for every single patch?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-08 13:56                   ` Alexander Graf
@ 2010-03-08 14:02                     ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:02 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:56 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>>      
>>> Avi Kivity wrote:
>>>
>>>        
>>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>>
>>>>          
>>>>>         }
>>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>>> index ce28767..c7ed3cb 100644
>>>>> --- a/include/linux/kvm.h
>>>>> +++ b/include/linux/kvm.h
>>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>>         __u8  pad[36];
>>>>>     };
>>>>>
>>>>> +/* for KVM_ENABLE_CAP */
>>>>> +struct kvm_enable_cap {
>>>>> +    /* in */
>>>>> +    __u32 cap;
>>>>>
>>>>>
>>>>>            
>>>> Reserve space here.  Add a flags field and check it for zeros.
>>>>
>>>>          
>>> Flags? How about something like
>>>
>>> u64 args[4]
>>>
>>> That way the capability enabling code could decide what to do with the
>>> arguments. We don't always need only flags, I suppose.
>>>
>>>        
>> If you interpret these as bit flags anyway, that would be redundant.
>>
>>      
> I think I just don't understand what you're trying to say with "flags".
> For the OSI enabling we don't need any flags. For later additions we
> don't know what we'll need.
>    

When we have reserved fields which are later used for something new, the 
kernel needs a way to know if the reserved fields are known or not by 
userspace.  One way to do this is to assume a value of zero means the 
field is unknown to userspace so ignore it.  Another is to require 
userspace to set a bit in an already-known flags field, and only act on 
the new field if its bit was set.  This has the advantage that the old 
kernel checks for unknown flags and errors out, improving forwards and 
backwards compatibility.

I thought ->cap was already a bit field, so this isn't necessary, but if 
it isn't, then a flags field is helpful.
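
(To spell the pattern out, a rough sketch -- every name below is invented
for illustration, this is not actual KVM code:)

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/kernel.h>

#define EX_FLAG_HAS_COUNT	(1u << 0)	/* a later addition, gated by this bit */

struct ex_params {
	__u32 flags;
	__u32 count;	/* only meaningful when EX_FLAG_HAS_COUNT is set */
};

static int ex_check(struct ex_params *p)
{
	/* Reject flag bits we do not know about: an old kernel, whose known
	 * mask is empty, errors out cleanly when newer userspace sets a new
	 * bit, instead of silently ignoring the field that bit gates. */
	if (p->flags & ~EX_FLAG_HAS_COUNT)
		return -EINVAL;
	if (p->flags & EX_FLAG_HAS_COUNT)
		pr_info("count=%u\n", p->count);
	return 0;
}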

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 14:02                     ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:02 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 03:56 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>>      
>>> Avi Kivity wrote:
>>>
>>>        
>>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>>
>>>>          
>>>>>         }
>>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>>> index ce28767..c7ed3cb 100644
>>>>> --- a/include/linux/kvm.h
>>>>> +++ b/include/linux/kvm.h
>>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>>         __u8  pad[36];
>>>>>     };
>>>>>
>>>>> +/* for KVM_ENABLE_CAP */
>>>>> +struct kvm_enable_cap {
>>>>> +    /* in */
>>>>> +    __u32 cap;
>>>>>
>>>>>
>>>>>            
>>>> Reserve space here.  Add a flags field and check it for zeros.
>>>>
>>>>          
>>> Flags? How about something like
>>>
>>> u64 args[4]
>>>
>>> That way the capability enabling code could decide what to do with the
>>> arguments. We don't always need only flags, I suppose.
>>>
>>>        
>> If you interpret these as bit flags anyway, that would be redundant.
>>
>>      
> I think I just don't understand what you're trying to say with "flags".
> For the OSI enabling we don't need any flags. For later additions we
> don't know what we'll need.
>    

When we have reserved fields which are later used for something new, the 
kernel needs a way to know if the reserved fields are known or not by 
userspace.  One way to do this is to assume a value of zero means the 
field is unknown to userspace so ignore it.  Another is to require 
userspace to set a bit in an already-known flags field, and only act on 
the new field if its bit was set.  This has the advantage that the old 
kernel checks for unknown flags and errors out, improving forwards and 
backwards compatibility.

I thought ->cap was already a bit field, so this isn't necessary, but if 
it isn't, then a flags field is helpful.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]                 ` <4B950174.7010709-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 14:06                     ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:06 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:53 PM, Alexander Graf wrote:
>
>> So do it the other way around.  Always load the registers (of course,
>> do nothing if already loaded) and then access them in just one way.  I
>> assume during emulation the registers will always be loaded?
>>      
> During emulation we're always in VCPU_RUN, so the vcpu is loaded.
>
> Do you mean something like:
>
> read_register(num) {
>    vcpu_load();
>    read register from PACA(num);
>    vcpu_put();
> }
>
> ? Does vcpu_load incur overhead when it doesn't need to do anything?
>    

If the vcpu is always loaded, this would be redundant, no?

The situation is that a piece of data is in one of two places.  Instead 
of checking and loading it from either, force it to the place where it 
normally is, and load it from there.

So instead of

     if (x)
         y = p1;
     else
         y = p2;

in a zillion places, just do

     force_to_p2(); // the common case anyway
     y = p2;

which results in cleaner code.  Assuming that you have a common case of 
course.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 14:06                     ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:06 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 03:53 PM, Alexander Graf wrote:
>
>> So do it the other way around.  Always load the registers (of course,
>> do nothing if already loaded) and then access them in just one way.  I
>> assume during emulation the registers will always be loaded?
>>      
> During emulation we're always in VCPU_RUN, so the vcpu is loaded.
>
> Do you mean something like:
>
> read_register(num) {
>    vcpu_load();
>    read register from PACA(num);
>    vcpu_put();
> }
>
> ? Does vcpu_load incur overhead when it doesn't need to do anything?
>    

If the vcpu is always loaded, this would be redundant, no?

The situation is that a piece of data is in one of two places.  Instead 
of checking and loading it from either, force it to the place where it 
normally is, and load it from there.

So instead of

     if (x)
         y = p1;
     else
         y = p2;

in a zillion places, just do

     force_to_p2(); // the common case anyway
     y = p2;

which results in cleaner code.  Assuming that you have a common case of 
course.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
  2010-03-08 14:01                     ` Alexander Graf
@ 2010-03-08 14:09                       ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:09 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:01 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:55 PM, Alexander Graf wrote:
>>      
>>> Avi Kivity wrote:
>>>
>>>        
>>>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>>>
>>>>          
>>>>>
>>>>>            
>>>>>> How does userspace know they exist?
>>>>>>
>>>>>>
>>>>>>              
>>>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And
>>>>> that
>>>>> won't work without the hypervisor call anyways.
>>>>>
>>>>>
>>>>>            
>>>> We generally compile on one machine, and run on another.
>>>>
>>>>          
>>> So? Then IRQ unsetting doesn't work. Without this series you won't get
>>> much further than booting the kernel anyways because XER is broken, TLB
>>> flushes are broken and FPU loading is broken. So not being able to unset
>>> an IRQ line is the least of your problems :).
>>>
>>>        
>> There's a difference between an error message telling you to upgrade
>> to a kernel with KVM_CAP_BLAH and a failure.  It's the difference
>> between a bug report and silence.
>>      
> I see. So we can check for KVM_CAP_PPC_OSI and know that it's in the
> same patch series, also making KVM_INTERRUPT_XXX work, right? Or do you
> really want to have 500 capabilities for every single patch?
>    

Having individual capabilities makes backporting a lot easier (otherwise 
you have to backport the whole thing).  If the changes are logically 
separate, I prefer 500 separate capabilities.

However, for a platform bringup, it's okay to have just one capability, 
assuming none of the changes are applicable to other platforms.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 14:09                       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:09 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:01 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 03:55 PM, Alexander Graf wrote:
>>      
>>> Avi Kivity wrote:
>>>
>>>        
>>>> On 03/08/2010 03:48 PM, Alexander Graf wrote:
>>>>
>>>>          
>>>>>
>>>>>            
>>>>>> How does userspace know they exist?
>>>>>>
>>>>>>
>>>>>>              
>>>>> #ifdef KVM_INTERRUPT_SET? MOL is the only user of this so far. And
>>>>> that
>>>>> won't work without the hypervisor call anyways.
>>>>>
>>>>>
>>>>>            
>>>> We generally compile on one machine, and run on another.
>>>>
>>>>          
>>> So? Then IRQ unsetting doesn't work. Without this series you won't get
>>> much further than booting the kernel anyways because XER is broken, TLB
>>> flushes are broken and FPU loading is broken. So not being able to unset
>>> an IRQ line is the least of your problems :).
>>>
>>>        
>> There's a difference between an error message telling you to upgrade
>> to a kernel with KVM_CAP_BLAH and a failure.  It's the difference
>> between a bug report and silence.
>>      
> I see. So we can check for KVM_CAP_PPC_OSI and know that it's in the
> same patch series, also making KVM_INTERRUPT_XXX work, right? Or do you
> really want to have 500 capabilities for every single patch?
>    

Having individual capabilities makes backporting a lot easier (otherwise 
you have to backport the whole thing).  If the changes are logically 
separate, I prefer 500 separate capabilities.

However, for a platform bringup, it's okay to have just one capability, 
assuming none of the changes are applicable to other platforms.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-08 14:02                     ` Avi Kivity
@ 2010-03-08 14:10                       ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:10 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:56 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>>>     
>>>> Avi Kivity wrote:
>>>>
>>>>       
>>>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>>>
>>>>>         
>>>>>>         }
>>>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>>>> index ce28767..c7ed3cb 100644
>>>>>> --- a/include/linux/kvm.h
>>>>>> +++ b/include/linux/kvm.h
>>>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>>>         __u8  pad[36];
>>>>>>     };
>>>>>>
>>>>>> +/* for KVM_ENABLE_CAP */
>>>>>> +struct kvm_enable_cap {
>>>>>> +    /* in */
>>>>>> +    __u32 cap;
>>>>>>
>>>>>>
>>>>>>            
>>>>> Reserve space here.  Add a flags field and check it for zeros.
>>>>>
>>>>>          
>>>> Flags? How about something like
>>>>
>>>> u64 args[4]
>>>>
>>>> That way the capability enabling code could decide what to do with the
>>>> arguments. We don't always need only flags, I suppose.
>>>>
>>>>        
>>> If you interpret these as bit flags anyway, that would be redundant.
>>>
>>>      
>> I think I just don't understand what you're trying to say with "flags".
>> For the OSI enabling we don't need any flags. For later additions we
>> don't know what we'll need.
>>    
>
> When we have reserved fields which are later used for something new,
> the kernel needs a way to know if the reserved fields are known or not
> by userspace.  One way to do this is to assume a value of zero means
> the field is unknown to userspace so ignore it.  Another is to require
> userspace to set a bit in an already-known flags field, and only act
> on the new field if its bit was set.  This has the advantage that the
> old kernel checks for unknown flags and errors out, improving forwards
> and backwards compatibility.
>
> I thought ->cap was already a bit field, so this isn't necessary, but
> if it isn't, then a flags field is helpful.

-> cap is the capability number. So you want something like:

struct kvm_enable_cap {
  __u32 cap;
  __u32 flags;
  __u64 args[4];
  __u8 pad[64];
};

And then check for flags == 0 in the ioctl handler? Flags could later on
define if the padding changed to a different position, adding new fields
in between args and pad?
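
(Roughly like the following in the vcpu ioctl path, I suppose -- just a
sketch; the function name and the osi_enabled field are assumptions, not
necessarily what the final code would use:)

#include <linux/kvm_host.h>
#include <linux/uaccess.h>

static int example_vcpu_enable_cap(struct kvm_vcpu *vcpu,
				   struct kvm_enable_cap __user *argp)
{
	struct kvm_enable_cap cap;

	if (copy_from_user(&cap, argp, sizeof(cap)))
		return -EFAULT;
	if (cap.flags)				/* no flag bits defined yet */
		return -EINVAL;

	switch (cap.cap) {
	case KVM_CAP_PPC_OSI:			/* the capability from this series */
		vcpu->arch.osi_enabled = true;	/* assumed field name */
		return 0;
	default:
		return -EINVAL;
	}
}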


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 14:10                       ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:10 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm

Avi Kivity wrote:
> On 03/08/2010 03:56 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 03/08/2010 03:51 PM, Alexander Graf wrote:
>>>     
>>>> Avi Kivity wrote:
>>>>
>>>>       
>>>>> On 03/05/2010 06:50 PM, Alexander Graf wrote:
>>>>>
>>>>>         
>>>>>>         }
>>>>>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>>>>>> index ce28767..c7ed3cb 100644
>>>>>> --- a/include/linux/kvm.h
>>>>>> +++ b/include/linux/kvm.h
>>>>>> @@ -400,6 +400,12 @@ struct kvm_ioeventfd {
>>>>>>         __u8  pad[36];
>>>>>>     };
>>>>>>
>>>>>> +/* for KVM_ENABLE_CAP */
>>>>>> +struct kvm_enable_cap {
>>>>>> +    /* in */
>>>>>> +    __u32 cap;
>>>>>>
>>>>>>
>>>>>>            
>>>>> Reserve space here.  Add a flags field and check it for zeros.
>>>>>
>>>>>          
>>>> Flags? How about something like
>>>>
>>>> u64 args[4]
>>>>
>>>> That way the capability enabling code could decide what to do with the
>>>> arguments. We don't always need only flags, I suppose.
>>>>
>>>>        
>>> If you interpret these as bit flags anyway, that would be redundant.
>>>
>>>      
>> I think I just don't understand what you're trying to say with "flags".
>> For the OSI enabling we don't need any flags. For later additions we
>> don't know what we'll need.
>>    
>
> When we have reserved fields which are later used for something new,
> the kernel needs a way to know if the reserved fields are known or not
> by userspace.  One way to do this is to assume a value of zero means
> the field is unknown to userspace so ignore it.  Another is to require
> userspace to set a bit in an already-known flags field, and only act
> on the new field if its bit was set.  This has the advantage that the
> old kernel checks for unknown flags and errors out, improving forwards
> and backwards compatibility.
>
> I thought ->cap was already a bit field, so this isn't necessary, but
> if it isn't, then a flags field is helpful.

-> cap is the capability number. So you want something like:

struct kvm_enable_cap {
  __u32 cap;
  __u32 flags;
  __u64 args[4];
  __u8 pad[64];
};

And then check for flags == 0 in the ioctl handler? Flags could later on
define if the padding changed to a different position, adding new fields
in between args and pad?


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]                     ` <4B950475.1020106-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 14:14                         ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:14 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:53 PM, Alexander Graf wrote:
>>
>>> So do it the other way around.  Always load the registers (of course,
>>> do nothing if already loaded) and then access them in just one way.  I
>>> assume during emulation the registers will always be loaded?
>>>      
>> During emulation we're always in VCPU_RUN, so the vcpu is loaded.
>>
>> Do you mean something like:
>>
>> read_register(num) {
>>    vcpu_load();
>>    read register from PACA(num);
>>    vcpu_put();
>> }
>>
>> ? Does vcpu_load incur overhead when it doesn't need to do anything?
>>    
>
> If the vcpu is always loaded, this would be redundant, no?
>
> The situation is that a piece of data is in one of two places. 
> Instead of checking and loading it from either, force it to the place
> where it normally is, and load it from there.
>
> So instead of
>
>     if (x)
>         y = p1;
>     else
>         y = p2;
>
> in a zillion places, just do
>
>     force_to_p2(); // the common case anyway
>     y = p2;
>
> which results in cleaner code.  Assuming that you have a common case
> of course.


We're looking at two different ifs here.

1) GPR Inside the PACA or not (volatile vs non-volatile)

This is constant. Volatile registers go to the PACA; non-volatiles go to
the vcpu struct.

2) GPR actually loaded in the PACA

When we're in vcpu_load context the registers are in the PACA; when not,
they're in the vcpu struct.


If you have a really easy and fast way to assure that we're always
inside a vcpu_load context, all is great. I could probably even just put
in a BUG_ON(not in vcpu_load context) and make the callers safe. But
some check needs to be done.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 14:14                         ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:14 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 03:53 PM, Alexander Graf wrote:
>>
>>> So do it the other way around.  Always load the registers (of course,
>>> do nothing if already loaded) and then access them in just one way.  I
>>> assume during emulation the registers will always be loaded?
>>>      
>> During emulation we're always in VCPU_RUN, so the vcpu is loaded.
>>
>> Do you mean something like:
>>
>> read_register(num) {
>>    vcpu_load();
>>    read register from PACA(num);
>>    vcpu_put();
>> }
>>
>> ? Does vcpu_load incur overhead when it doesn't need to do anything?
>>    
>
> If the vcpu is always loaded, this would be redundant, no?
>
> The situation is that a piece of data is in one of two places. 
> Instead of checking and loading it from either, force it to the place
> where it normally is, and load it from there.
>
> So instead of
>
>     if (x)
>         y = p1;
>     else
>         y = p2;
>
> in a zillion places, just do
>
>     force_to_p2(); // the common case anyway
>     y = p2;
>
> which results in cleaner code.  Assuming that you have a common case
> of course.


We're looking at two different ifs here.

1) GPR Inside the PACA or not (volatile vs non-volatile)

This is constant. Volatile registers go to the PACA; non-volatiles go to
the vcpu struct.

2) GPR actually loaded in the PACA

When we're in vcpu_load context the registers are in the PACA; when not,
they're in the vcpu struct.


If you have a really easy and fast way to assure that we're always
inside a vcpu_load context, all is great. I could probably even just put
in a BUG_ON(not in vcpu_load context) and make the callers safe. But
some check needs to be done.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]                       ` <4B950562.6050509-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 14:14                           ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:14 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 04:10 PM, Alexander Graf wrote:
>
>> When we have reserved fields which are later used for something new,
>> the kernel needs a way to know if the reserved fields are known or not
>> by userspace.  One way to do this is to assume a value of zero means
>> the field is unknown to userspace so ignore it.  Another is to require
>> userspace to set a bit in an already-known flags field, and only act
>> on the new field if its bit was set.  This has the advantage that the
>> old kernel checks for unknown flags and errors out, improving forwards
>> and backwards compatibility.
>>
>> I thought ->cap was already a bit field, so this isn't necessary, but
>> if it isn't, then a flags field is helpful.
>>      
> ->  cap is the capability number. So you want something like:
>
> struct kvm_enable_cap {
>    __u32 cap;
>    __u32 flags;
>    __u64 args[4];
>    __u8 pad[64];
> };
>
> And then check for flags == 0 in the ioctl handler? Flags could later on
> define if the padding changed to a different position, adding new fields
> in between args and pad?
>    

Exactly, we do so in several places.  Can be useful if, for example, 
some new capability comes with a resource count value.

What's this thing anyway?  like cpuid bits for x86?


-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 14:14                           ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:14 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 04:10 PM, Alexander Graf wrote:
>
>> When we have reserved fields which are later used for something new,
>> the kernel needs a way to know if the reserved fields are known or not
>> by userspace.  One way to do this is to assume a value of zero means
>> the field is unknown to userspace so ignore it.  Another is to require
>> userspace to set a bit in an already-known flags field, and only act
>> on the new field if its bit was set.  This has the advantage that the
>> old kernel checks for unknown flags and errors out, improving forwards
>> and backwards compatibility.
>>
>> I thought ->cap was already a bit field, so this isn't necessary, but
>> if it isn't, then a flags field is helpful.
>>      
> ->  cap is the capability number. So you want something like:
>
> struct kvm_enable_cap {
>    __u32 cap;
>    __u32 flags;
>    __u64 args[4];
>    __u8 pad[64];
> };
>
> And then check for flags == 0 in the ioctl handler? Flags could later on
> define if the padding changed to a different position, adding new fields
> in between args and pad?
>    

Exactly, we do so in several places.  Can be useful if, for example, 
some new capability comes with a resource count value.

What's this thing anyway?  like cpuid bits for x86?


-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]                         ` <4B95062D.2020908-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 14:16                             ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:16 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 04:14 PM, Alexander Graf wrote:
>
> We're looking at two different ifs here.
>
> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>
> This is constant. Volatile registers go to the PACA; non-volatiles go to
> the vcpu struct.
>    

Okay - so no if ().

> 2) GPR actually loaded in the PACA
>
> When we're in vcpu_load context the registers are in the PACA; when not,
> they're in the vcpu struct.
>
>
> If you have a really easy and fast way to assure that we're always
> inside a vcpu_load context, all is great. I could probably even just put
> in a BUG_ON(not in vcpu_load context) and make the callers safe. But
> some check needs to be done.
>    

x86 assumes in vcpu_load() context (without even a BUG_ON()).  
KVM_GET_REGS and friends are responsible for this.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 14:16                             ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:16 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 04:14 PM, Alexander Graf wrote:
>
> We're looking at two different ifs here.
>
> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>
> This is constant. Volatile registers go to the PACA; non-volatiles go to
> the vcpu struct.
>    

Okay - so no if ().

> 2) GPR actually loaded in the PACA
>
> When we're in vcpu_load context the registers are in the PACA; when not,
> they're in the vcpu struct.
>
>
> If you have a really easy and fast way to assure that we're always
> inside a vcpu_load context, all is great. I could probably even just put
> in a BUG_ON(not in vcpu_load context) and make the callers safe. But
> some check needs to be done.
>    

x86 assumes in vcpu_load() context (without even a BUG_ON()).  
KVM_GET_REGS and friends are responsible for this.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
       [not found]                           ` <4B950656.4010307-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 14:18                               ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:18 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 04:10 PM, Alexander Graf wrote:
>>
>>> When we have reserved fields which are later used for something new,
>>> the kernel needs a way to know if the reserved fields are known or not
>>> by userspace.  One way to do this is to assume a value of zero means
>>> the field is unknown to userspace so ignore it.  Another is to require
>>> userspace to set a bit in an already-known flags field, and only act
>>> on the new field if its bit was set.  This has the advantage that the
>>> old kernel checks for unknown flags and errors out, improving forwards
>>> and backwards compatibility.
>>>
>>> I thought ->cap was already a bit field, so this isn't necessary, but
>>> if it isn't, then a flags field is helpful.
>>>      
>> ->  cap is the capability number. So you want something like:
>>
>> struct kvm_enable_cap {
>>    __u32 cap;
>>    __u32 flags;
>>    __u64 args[4];
>>    __u8 pad[64];
>> };
>>
>> And then check for flags == 0 in the ioctl handler? Flags could later on
>> define if the padding changed to a different position, adding new fields
>> in between args and pad?
>>    
>
> Exactly, we do so in several places.  Can be useful if, for example,
> some new capability comes with a resource count value.
>
> What's this thing anyway?  like cpuid bits for x86?

What thing? This ioctl or the OSI call?

The ioctl is a way to enable a feature on a per-vcpu basis. MOL overlays
the syscall interface with a hypercall interface, so a normal OS syscall
magically becomes a hypercall when magic constants get passed in r3 and r4.

Because for obvious reasons we don't want to enable that when not using
MOL, I figured I'd go in and have userspace decide if it wants to get a
hypercall exit or not. Qemu couldn't really do anything with it after
all. And while at it, I figured I'd better make the interface generic.
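
(On the userspace side that would end up looking something like this -- a
sketch, assuming the KVM_ENABLE_CAP vcpu ioctl and KVM_CAP_PPC_OSI from
this series; error handling trimmed:)

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vcpu_fd is the per-vcpu file descriptor from KVM_CREATE_VCPU. */
static int enable_osi(int vcpu_fd)
{
	struct kvm_enable_cap cap = { .cap = KVM_CAP_PPC_OSI };

	/* Once this succeeds, a guest syscall carrying the MOL magic values
	 * in r3/r4 exits to userspace as an OSI hypercall instead of being
	 * delivered to the guest kernel. */
	return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
}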


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 14:18                               ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:18 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 04:10 PM, Alexander Graf wrote:
>>
>>> When we have reserved fields which are later used for something new,
>>> the kernel needs a way to know if the reserved fields are known or not
>>> by userspace.  One way to do this is to assume a value of zero means
>>> the field is unknown to userspace so ignore it.  Another is to require
>>> userspace to set a bit in an already-known flags field, and only act
>>> on the new field if its bit was set.  This has the advantage that the
>>> old kernel checks for unknown flags and errors out, improving forwards
>>> and backwards compatibility.
>>>
>>> I thought ->cap was already a bit field, so this isn't necessary, but
>>> if it isn't, then a flags field is helpful.
>>>      
>> ->  cap is the capability number. So you want something like:
>>
>> struct kvm_enable_cap {
>>    __u32 cap;
>>    __u32 flags;
>>    __u64 args[4];
>>    __u8 pad[64];
>> };
>>
>> And then check for flags == 0 in the ioctl handler? Flags could later on
>> define if the padding changed to a different position, adding new fields
>> in between args and pad?
>>    
>
> Exactly, we do so in several places.  Can be useful if, for example,
> some new capability comes with a resource count value.
>
> What's this thing anyway?  like cpuid bits for x86?

What thing? This ioctl or the OSI call?

The ioctl is a way to enable a feature on a per-vcpu basis. MOL overlays
the syscall interface with a hypercall interface, so a normal OS syscall
magically becomes a hypercall when magic constants get passed in r3 and r4.

Because for obvious reasons we don't want to enable that when not using
MOL, I figured I'd go in and have userspace decide if it wants to get a
hypercall exit or not. Qemu couldn't really do anything with it after
all. And while at it, I figured I'd better make the interface generic.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
       [not found]                             ` <4B9506C5.30606-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-08 14:20                                 ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:20 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 04:14 PM, Alexander Graf wrote:
>>
>> We're looking at two different ifs here.
>>
>> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>>
>> This is constant. Volatile registers go to the PACA; non-volatiles go to
>> the vcpu struct.
>>    
>
> Okay - so no if ().

Eh.

r[0 - 12] are volatile
r[13 - 31] are non-volatile

So if we want a common gpr access function we need an if. And we need
one, because the opcodes just use register numbers and don't care
where they are.
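
(To make the split concrete, a rough sketch of such a wrapper -- the PACA
field names here are assumptions, not the actual code:)

static inline ulong example_get_gpr(struct kvm_vcpu *vcpu, int num)
{
	/* r0-r12 are volatile and shadowed in the PACA while the vcpu is
	 * loaded; r13-r31 stay in the vcpu struct. */
	if (num < 13)
		return get_paca()->shadow_vcpu.gpr[num];	/* assumed layout */
	return vcpu->arch.gpr[num];
}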

>
>> 2) GPR actually loaded in the PACA
>>
>> When we're in vcpu_load context the registers are in the PACA; when not,
>> they're in the vcpu struct.
>>
>>
>> If you have a really easy and fast way to assure that we're always
>> inside a vcpu_load context, all is great. I could probably even just put
>> in a BUG_ON(not in vcpu_load context) and make the callers safe. But
>> some check needs to be done.
>>    
>
> x86 assumes in vcpu_load() context (without even a BUG_ON()). 
> KVM_GET_REGS and friends are responsible for this.

Oh, interesting. Just drop this patch then :).


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 14:20                                 ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 14:20 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

Avi Kivity wrote:
> On 03/08/2010 04:14 PM, Alexander Graf wrote:
>>
>> We're looking at two different ifs here.
>>
>> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>>
>> This is constant. Volatile registers go to the PACA; non-volatiles go to
>> the vcpu struct.
>>    
>
> Okay - so no if ().

Eh.

r[0 - 12] are volatile
r[13 - 31] are non-volatile

So if we want a common gpr access function we need an if. And we need
one, because the opcodes just use register numbers and don't care
where they are.

>
>> 2) GPR actually loaded in the PACA
>>
>> When we're in vcpu_load context the registers are in the PACA; when not,
>> they're in the vcpu struct.
>>
>>
>> If you have a really easy and fast way to assure that we're always
>> inside a vcpu_load context, all is great. I could probably even just put
>> in a BUG_ON(not in vcpu_load context) and make the callers safe. But
>> some check needs to be done.
>>    
>
> x86 assumes in vcpu_load() context (without even a BUG_ON()). 
> KVM_GET_REGS and friends are responsible for this.

Oh, interesting. Just drop this patch then :).


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-08 14:18                               ` Alexander Graf
@ 2010-03-08 14:21                                 ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:21 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:18 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 04:10 PM, Alexander Graf wrote:
>>      
>>>        
>>>> When we have reserved fields which are later used for something new,
>>>> the kernel needs a way to know if the reserved fields are known or not
>>>> by userspace.  One way to do this is to assume a value of zero means
>>>> the field is unknown to userspace so ignore it.  Another is to require
>>>> userspace to set a bit in an already-known flags field, and only act
>>>> on the new field if its bit was set.  This has the advantage that the
>>>> old kernel checks for unknown flags and errors out, improving forwards
>>>> and backwards compatibility.
>>>>
>>>> I thought ->cap was already a bit field, so this isn't necessary, but
>>>> if it isn't, then a flags field is helpful.
>>>>
>>>>          
>>> ->   cap is the capability number. So you want something like:
>>>
>>> struct kvm_enable_cap {
>>>     __u32 cap;
>>>     __u32 flags;
>>>     __u64 args[4];
>>>     __u8 pad[64];
>>> };
>>>
>>> And then check for flags == 0 in the ioctl handler? Flags could later on
>>> define if the padding changed to a different position, adding new fields
>>> in between args and pad?
>>>
>>>        
>> Exactly, we do so in several places.  Can be useful if, for example,
>> some new capability comes with a resource count value.
>>
>> What's this thing anyway?  like cpuid bits for x86?
>>      
> What thing? This ioctl or the OSI call?
>
> The ioctl is a way to enable a feature on a per-vcpu basis. MOL overlays
> the syscall interface with a hypercall interface, so a normal OS syscall
> magically becomes a hypercall when magic constants get passed in r3 and r4.
>
> Because for obvious reasons we don't want to enable that when not using
> MOL, I figured I'd go in and have userspace decide if it wants to get a
> hypercall exit or not. Qemu couldn't really do anything with it after
> all. And while at it, I figured I'd better make the interface generic.
>    

That's reasonable.  Thanks.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 14:21                                 ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:21 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:18 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 04:10 PM, Alexander Graf wrote:
>>      
>>>        
>>>> When we have reserved fields which are later used for something new,
>>>> the kernel needs a way to know if the reserved fields are known or not
>>>> by userspace.  One way to do this is to assume a value of zero means
>>>> the field is unknown to userspace so ignore it.  Another is to require
>>>> userspace to set a bit in an already-known flags field, and only act
>>>> on the new field if its bit was set.  This has the advantage that the
>>>> old kernel checks for unknown flags and errors out, improving forwards
>>>> and backwards compatibility.
>>>>
>>>> I thought ->cap was already a bit field, so this isn't necessary, but
>>>> if it isn't, then a flags field is helpful.
>>>>
>>>>          
>>> ->   cap is the capability number. So you want something like:
>>>
>>> struct kvm_enable_cap {
>>>     __u32 cap;
>>>     __u32 flags;
>>>     __u64 args[4];
>>>     __u8 pad[64];
>>> };
>>>
>>> And then check for flags == 0 in the ioctl handler? Flags could later on
>>> define if the padding changed to a different position, adding new fields
>>> in between args and pad?
>>>
>>>        
>> Exactly, we do so in several places.  Can be useful if, for example,
>> some new capability comes with a resource count value.
>>
>> What's this thing anyway?  like cpuid bits for x86?
>>      
> What thing? This ioctl or the OSI call?
>
> The ioctl is a way to enable a feature on a per-vcpu basis. MOL overlays
> the syscall interface with a hypercall interface, so a normal OS syscall
> magically becomes a hypercall when magic constants get passed in r3 and r4.
>
> Because for obvious reasons we don't want to enable that when not using
> MOL, I figured I'd go in and have userspace decide if it wants to get a
> hypercall exit or not. Qemu couldn't really do anything with it after
> all. And while at it, I figured I'd better make the interface generic.
>    

That's reasonable.  Thanks.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work
  2010-03-08 14:20                                 ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Alexander Graf
@ 2010-03-08 14:23                                   ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:23 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:20 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 04:14 PM, Alexander Graf wrote:
>>      
>>> We're looking at two different ifs here.
>>>
>>> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>>>
>>> This is constant. Volatile registers go to the PACA; non-volatiles go to
>>> the vcpu struct.
>>>
>>>        
>> Okay - so no if ().
>>      
> Eh.
>
> r[0 - 12] are volatile
> r[13 - 31] are non-volatile
>
> So if we want a common gpr access function we need an if. And we need
> one, because the opcodes just use register numbers and don't care
> where they are.
>    

I see - we have something similar on x86 (where vmx keeps rsp/rip in a 
register and lets us save everything else manually).

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 01/15] KVM: PPC: Make register read/write wrappers always
@ 2010-03-08 14:23                                   ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-08 14:23 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 04:20 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>    
>> On 03/08/2010 04:14 PM, Alexander Graf wrote:
>>      
>>> We're looking at two different ifs here.
>>>
>>> 1) GPR Inside the PACA or not (volatile vs non-volatile)
>>>
>>> This is constant. Volatile registers go to the PACA; non-volatiles go to
>>> the vcpu struct.
>>>
>>>        
>> Okay - so no if ().
>>      
> Eh.
>
> r[0 - 12] are volatile
> r[13 - 31] are non-volatile
>
> So if we want a common gpr access function we need an if. And we need
> one, because the opcodes just use register numbers and don't care
> where they are.
>    

I see - we have something similar on x86 (where vmx keeps rsp/rip in a 
register and lets us save everything else manually).

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* [PATCH 00/15] KVM: PPC: MOL bringup patches
  2010-03-05 16:50 ` Alexander Graf
@ 2010-03-08 18:03 ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac-on-Linux has always lacked PPC64 host support. This is going to
change now!

This patchset contains minor patches to enable MOL, but is mostly about
bug fixes that came out of running Mac OS X. With this set and a pretty
small patch to MOL I have 10.4.11 running as a guest on a 970MP host.

I'll send the MOL patches to the respective ML in the next few days.


v1 -> v2:

 - Add documentation for EXIT_OSI and ENABLE_CAP
 - Add flags to enable_cap
 - Add build fix for !CONFIG_VSX
 - Remove in-paca register check

Alexander Graf (15):
  KVM: PPC: Ensure split mode works
  KVM: PPC: Allow userspace to unset the IRQ line
  KVM: PPC: Make DSISR 32 bits wide
  KVM: PPC: Book3S_32 guest MMU fixes
  KVM: PPC: Split instruction reading out
  KVM: PPC: Don't reload FPU with invalid values
  KVM: PPC: Load VCPU for register fetching
  KVM: PPC: Implement mfsr emulation
  KVM: PPC: Implement BAT reads
  KVM: PPC: Make XER load 32 bit
  KVM: PPC: Implement emulation for lbzux and lhax
  KVM: PPC: Implement alignment interrupt
  KVM: Add support for enabling capabilities per-vcpu
  KVM: PPC: Add OSI hypercall interface
  KVM: PPC: Make build work without CONFIG_VSX/ALTIVEC

 Documentation/kvm/api.txt               |   28 +++++++
 arch/powerpc/include/asm/kvm.h          |    3 +
 arch/powerpc/include/asm/kvm_book3s.h   |   18 +++-
 arch/powerpc/include/asm/kvm_host.h     |    4 +-
 arch/powerpc/include/asm/kvm_ppc.h      |    2 +
 arch/powerpc/kvm/book3s.c               |  130 ++++++++++++++++++++++---------
 arch/powerpc/kvm/book3s_32_mmu.c        |   30 ++++++--
 arch/powerpc/kvm/book3s_64_emulate.c    |   88 +++++++++++++++++++++
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 arch/powerpc/kvm/book3s_64_slb.S        |    2 +-
 arch/powerpc/kvm/emulate.c              |   20 +++++
 arch/powerpc/kvm/powerpc.c              |   43 ++++++++++-
 include/linux/kvm.h                     |   17 ++++
 13 files changed, 335 insertions(+), 52 deletions(-)

^ permalink raw reply	[flat|nested] 140+ messages in thread

* [PATCH 00/15] KVM: PPC: MOL bringup patches
@ 2010-03-08 18:03 ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac-on-Linux has always lacked PPC64 host support. This is going to
change now!

This patchset contains minor patches to enable MOL, but is mostly about
bug fixes that came out of running Mac OS X. With this set and a pretty
small patch to MOL I have 10.4.11 running as a guest on a 970MP host.

I'll send the MOL patches to the respective ML in the next few days.


v1 -> v2:

 - Add documentation for EXIT_OSI and ENABLE_CAP
 - Add flags to enable_cap
 - Add build fix for !CONFIG_VSX
 - Remove in-paca register check

Alexander Graf (15):
  KVM: PPC: Ensure split mode works
  KVM: PPC: Allow userspace to unset the IRQ line
  KVM: PPC: Make DSISR 32 bits wide
  KVM: PPC: Book3S_32 guest MMU fixes
  KVM: PPC: Split instruction reading out
  KVM: PPC: Don't reload FPU with invalid values
  KVM: PPC: Load VCPU for register fetching
  KVM: PPC: Implement mfsr emulation
  KVM: PPC: Implement BAT reads
  KVM: PPC: Make XER load 32 bit
  KVM: PPC: Implement emulation for lbzux and lhax
  KVM: PPC: Implement alignment interrupt
  KVM: Add support for enabling capabilities per-vcpu
  KVM: PPC: Add OSI hypercall interface
  KVM: PPC: Make build work without CONFIG_VSX/ALTIVEC

 Documentation/kvm/api.txt               |   28 +++++++
 arch/powerpc/include/asm/kvm.h          |    3 +
 arch/powerpc/include/asm/kvm_book3s.h   |   18 +++-
 arch/powerpc/include/asm/kvm_host.h     |    4 +-
 arch/powerpc/include/asm/kvm_ppc.h      |    2 +
 arch/powerpc/kvm/book3s.c               |  130 ++++++++++++++++++++++---------
 arch/powerpc/kvm/book3s_32_mmu.c        |   30 ++++++--
 arch/powerpc/kvm/book3s_64_emulate.c    |   88 +++++++++++++++++++++
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 arch/powerpc/kvm/book3s_64_slb.S        |    2 +-
 arch/powerpc/kvm/emulate.c              |   20 +++++
 arch/powerpc/kvm/powerpc.c              |   43 ++++++++++-
 include/linux/kvm.h                     |   17 ++++
 13 files changed, 335 insertions(+), 52 deletions(-)


^ permalink raw reply	[flat|nested] 140+ messages in thread

* [PATCH 01/15] KVM: PPC: Ensure split mode works
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

On PowerPC we can go into MMU Split Mode. That means that either
data relocation is on but instruction relocation is off or vice
versa.

That mode didn't work properly, as we weren't always flushing
entries when going into a new split mode, potentially mapping
different code or data than we're supposed to.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    9 +++---
 arch/powerpc/kvm/book3s.c             |   46 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index e6ea974..14d0262 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -99,10 +99,11 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL	0xfffffffffff00000
-#define VSID_REAL_DR	0xffffffffffe00000
-#define VSID_REAL_IR	0xffffffffffd00000
-#define VSID_BAT	0xffffffffffc00000
+#define VSID_REAL_DR	0x7ffffffffff00000
+#define VSID_REAL_IR	0x7fffffffffe00000
+#define VSID_SPLIT_MASK	0x7fffffffffe00000
+#define VSID_REAL	0x7fffffffffc00000
+#define VSID_BAT	0x7fffffffffb00000
 #define VSID_PR		0x8000000000000000
 
 extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 ea, u64 ea_mask);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 94c229d..c2ffb91 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -133,6 +133,14 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 
 	if (((vcpu->arch.msr & (MSR_IR|MSR_DR)) != (old_msr & (MSR_IR|MSR_DR))) ||
 	    (vcpu->arch.msr & MSR_PR) != (old_msr & MSR_PR)) {
+		bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+		bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
+
+		/* Flush split mode PTEs */
+		if (dr != ir)
+			kvmppc_mmu_pte_vflush(vcpu, VSID_SPLIT_MASK,
+					      VSID_SPLIT_MASK);
+
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
 	}
@@ -395,15 +403,7 @@ static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
 	} else {
 		pte->eaddr = eaddr;
 		pte->raddr = eaddr & 0xffffffff;
-		pte->vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte->vpage |= VSID_REAL;
-		case MSR_DR:
-			pte->vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte->vpage |= VSID_REAL_IR;
-		}
+		pte->vpage = VSID_REAL | eaddr >> 12;
 		pte->may_read = true;
 		pte->may_write = true;
 		pte->may_execute = true;
@@ -512,12 +512,10 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	int page_found = 0;
 	struct kvmppc_pte pte;
 	bool is_mmio = false;
+	bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+	bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
 
-	if ( vec == BOOK3S_INTERRUPT_DATA_STORAGE ) {
-		relocated = (vcpu->arch.msr & MSR_DR);
-	} else {
-		relocated = (vcpu->arch.msr & MSR_IR);
-	}
+	relocated = data ? dr : ir;
 
 	/* Resolve real address if translation turned on */
 	if (relocated) {
@@ -529,14 +527,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		pte.raddr = eaddr & 0xffffffff;
 		pte.eaddr = eaddr;
 		pte.vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte.vpage |= VSID_REAL;
-		case MSR_DR:
-			pte.vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte.vpage |= VSID_REAL_IR;
-		}
+	}
+
+	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
+	case 0:
+		pte.vpage |= VSID_REAL;
+		break;
+	case MSR_DR:
+		pte.vpage |= VSID_REAL_DR;
+		break;
+	case MSR_IR:
+		pte.vpage |= VSID_REAL_IR;
+		break;
 	}
 
 	if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
-- 
1.6.0.2
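
A side note on the switch statements reworked above: the removed versions
had no break, so execution fell through and ORed more than one VSID_*
value into the vpage. The contrast, reduced to a standalone sketch (same
constants as the patch, otherwise illustrative only):

	/* old: falls through, e.g. the case 0 path ORs in all three */
	switch (msr & (MSR_DR|MSR_IR)) {
	case 0:      vpage |= VSID_REAL;
	case MSR_DR: vpage |= VSID_REAL_DR;
	case MSR_IR: vpage |= VSID_REAL_IR;
	}

	/* new: each MSR combination selects exactly one VSID */
	switch (msr & (MSR_DR|MSR_IR)) {
	case 0:      vpage |= VSID_REAL;    break;
	case MSR_DR: vpage |= VSID_REAL_DR; break;
	case MSR_IR: vpage |= VSID_REAL_IR; break;
	}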

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 01/15] KVM: PPC: Ensure split mode works
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

On PowerPC we can go into MMU Split Mode. That means that either
data relocation is on but instruction relocation is off or vice
versa.

That mode didn't work properly, as we weren't always flushing
entries when going into a new split mode, potentially mapping
different code or data than we're supposed to.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    9 +++---
 arch/powerpc/kvm/book3s.c             |   46 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index e6ea974..14d0262 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -99,10 +99,11 @@ struct kvmppc_vcpu_book3s {
 #define CONTEXT_GUEST		1
 #define CONTEXT_GUEST_END	2
 
-#define VSID_REAL	0xfffffffffff00000
-#define VSID_REAL_DR	0xffffffffffe00000
-#define VSID_REAL_IR	0xffffffffffd00000
-#define VSID_BAT	0xffffffffffc00000
+#define VSID_REAL_DR	0x7ffffffffff00000
+#define VSID_REAL_IR	0x7fffffffffe00000
+#define VSID_SPLIT_MASK	0x7fffffffffe00000
+#define VSID_REAL	0x7fffffffffc00000
+#define VSID_BAT	0x7fffffffffb00000
 #define VSID_PR		0x8000000000000000
 
 extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 ea, u64 ea_mask);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 94c229d..c2ffb91 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -133,6 +133,14 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
 
 	if (((vcpu->arch.msr & (MSR_IR|MSR_DR)) != (old_msr & (MSR_IR|MSR_DR))) ||
 	    (vcpu->arch.msr & MSR_PR) != (old_msr & MSR_PR)) {
+		bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+		bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
+
+		/* Flush split mode PTEs */
+		if (dr != ir)
+			kvmppc_mmu_pte_vflush(vcpu, VSID_SPLIT_MASK,
+					      VSID_SPLIT_MASK);
+
 		kvmppc_mmu_flush_segments(vcpu);
 		kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
 	}
@@ -395,15 +403,7 @@ static int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
 	} else {
 		pte->eaddr = eaddr;
 		pte->raddr = eaddr & 0xffffffff;
-		pte->vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte->vpage |= VSID_REAL;
-		case MSR_DR:
-			pte->vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte->vpage |= VSID_REAL_IR;
-		}
+		pte->vpage = VSID_REAL | eaddr >> 12;
 		pte->may_read = true;
 		pte->may_write = true;
 		pte->may_execute = true;
@@ -512,12 +512,10 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	int page_found = 0;
 	struct kvmppc_pte pte;
 	bool is_mmio = false;
+	bool dr = (vcpu->arch.msr & MSR_DR) ? true : false;
+	bool ir = (vcpu->arch.msr & MSR_IR) ? true : false;
 
-	if ( vec == BOOK3S_INTERRUPT_DATA_STORAGE ) {
-		relocated = (vcpu->arch.msr & MSR_DR);
-	} else {
-		relocated = (vcpu->arch.msr & MSR_IR);
-	}
+	relocated = data ? dr : ir;
 
 	/* Resolve real address if translation turned on */
 	if (relocated) {
@@ -529,14 +527,18 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		pte.raddr = eaddr & 0xffffffff;
 		pte.eaddr = eaddr;
 		pte.vpage = eaddr >> 12;
-		switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
-		case 0:
-			pte.vpage |= VSID_REAL;
-		case MSR_DR:
-			pte.vpage |= VSID_REAL_DR;
-		case MSR_IR:
-			pte.vpage |= VSID_REAL_IR;
-		}
+	}
+
+	switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
+	case 0:
+		pte.vpage |= VSID_REAL;
+		break;
+	case MSR_DR:
+		pte.vpage |= VSID_REAL_DR;
+		break;
+	case MSR_IR:
+		pte.vpage |= VSID_REAL_IR;
+		break;
 	}
 
 	if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Userspace can tell us that it wants to trigger an interrupt. But
so far it can't tell us that it wants to stop triggering one.

So let's interpret the parameter of the ioctl we already have to tell
us whether we want to raise or lower the interrupt line.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm.h     |    3 +++
 arch/powerpc/include/asm/kvm_ppc.h |    2 ++
 arch/powerpc/kvm/book3s.c          |    6 ++++++
 arch/powerpc/kvm/powerpc.c         |    5 ++++-
 4 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
index 19bae31..6c5547d 100644
--- a/arch/powerpc/include/asm/kvm.h
+++ b/arch/powerpc/include/asm/kvm.h
@@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
 #define KVM_REG_QPR		0x0040
 #define KVM_REG_FQPR		0x0060
 
+#define KVM_INTERRUPT_SET	-1U
+#define KVM_INTERRUPT_UNSET	-2U
+
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c7fcdd7..6a2464e 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -92,6 +92,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                        struct kvm_interrupt *irq);
+extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                         struct kvm_interrupt *irq);
 
 extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                   unsigned int op, int *advance);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index c2ffb91..9e0bc47 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -230,6 +230,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
 	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
 }
 
+void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                  struct kvm_interrupt *irq)
+{
+	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
+}
+
 int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 {
 	int deliver = 1;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 5a8eb95..a28a512 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -449,7 +449,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 {
-	kvmppc_core_queue_external(vcpu, irq);
+	if (irq->irq == KVM_INTERRUPT_UNSET)
+		kvmppc_core_dequeue_external(vcpu, irq);
+	else
+		kvmppc_core_queue_external(vcpu, irq);
 
 	if (waitqueue_active(&vcpu->wq)) {
 		wake_up_interruptible(&vcpu->wq);
-- 
1.6.0.2
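
From the userspace side, the new semantics can be driven roughly like this
(a sketch only; error handling is omitted and vcpu_fd is assumed to be an
already open vcpu file descriptor):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Raise or lower the guest's external interrupt line. */
	static void set_ext_irq_line(int vcpu_fd, int raise)
	{
		struct kvm_interrupt irq = {
			.irq = raise ? KVM_INTERRUPT_SET : KVM_INTERRUPT_UNSET,
		};

		ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
	}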


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Userspace can tell us that it wants to trigger an interrupt. But
so far it can't tell us that it wants to stop triggering one.

So let's interpret the parameter of the ioctl we already have to tell
us whether we want to raise or lower the interrupt line.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm.h     |    3 +++
 arch/powerpc/include/asm/kvm_ppc.h |    2 ++
 arch/powerpc/kvm/book3s.c          |    6 ++++++
 arch/powerpc/kvm/powerpc.c         |    5 ++++-
 4 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm.h b/arch/powerpc/include/asm/kvm.h
index 19bae31..6c5547d 100644
--- a/arch/powerpc/include/asm/kvm.h
+++ b/arch/powerpc/include/asm/kvm.h
@@ -84,4 +84,7 @@ struct kvm_guest_debug_arch {
 #define KVM_REG_QPR		0x0040
 #define KVM_REG_FQPR		0x0060
 
+#define KVM_INTERRUPT_SET	-1U
+#define KVM_INTERRUPT_UNSET	-2U
+
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index c7fcdd7..6a2464e 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -92,6 +92,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
 extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                        struct kvm_interrupt *irq);
+extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                         struct kvm_interrupt *irq);
 
 extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                                   unsigned int op, int *advance);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index c2ffb91..9e0bc47 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -230,6 +230,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
 	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
 }
 
+void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
+                                  struct kvm_interrupt *irq)
+{
+	kvmppc_book3s_dequeue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
+}
+
 int kvmppc_book3s_irqprio_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 {
 	int deliver = 1;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 5a8eb95..a28a512 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -449,7 +449,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 {
-	kvmppc_core_queue_external(vcpu, irq);
+	if (irq->irq == KVM_INTERRUPT_UNSET)
+		kvmppc_core_dequeue_external(vcpu, irq);
+	else
+		kvmppc_core_queue_external(vcpu, irq);
 
 	if (waitqueue_active(&vcpu->wq)) {
 		wake_up_interruptible(&vcpu->wq);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 03/15] KVM: PPC: Make DSISR 32 bits wide
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

DSISR is only defined as 32 bits wide. So let's reflect that in the
structs too.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h   |    2 +-
 arch/powerpc/include/asm/kvm_host.h     |    2 +-
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 14d0262..9f5a992 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -84,8 +84,8 @@ struct kvmppc_vcpu_book3s {
 	u64 hid[6];
 	u64 gqr[8];
 	int slb_nr;
+	u32 dsisr;
 	u64 sdr1;
-	u64 dsisr;
 	u64 hior;
 	u64 msr_mask;
 	u64 vsid_first;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 119deb4..0ebda67 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -260,7 +260,7 @@ struct kvm_vcpu_arch {
 
 	u32 last_inst;
 #ifdef CONFIG_PPC64
-	ulong fault_dsisr;
+	u32 fault_dsisr;
 #endif
 	ulong fault_dear;
 	ulong fault_esr;
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
index c1584d0..faca876 100644
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ b/arch/powerpc/kvm/book3s_64_interrupts.S
@@ -171,7 +171,7 @@ kvmppc_handler_highmem:
 	std	r3, VCPU_PC(r7)
 	std	r4, VCPU_SHADOW_SRR1(r7)
 	std	r5, VCPU_FAULT_DEAR(r7)
-	std	r6, VCPU_FAULT_DSISR(r7)
+	stw	r6, VCPU_FAULT_DSISR(r7)
 
 	ld	r5, VCPU_HFLAGS(r7)
 	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 03/15] KVM: PPC: Make DSISR 32 bits wide
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

DSISR is only defined as 32 bits wide. So let's reflect that in the
structs too.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h   |    2 +-
 arch/powerpc/include/asm/kvm_host.h     |    2 +-
 arch/powerpc/kvm/book3s_64_interrupts.S |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 14d0262..9f5a992 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -84,8 +84,8 @@ struct kvmppc_vcpu_book3s {
 	u64 hid[6];
 	u64 gqr[8];
 	int slb_nr;
+	u32 dsisr;
 	u64 sdr1;
-	u64 dsisr;
 	u64 hior;
 	u64 msr_mask;
 	u64 vsid_first;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 119deb4..0ebda67 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -260,7 +260,7 @@ struct kvm_vcpu_arch {
 
 	u32 last_inst;
 #ifdef CONFIG_PPC64
-	ulong fault_dsisr;
+	u32 fault_dsisr;
 #endif
 	ulong fault_dear;
 	ulong fault_esr;
diff --git a/arch/powerpc/kvm/book3s_64_interrupts.S b/arch/powerpc/kvm/book3s_64_interrupts.S
index c1584d0..faca876 100644
--- a/arch/powerpc/kvm/book3s_64_interrupts.S
+++ b/arch/powerpc/kvm/book3s_64_interrupts.S
@@ -171,7 +171,7 @@ kvmppc_handler_highmem:
 	std	r3, VCPU_PC(r7)
 	std	r4, VCPU_SHADOW_SRR1(r7)
 	std	r5, VCPU_FAULT_DEAR(r7)
-	std	r6, VCPU_FAULT_DSISR(r7)
+	stw	r6, VCPU_FAULT_DSISR(r7)
 
 	ld	r5, VCPU_HFLAGS(r7)
 	rldicl.	r5, r5, 0, 63		/* CR = ((r5 & 1) == 0) */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 04/15] KVM: PPC: Book3S_32 guest MMU fixes
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

This patch makes the VSID of mapped pages always reflect all the special cases
we have, like split mode.

It also changes the tlbie mask to 0x0ffff000 according to the spec. The mask
we used before was incorrect.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s_32_mmu.c      |   30 +++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9f5a992..b47b2f5 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -44,6 +44,7 @@ struct kvmppc_sr {
 	bool Ks;
 	bool Kp;
 	bool nx;
+	bool valid;
 };
 
 struct kvmppc_bat {
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1483a9b..7071e22 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -57,6 +57,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
+static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+					     u64 *vsid);
 
 static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
 {
@@ -66,13 +68,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 					 bool data)
 {
-	struct kvmppc_sr *sre = find_sr(to_book3s(vcpu), eaddr);
+	u64 vsid;
 	struct kvmppc_pte pte;
 
 	if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data))
 		return pte.vpage;
 
-	return (((u64)eaddr >> 12) & 0xffff) | (((u64)sre->vsid) << 16);
+	kvmppc_mmu_book3s_32_esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
+	return (((u64)eaddr >> 12) & 0xffff) | (vsid << 16);
 }
 
 static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
@@ -142,8 +145,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 				    bat->bepi_mask);
 		}
 		if ((eaddr & bat->bepi_mask) == bat->bepi) {
+			u64 vsid;
+			kvmppc_mmu_book3s_32_esid_to_vsid(vcpu,
+				eaddr >> SID_SHIFT, &vsid);
+			vsid <<= 16;
+			pte->vpage = (((u64)eaddr >> 12) & 0xffff) | vsid;
+
 			pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask);
-			pte->vpage = (eaddr >> 12) | VSID_BAT;
 			pte->may_read = bat->pp;
 			pte->may_write = bat->pp > 1;
 			pte->may_execute = true;
@@ -302,6 +310,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 	/* And then put in the new SR */
 	sre->raw = value;
 	sre->vsid = (value & 0x0fffffff);
+	sre->valid = (value & 0x80000000) ? false : true;
 	sre->Ks = (value & 0x40000000) ? true : false;
 	sre->Kp = (value & 0x20000000) ? true : false;
 	sre->nx = (value & 0x10000000) ? true : false;
@@ -312,7 +321,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 
 static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
 {
-	kvmppc_mmu_pte_flush(vcpu, ea, ~0xFFFULL);
+	kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
 }
 
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
@@ -333,15 +342,22 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
 		break;
 	case MSR_DR|MSR_IR:
 	{
-		ulong ea;
-		ea = esid << SID_SHIFT;
-		*vsid = find_sr(to_book3s(vcpu), ea)->vsid;
+		ulong ea = esid << SID_SHIFT;
+		struct kvmppc_sr *sr = find_sr(to_book3s(vcpu), ea);
+
+		if (!sr->valid)
+			return -1;
+
+		*vsid = sr->vsid;
 		break;
 	}
 	default:
 		BUG();
 	}
 
+	if (vcpu->arch.msr & MSR_PR)
+		*vsid |= VSID_PR;
+
 	return 0;
 }
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 04/15] KVM: PPC: Book3S_32 guest MMU fixes
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

This patch makes the VSID of mapped pages always reflect all the special cases
we have, like split mode.

It also changes the tlbie mask to 0x0ffff000 according to the spec. The mask
we used before was incorrect.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s_32_mmu.c      |   30 +++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 9f5a992..b47b2f5 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -44,6 +44,7 @@ struct kvmppc_sr {
 	bool Ks;
 	bool Kp;
 	bool nx;
+	bool valid;
 };
 
 struct kvmppc_bat {
diff --git a/arch/powerpc/kvm/book3s_32_mmu.c b/arch/powerpc/kvm/book3s_32_mmu.c
index 1483a9b..7071e22 100644
--- a/arch/powerpc/kvm/book3s_32_mmu.c
+++ b/arch/powerpc/kvm/book3s_32_mmu.c
@@ -57,6 +57,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
 
 static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  struct kvmppc_pte *pte, bool data);
+static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
+					     u64 *vsid);
 
 static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
 {
@@ -66,13 +68,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e
 static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
 					 bool data)
 {
-	struct kvmppc_sr *sre = find_sr(to_book3s(vcpu), eaddr);
+	u64 vsid;
 	struct kvmppc_pte pte;
 
 	if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data))
 		return pte.vpage;
 
-	return (((u64)eaddr >> 12) & 0xffff) | (((u64)sre->vsid) << 16);
+	kvmppc_mmu_book3s_32_esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
+	return (((u64)eaddr >> 12) & 0xffff) | (vsid << 16);
 }
 
 static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
@@ -142,8 +145,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
 				    bat->bepi_mask);
 		}
 		if ((eaddr & bat->bepi_mask) == bat->bepi) {
+			u64 vsid;
+			kvmppc_mmu_book3s_32_esid_to_vsid(vcpu,
+				eaddr >> SID_SHIFT, &vsid);
+			vsid <<= 16;
+			pte->vpage = (((u64)eaddr >> 12) & 0xffff) | vsid;
+
 			pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask);
-			pte->vpage = (eaddr >> 12) | VSID_BAT;
 			pte->may_read = bat->pp;
 			pte->may_write = bat->pp > 1;
 			pte->may_execute = true;
@@ -302,6 +310,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 	/* And then put in the new SR */
 	sre->raw = value;
 	sre->vsid = (value & 0x0fffffff);
+	sre->valid = (value & 0x80000000) ? false : true;
 	sre->Ks = (value & 0x40000000) ? true : false;
 	sre->Kp = (value & 0x20000000) ? true : false;
 	sre->nx = (value & 0x10000000) ? true : false;
@@ -312,7 +321,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
 
 static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
 {
-	kvmppc_mmu_pte_flush(vcpu, ea, ~0xFFFULL);
+	kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
 }
 
 static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
@@ -333,15 +342,22 @@ static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
 		break;
 	case MSR_DR|MSR_IR:
 	{
-		ulong ea;
-		ea = esid << SID_SHIFT;
-		*vsid = find_sr(to_book3s(vcpu), ea)->vsid;
+		ulong ea = esid << SID_SHIFT;
+		struct kvmppc_sr *sr = find_sr(to_book3s(vcpu), ea);
+
+		if (!sr->valid)
+			return -1;
+
+		*vsid = sr->vsid;
 		break;
 	}
 	default:
 		BUG();
 	}
 
+	if (vcpu->arch.msr & MSR_PR)
+		*vsid |= VSID_PR;
+
 	return 0;
 }
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 05/15] KVM: PPC: Split instruction reading out
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The current check_ext function reads the instruction and then does
the checking. Let's split the reading out so we can reuse it for
different functions.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s.c |   24 ++++++++++++++++--------
 1 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 9e0bc47..400ae0a 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -650,26 +650,34 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 	kvmppc_recalc_shadow_msr(vcpu);
 }
 
-static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 {
 	ulong srr0 = vcpu->arch.pc;
 	int ret;
 
-	/* Need to do paired single emulation? */
-	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
-		return EMULATE_DONE;
-
-	/* Read out the instruction */
 	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &vcpu->arch.last_inst, false);
 	if (ret == -ENOENT) {
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 33, 33, 1);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 34, 36, 0);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 42, 47, 0);
 		kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE);
-	} else if(ret == EMULATE_DONE) {
+		return EMULATE_AGAIN;
+	}
+
+	return EMULATE_DONE;
+}
+
+static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+{
+
+	/* Need to do paired single emulation? */
+	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
+		return EMULATE_DONE;
+
+	/* Read out the instruction */
+	if (kvmppc_read_inst(vcpu) == EMULATE_DONE)
 		/* Need to emulate */
 		return EMULATE_FAIL;
-	}
 
 	return EMULATE_AGAIN;
 }
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 05/15] KVM: PPC: Split instruction reading out
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The current check_ext function reads the instruction and then does
the checking. Let's split the reading out so we can reuse it for
different functions.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |   24 ++++++++++++++++--------
 1 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 9e0bc47..400ae0a 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -650,26 +650,34 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 	kvmppc_recalc_shadow_msr(vcpu);
 }
 
-static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+static int kvmppc_read_inst(struct kvm_vcpu *vcpu)
 {
 	ulong srr0 = vcpu->arch.pc;
 	int ret;
 
-	/* Need to do paired single emulation? */
-	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
-		return EMULATE_DONE;
-
-	/* Read out the instruction */
 	ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &vcpu->arch.last_inst, false);
 	if (ret == -ENOENT) {
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 33, 33, 1);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 34, 36, 0);
 		vcpu->arch.msr = kvmppc_set_field(vcpu->arch.msr, 42, 47, 0);
 		kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_INST_STORAGE);
-	} else if(ret == EMULATE_DONE) {
+		return EMULATE_AGAIN;
+	}
+
+	return EMULATE_DONE;
+}
+
+static int kvmppc_check_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr)
+{
+
+	/* Need to do paired single emulation? */
+	if (!(vcpu->arch.hflags & BOOK3S_HFLAG_PAIRED_SINGLE))
+		return EMULATE_DONE;
+
+	/* Read out the instruction */
+	if (kvmppc_read_inst(vcpu) == EMULATE_DONE)
 		/* Need to emulate */
 		return EMULATE_FAIL;
-	}
 
 	return EMULATE_AGAIN;
 }
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 06/15] KVM: PPC: Don't reload FPU with invalid values
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When the guest activates the FPU, we load it up. That's fine when
it wasn't activated before on the host, but if it was we end up
reloading FPU values from last time the FPU was deactivated on the
host without writing the proper values back to the vcpu struct.

This patch checks if the FPU is enabled already and if so just doesn't
bother activating it, making FPU operations survive guest context switches.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 400ae0a..029e1be 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -701,6 +701,11 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 		return RESUME_GUEST;
 	}
 
+	/* We already own the ext */
+	if (vcpu->arch.guest_owned_ext & msr) {
+		return RESUME_GUEST;
+	}
+
 #ifdef DEBUG_EXT
 	printk(KERN_INFO "Loading up ext 0x%lx\n", msr);
 #endif
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 06/15] KVM: PPC: Don't reload FPU with invalid values
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

When the guest activates the FPU, we load it up. That's fine when
it wasn't activated before on the host, but if it was we end up
reloading FPU values from last time the FPU was deactivated on the
host without writing the proper values back to the vcpu struct.

This patch checks if the FPU is enabled already and if so just doesn't
bother activating it, making FPU operations survive guest context switches.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 400ae0a..029e1be 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -701,6 +701,11 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 		return RESUME_GUEST;
 	}
 
+	/* We already own the ext */
+	if (vcpu->arch.guest_owned_ext & msr) {
+		return RESUME_GUEST;
+	}
+
 #ifdef DEBUG_EXT
 	printk(KERN_INFO "Loading up ext 0x%lx\n", msr);
 #endif
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 07/15] KVM: PPC: Load VCPU for register fetching
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

When trying to read or store vcpu register data, we should also make
sure the vcpu is actually loaded, so we're 100% sure we get the correct
values.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 029e1be..585dc91 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -955,6 +955,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	regs->pc = vcpu->arch.pc;
 	regs->cr = kvmppc_get_cr(vcpu);
 	regs->ctr = vcpu->arch.ctr;
@@ -975,6 +977,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
@@ -982,6 +986,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	vcpu->arch.pc = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
 	vcpu->arch.ctr = regs->ctr;
@@ -1001,6 +1007,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 07/15] KVM: PPC: Load VCPU for register fetching
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

When trying to read or store vcpu register data, we should also make
sure the vcpu is actually loaded, so we're 100% sure we get the correct
values.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 029e1be..585dc91 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -955,6 +955,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	regs->pc = vcpu->arch.pc;
 	regs->cr = kvmppc_get_cr(vcpu);
 	regs->ctr = vcpu->arch.ctr;
@@ -975,6 +977,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
@@ -982,6 +986,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	int i;
 
+	vcpu_load(vcpu);
+
 	vcpu->arch.pc = regs->pc;
 	kvmppc_set_cr(vcpu, regs->cr);
 	vcpu->arch.ctr = regs->ctr;
@@ -1001,6 +1007,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
 		kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
 
+	vcpu_put(vcpu);
+
 	return 0;
 }
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 08/15] KVM: PPC: Implement mfsr emulation
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We already emulate the mfsrin instruction, which passes the SR number
in a register. But we lacked support for mfsr, which encodes the SR
number in the opcode.

So let's implement it.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index c989214..8d7a78d 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -35,6 +35,7 @@
 #define OP_31_XOP_SLBMTE	402
 #define OP_31_XOP_SLBIE		434
 #define OP_31_XOP_SLBIA		498
+#define OP_31_XOP_MFSR		595
 #define OP_31_XOP_MFSRIN	659
 #define OP_31_XOP_SLBMFEV	851
 #define OP_31_XOP_EIOIO		854
@@ -90,6 +91,18 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		case OP_31_XOP_MTMSR:
 			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
 			break;
+		case OP_31_XOP_MFSR:
+		{
+			int srnum;
+
+			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
 		case OP_31_XOP_MFSRIN:
 		{
 			int srnum;
-- 
1.6.0.2
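
For reference, mfsr carries the segment register number in instruction
bits 12:15 (IBM numbering, bit 0 being the most significant bit of the
32-bit opcode), which is what the kvmppc_get_field(inst, 12 + 32, 15 + 32)
call above extracts. In plain shift-and-mask terms (a sketch, with a
made-up helper name):

	/* SR field of mfsr: bits 12:15 of the 32-bit instruction */
	static inline int mfsr_srnum(u32 inst)
	{
		return (inst >> 16) & 0xf;
	}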

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 08/15] KVM: PPC: Implement mfsr emulation
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

We already emulate the mfsrin instruction, which passes the SR number
in a register. But we lacked support for mfsr, which encodes the SR
number in the opcode.

So let's implement it.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index c989214..8d7a78d 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -35,6 +35,7 @@
 #define OP_31_XOP_SLBMTE	402
 #define OP_31_XOP_SLBIE		434
 #define OP_31_XOP_SLBIA		498
+#define OP_31_XOP_MFSR		595
 #define OP_31_XOP_MFSRIN	659
 #define OP_31_XOP_SLBMFEV	851
 #define OP_31_XOP_EIOIO		854
@@ -90,6 +91,18 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		case OP_31_XOP_MTMSR:
 			kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
 			break;
+		case OP_31_XOP_MFSR:
+		{
+			int srnum;
+
+			srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
+			if (vcpu->arch.mmu.mfsrin) {
+				u32 sr;
+				sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
+				kvmppc_set_gpr(vcpu, get_rt(inst), sr);
+			}
+			break;
+		}
 		case OP_31_XOP_MFSRIN:
 		{
 			int srnum;
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 09/15] KVM: PPC: Implement BAT reads
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

BATs can't only be written to, you can also read them out!
So let's implement emulation for reading BAT values again.

While at it, I also made BAT setting flush the segment cache,
so we're absolutely sure there's no MMU state left when writing
BATs.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   35 ++++++++++++++++++++++++++++++++++
 1 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 8d7a78d..39d5003 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -239,6 +239,34 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
 	}
 }
 
+static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	if (sprn % 2)
+		return bat->raw >> 32;
+	else
+		return bat->raw;
+}
+
 static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
@@ -290,6 +318,7 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 		/* BAT writes happen so rarely that we're ok to flush
 		 * everything here */
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
 		break;
 	case SPRN_HID0:
 		to_book3s(vcpu)->hid[0] = spr_val;
@@ -373,6 +402,12 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	int emulated = EMULATE_DONE;
 
 	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+		break;
 	case SPRN_SDR1:
 		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
 		break;
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 09/15] KVM: PPC: Implement BAT reads
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

BATs can't only be written to, you can also read them out!
So let's implement emulation for reading BAT values again.

While at it, I also made BAT setting flush the segment cache,
so we're absolutely sure there's no MMU state left when writing
BATs.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_emulate.c |   35 ++++++++++++++++++++++++++++++++++
 1 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 8d7a78d..39d5003 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -239,6 +239,34 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
 	}
 }
 
+static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
+{
+	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
+	struct kvmppc_bat *bat;
+
+	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+		bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
+		break;
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+		bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
+		break;
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+		bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
+		break;
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
+		break;
+	default:
+		BUG();
+	}
+
+	if (sprn % 2)
+		return bat->raw >> 32;
+	else
+		return bat->raw;
+}
+
 static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
 {
 	struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
@@ -290,6 +318,7 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
 		/* BAT writes happen so rarely that we're ok to flush
 		 * everything here */
 		kvmppc_mmu_pte_flush(vcpu, 0, 0);
+		kvmppc_mmu_flush_segments(vcpu);
 		break;
 	case SPRN_HID0:
 		to_book3s(vcpu)->hid[0] = spr_val;
@@ -373,6 +402,12 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	int emulated = EMULATE_DONE;
 
 	switch (sprn) {
+	case SPRN_IBAT0U ... SPRN_IBAT3L:
+	case SPRN_IBAT4U ... SPRN_IBAT7L:
+	case SPRN_DBAT0U ... SPRN_DBAT3L:
+	case SPRN_DBAT4U ... SPRN_DBAT7L:
+		kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
+		break;
 	case SPRN_SDR1:
 		kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
 		break;
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 10/15] KVM: PPC: Make XER load 32 bit
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have a 32 bit value in the PACA to store XER in. We also do an stw
when storing XER in there. But then we load it with ld, completely
screwing it up on every entry.

Welcome to the Big Endian world.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 35b7627..0919679 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -145,7 +145,7 @@ slb_do_enter:
 	lwz	r11, (PACA_KVM_CR)(r13)
 	mtcr	r11
 
-	ld	r11, (PACA_KVM_XER)(r13)
+	lwz	r11, (PACA_KVM_XER)(r13)
 	mtxer	r11
 
 	ld	r11, (PACA_KVM_R11)(r13)
-- 
1.6.0.2
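
Why the ld was wrong: the XER slot in the PACA is only 4 bytes, and on a
big-endian host an 8-byte load from that offset returns the stored word in
the upper half of the result, with whatever follows the slot in the lower
half. A minimal sketch of the same effect in plain C (illustrative only,
types abbreviated):

	#include <string.h>

	typedef unsigned int u32;
	typedef unsigned long long u64;

	static u64 demo_bad_xer_load(void)
	{
		unsigned char slot[8] = { 0, 0, 0, 0, 0xaa, 0xaa, 0xaa, 0xaa };
		u32 xer = 0x20000000;	/* example: XER with CA set */
		u64 wide;

		memcpy(slot, &xer, 4);	/* what the stw stores */
		memcpy(&wide, slot, 8);	/* what the ld read back */
		/* big-endian result: 0x20000000aaaaaaaa, not 0x20000000 */
		return wide;
	}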


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 10/15] KVM: PPC: Make XER load 32 bit
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We have a 32 bit value in the PACA to store XER in. We also do an stw
when storing XER in there. But then we load it with ld, completely
screwing it up on every entry.

Welcome to the Big Endian world.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_64_slb.S |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 35b7627..0919679 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -145,7 +145,7 @@ slb_do_enter:
 	lwz	r11, (PACA_KVM_CR)(r13)
 	mtcr	r11
 
-	ld	r11, (PACA_KVM_XER)(r13)
+	lwz	r11, (PACA_KVM_XER)(r13)
 	mtxer	r11
 
 	ld	r11, (PACA_KVM_R11)(r13)
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 11/15] KVM: PPC: Implement emulation for lbzux and lhax
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:03   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We get MMIOs with the weirdest instructions. But every time we do,
we need to improve our emulator to implement them.

So let's do that - this time it's lbzux and lhax's round.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2410ec2..dbb5d68 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -38,10 +38,12 @@
 #define OP_31_XOP_LBZX      87
 #define OP_31_XOP_STWX      151
 #define OP_31_XOP_STBX      215
+#define OP_31_XOP_LBZUX     119
 #define OP_31_XOP_STBUX     247
 #define OP_31_XOP_LHZX      279
 #define OP_31_XOP_LHZUX     311
 #define OP_31_XOP_MFSPR     339
+#define OP_31_XOP_LHAX      343
 #define OP_31_XOP_STHX      407
 #define OP_31_XOP_STHUX     439
 #define OP_31_XOP_MTSPR     467
@@ -173,6 +175,19 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
 			break;
 
+		case OP_31_XOP_LBZUX:
+			rt = get_rt(inst);
+			ra = get_ra(inst);
+			rb = get_rb(inst);
+
+			ea = kvmppc_get_gpr(vcpu, rb);
+			if (ra)
+				ea += kvmppc_get_gpr(vcpu, ra);
+
+			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+			kvmppc_set_gpr(vcpu, ra, ea);
+			break;
+
 		case OP_31_XOP_STWX:
 			rs = get_rs(inst);
 			emulated = kvmppc_handle_store(run, vcpu,
@@ -202,6 +217,11 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			kvmppc_set_gpr(vcpu, rs, ea);
 			break;
 
+		case OP_31_XOP_LHAX:
+			rt = get_rt(inst);
+			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+			break;
+
 		case OP_31_XOP_LHZX:
 			rt = get_rt(inst);
 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-- 
1.6.0.2
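
As a reminder of the semantics being emulated above: lbzux rT,rA,rB forms
EA = (rA) + (rB), loads one zero-extended byte into rT and writes EA back
into rA ("update"; rA = 0 or rA = rT is an invalid form). A standalone
sketch, with guest memory modelled as a flat buffer purely for
illustration:

	typedef unsigned char u8;
	typedef unsigned long ulong;

	static void lbzux_sketch(ulong *gpr, const u8 *mem, int rt, int ra, int rb)
	{
		ulong ea = gpr[ra] + gpr[rb];

		gpr[rt] = mem[ea];	/* byte load, zero-extended */
		gpr[ra] = ea;		/* update form: EA written back to rA */
	}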


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 11/15] KVM: PPC: Implement emulation for lbzux and lhax
@ 2010-03-08 18:03   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

We get MMIOs with the weirdest instructions. But every time we do,
we need to improve our emulator to implement them.

So let's do that - this time it's lbzux and lhax's round.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/emulate.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index 2410ec2..dbb5d68 100644
--- a/arch/powerpc/kvm/emulate.c
+++ b/arch/powerpc/kvm/emulate.c
@@ -38,10 +38,12 @@
 #define OP_31_XOP_LBZX      87
 #define OP_31_XOP_STWX      151
 #define OP_31_XOP_STBX      215
+#define OP_31_XOP_LBZUX     119
 #define OP_31_XOP_STBUX     247
 #define OP_31_XOP_LHZX      279
 #define OP_31_XOP_LHZUX     311
 #define OP_31_XOP_MFSPR     339
+#define OP_31_XOP_LHAX      343
 #define OP_31_XOP_STHX      407
 #define OP_31_XOP_STHUX     439
 #define OP_31_XOP_MTSPR     467
@@ -173,6 +175,19 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
 			break;
 
+		case OP_31_XOP_LBZUX:
+			rt = get_rt(inst);
+			ra = get_ra(inst);
+			rb = get_rb(inst);
+
+			ea = kvmppc_get_gpr(vcpu, rb);
+			if (ra)
+				ea += kvmppc_get_gpr(vcpu, ra);
+
+			emulated = kvmppc_handle_load(run, vcpu, rt, 1, 1);
+			kvmppc_set_gpr(vcpu, ra, ea);
+			break;
+
 		case OP_31_XOP_STWX:
 			rs = get_rs(inst);
 			emulated = kvmppc_handle_store(run, vcpu,
@@ -202,6 +217,11 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
 			kvmppc_set_gpr(vcpu, rs, ea);
 			break;
 
+		case OP_31_XOP_LHAX:
+			rt = get_rt(inst);
+			emulated = kvmppc_handle_loads(run, vcpu, rt, 2, 1);
+			break;
+
 		case OP_31_XOP_LHZX:
 			rt = get_rt(inst);
 			emulated = kvmppc_handle_load(run, vcpu, rt, 2, 1);
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 12/15] KVM: PPC: Implement alignment interrupt
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac OS X has some applications - namely the Finder - that require alignment
interrupts to work properly. So we need to implement them.

But the spec also differs between the 970 and the 750. While the 750 requires
the DSISR fields to reflect some instruction bits, the 970 declares this as an
optional feature. So we need to reconstruct DSISR manually.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s.c             |    9 +++++++
 arch/powerpc/kvm/book3s_64_emulate.c  |   40 +++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index b47b2f5..1a169f3 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -131,6 +131,7 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
 extern void kvmppc_load_up_fpu(void);
 extern void kvmppc_load_up_altivec(void);
 extern void kvmppc_load_up_vsx(void);
+extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 585dc91..6b8b5ed 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -905,6 +905,15 @@ program_interrupt:
 		}
 		break;
 	}
+	case BOOK3S_INTERRUPT_ALIGNMENT:
+		vcpu->arch.dear = vcpu->arch.fault_dear;
+		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) {
+			to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu,
+				vcpu->arch.last_inst);
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+		}
+		r = RESUME_GUEST;
+		break;
 	case BOOK3S_INTERRUPT_MACHINE_CHECK:
 	case BOOK3S_INTERRUPT_TRACE:
 		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 39d5003..c401dd4 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -44,6 +44,8 @@
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
+#define OP_LFS			48
+
 #define SPRN_GQR0		912
 #define SPRN_GQR1		913
 #define SPRN_GQR2		914
@@ -474,3 +476,41 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	return emulated;
 }
 
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 12/15] KVM: PPC: Implement alignment interrupt
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Mac OS X has some applications - namely the Finder - that require alignment
interrupts to work properly. So we need to implement them.

But the spec also differs between the 970 and the 750. While the 750 requires
the DSISR fields to reflect some instruction bits, the 970 declares this as an
optional feature. So we need to reconstruct DSISR manually.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s.h |    1 +
 arch/powerpc/kvm/book3s.c             |    9 +++++++
 arch/powerpc/kvm/book3s_64_emulate.c  |   40 +++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index b47b2f5..1a169f3 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -131,6 +131,7 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
 extern void kvmppc_load_up_fpu(void);
 extern void kvmppc_load_up_altivec(void);
 extern void kvmppc_load_up_vsx(void);
+extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 585dc91..6b8b5ed 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -905,6 +905,15 @@ program_interrupt:
 		}
 		break;
 	}
+	case BOOK3S_INTERRUPT_ALIGNMENT:
+		vcpu->arch.dear = vcpu->arch.fault_dear;
+		if (kvmppc_read_inst(vcpu) == EMULATE_DONE) {
+			to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu,
+				vcpu->arch.last_inst);
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+		}
+		r = RESUME_GUEST;
+		break;
 	case BOOK3S_INTERRUPT_MACHINE_CHECK:
 	case BOOK3S_INTERRUPT_TRACE:
 		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
diff --git a/arch/powerpc/kvm/book3s_64_emulate.c b/arch/powerpc/kvm/book3s_64_emulate.c
index 39d5003..c401dd4 100644
--- a/arch/powerpc/kvm/book3s_64_emulate.c
+++ b/arch/powerpc/kvm/book3s_64_emulate.c
@@ -44,6 +44,8 @@
 /* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
 #define OP_31_XOP_DCBZ		1010
 
+#define OP_LFS			48
+
 #define SPRN_GQR0		912
 #define SPRN_GQR1		913
 #define SPRN_GQR2		914
@@ -474,3 +476,41 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
 	return emulated;
 }
 
+u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
+{
+	u32 dsisr = 0;
+
+	/*
+	 * This is what the spec says about DSISR bits (not mentioned = 0):
+	 *
+	 * 12:13		[DS]	Set to bits 30:31
+	 * 15:16		[X]	Set to bits 29:30
+	 * 17			[X]	Set to bit 25
+	 *			[D/DS]	Set to bit 5
+	 * 18:21		[X]	Set to bits 21:24
+	 *			[D/DS]	Set to bits 1:4
+	 * 22:26			Set to bits 6:10 (RT/RS/FRT/FRS)
+	 * 27:31			Set to bits 11:15 (RA)
+	 */
+
+	switch (get_op(inst)) {
+	/* D-form */
+	case OP_LFS:
+		dsisr |= (inst >> 12) & 0x4000;	/* bit 17 */
+		dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
+		break;
+	/* X-form */
+	case 31:
+		dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
+		dsisr |= (inst << 8)  & 0x04000; /* bit 17 */
+		dsisr |= (inst << 3)  & 0x03c00; /* bits 18:21 */
+		break;
+	default:
+		printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
+		break;
+	}
+
+	dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
+
+	return dsisr;
+}
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Sometimes we don't want all capabilities to be available to all
our vcpus. One example for that is the OSI interface, implemented
in the next patch.

In order to have a generic mechanism for enabling capabilities
individually, this patch introduces a new ioctl that can be used
for this purpose. That way features we don't want in all guests or
userspace configurations can just not be enabled and we're good.
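
As a rough sketch of how userspace could drive this - the vcpu fd plumbing is
assumed, and KVM_CAP_PPC_OSI only shows up with the next patch:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: enable a single capability on an existing vcpu fd. */
static int enable_vcpu_cap(int vcpu_fd, __u32 cap_nr)
{
	struct kvm_enable_cap cap;

	memset(&cap, 0, sizeof(cap));	/* flags must be zero or the ioctl fails */
	cap.cap = cap_nr;

	return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
}

A call like enable_vcpu_cap(vcpu_fd, KVM_CAP_PPC_OSI) returning < 0 then means
the kernel lacks either the ioctl or that particular capability.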

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>

---

v1 -> v2:

  - Add flags to enable_cap
  - Update documentation for kvm_enable_cap
---
 Documentation/kvm/api.txt  |   15 +++++++++++++++
 arch/powerpc/kvm/powerpc.c |   26 ++++++++++++++++++++++++++
 include/linux/kvm.h        |   11 +++++++++++
 3 files changed, 52 insertions(+), 0 deletions(-)

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index d170cb4..6a19ab6 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
 See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
 yet and must be cleared on entry.
 
+4.34 KVM_ENABLE_CAP
+
+Capability: basic
+Architectures: all
+Type: vcpu ioctl
+Parameters: struct kvm_enable_cap (in)
+Returns: 0 on success; -1 on error
+
+Not all extensions are enabled by default. Using this ioctl the application
+can enable an extension, making it available to the guest.
+
+On systems that do not support this ioctl, it always fails. On systems that
+do support it, it only works for extensions that are supported for enablement.
+As of this writing, the only extension that can be enabled is KVM_CAP_PPC_OSI.
+
 
 5. The kvm_run structure
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index a28a512..8bd8204 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -462,6 +462,23 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r;
+
+	if (cap->flags)
+		return -EINVAL;
+
+	switch (cap->cap) {
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
                                     struct kvm_mp_state *mp_state)
 {
@@ -490,6 +507,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
 		break;
 	}
+	case KVM_ENABLE_CAP:
+	{
+		struct kvm_enable_cap cap;
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			goto out;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index ce28767..a18ac92 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -400,6 +400,15 @@ struct kvm_ioeventfd {
 	__u8  pad[36];
 };
 
+/* for KVM_ENABLE_CAP */
+struct kvm_enable_cap {
+	/* in */
+	__u32 cap;
+	__u32 flags;
+	__u64 args[4];
+	__u8  pad[64];
+};
+
 #define KVMIO 0xAE
 
 /*
@@ -696,6 +705,8 @@ struct kvm_clock_data {
 /* Available with KVM_CAP_DEBUGREGS */
 #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
 #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
+/* No need for CAP, because then it just always fails */
+#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

Sometimes we don't want all capabilities to be available to all
our vcpus. One example for that is the OSI interface, implemented
in the next patch.

In order to have a generic mechanism for enabling capabilities
individually, this patch introduces a new ioctl that can be used
for this purpose. That way features we don't want in all guests or
userspace configurations can just not be enabled and we're good.

Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - Add flags to enable_cap
  - Update documentation for kvm_enable_cap
---
 Documentation/kvm/api.txt  |   15 +++++++++++++++
 arch/powerpc/kvm/powerpc.c |   26 ++++++++++++++++++++++++++
 include/linux/kvm.h        |   11 +++++++++++
 3 files changed, 52 insertions(+), 0 deletions(-)

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index d170cb4..6a19ab6 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
 See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
 yet and must be cleared on entry.
 
+4.34 KVM_ENABLE_CAP
+
+Capability: basic
+Architectures: all
+Type: vcpu ioctl
+Parameters: struct kvm_enable_cap (in)
+Returns: 0 on success; -1 on error
+
+Not all extensions are enabled by default. Using this ioctl the application
+can enable an extension, making it available to the guest.
+
+On systems that do not support this ioctl, it always fails. On systems that
+do support it, it only works for extensions that are supported for enablement.
+As of this writing, the only extension that can be enabled is KVM_CAP_PPC_OSI.
+
 
 5. The kvm_run structure
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index a28a512..8bd8204 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -462,6 +462,23 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r;
+
+	if (cap->flags)
+		return -EINVAL;
+
+	switch (cap->cap) {
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
                                     struct kvm_mp_state *mp_state)
 {
@@ -490,6 +507,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_interrupt(vcpu, &irq);
 		break;
 	}
+	case KVM_ENABLE_CAP:
+	{
+		struct kvm_enable_cap cap;
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			goto out;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -EINVAL;
 	}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index ce28767..a18ac92 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -400,6 +400,15 @@ struct kvm_ioeventfd {
 	__u8  pad[36];
 };
 
+/* for KVM_ENABLE_CAP */
+struct kvm_enable_cap {
+	/* in */
+	__u32 cap;
+	__u32 flags;
+	__u64 args[4];
+	__u8  pad[64];
+};
+
 #define KVMIO 0xAE
 
 /*
@@ -696,6 +705,8 @@ struct kvm_clock_data {
 /* Available with KVM_CAP_DEBUGREGS */
 #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
 #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
+/* No need for CAP, because then it just always fails */
+#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

MOL uses its own hypercall interface to call back into userspace when
the guest wants to do something.

So let's implement that as an exit reason, specify it with a CAP and
only really use it when userspace wants us to.

The only user of it so far is MOL.
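
A minimal sketch of the userspace side - handle_osi_call() is a made-up
stand-in for MOL's real handler, and error handling is trimmed:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical MOL-style handler: inspects and updates the guest GPRs in place. */
extern void handle_osi_call(__u64 gprs[32]);

static void run_vcpu(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			break;

		if (run->exit_reason == KVM_EXIT_OSI) {
			/* All 32 guest GPRs were copied here on exit; whatever
			 * we write back is loaded into the vcpu on the next
			 * KVM_RUN. */
			handle_osi_call(run->osi.gprs);
			continue;
		}

		break;	/* other exit reasons would be handled here */
	}
}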

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>

---

v1 -> v2:

  - Add documentation for OSI exit struct
---
 Documentation/kvm/api.txt             |   13 +++++++++++++
 arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
 arch/powerpc/include/asm/kvm_host.h   |    2 ++
 arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
 arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
 include/linux/kvm.h                   |    6 ++++++
 6 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index 6a19ab6..b2129e8 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -932,6 +932,19 @@ s390 specific.
 
 powerpc specific.
 
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
+
+MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
+hypercalls and exit with this exit struct that contains all the guest gprs.
+
+If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
+Userspace can now handle the hypercall and when it's done modify the gprs as
+necessary. Upon guest entry all guest GPRs will then be replaced by the values
+in this struct.
+
 		/* Fix the size of the union. */
 		char padding[256];
 	};
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 1a169f3..54929cd 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -147,6 +147,11 @@ static inline ulong dsisr(void)
 
 extern void kvm_return_point(void);
 
+/* Magic register values loaded into r3 and r4 before the 'sc' assembly
+ * instruction for the OSI hypercalls */
+#define OSI_SC_MAGIC_R3			0x113724FA
+#define OSI_SC_MAGIC_R4			0x77810F9B
+
 #define INS_DCBZ			0x7c0007ec
 
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0ebda67..486f1ca 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ struct kvm_vcpu_arch {
 	u8 mmio_sign_extend;
 	u8 dcr_needed;
 	u8 dcr_is_write;
+	u8 osi_needed;
+	u8 osi_enabled;
 
 	u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 6b8b5ed..e752a59 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -871,12 +871,24 @@ program_interrupt:
 		break;
 	}
 	case BOOK3S_INTERRUPT_SYSCALL:
-#ifdef EXIT_DEBUG
-		printk(KERN_INFO "Syscall Nr %d\n", (int)kvmppc_get_gpr(vcpu, 0));
-#endif
-		vcpu->stat.syscall_exits++;
-		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-		r = RESUME_GUEST;
+		// XXX make user settable
+		if (vcpu->arch.osi_enabled &&
+		    (((u32)kvmppc_get_gpr(vcpu, 3)) == OSI_SC_MAGIC_R3) &&
+		    (((u32)kvmppc_get_gpr(vcpu, 4)) == OSI_SC_MAGIC_R4)) {
+			u64 *gprs = run->osi.gprs;
+			int i;
+
+			run->exit_reason = KVM_EXIT_OSI;
+			for (i = 0; i < 32; i++)
+				gprs[i] = kvmppc_get_gpr(vcpu, i);
+			vcpu->arch.osi_needed = 1;
+			r = RESUME_HOST_NV;
+
+		} else {
+			vcpu->stat.syscall_exits++;
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+			r = RESUME_GUEST;
+		}
 		break;
 	case BOOK3S_INTERRUPT_FP_UNAVAIL:
 	case BOOK3S_INTERRUPT_ALTIVEC:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8bd8204..035bad4 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -148,6 +148,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	switch (ext) {
 	case KVM_CAP_PPC_SEGSTATE:
 	case KVM_CAP_PPC_PAIRED_SINGLES:
+	case KVM_CAP_PPC_OSI:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -429,6 +430,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (!vcpu->arch.dcr_is_write)
 			kvmppc_complete_dcr_load(vcpu, run);
 		vcpu->arch.dcr_needed = 0;
+	} else if (vcpu->arch.osi_needed) {
+		u64 *gprs = run->osi.gprs;
+		int i;
+
+		for (i = 0; i < 32; i++)
+			kvmppc_set_gpr(vcpu, i, gprs[i]);
+		vcpu->arch.osi_needed = 0;
 	}
 
 	kvmppc_core_deliver_interrupts(vcpu);
@@ -471,6 +479,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	switch (cap->cap) {
+	case KVM_CAP_PPC_OSI:
+		r = 0;
+		vcpu->arch.osi_enabled = true;
+		break;
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index a18ac92..0307961 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -160,6 +160,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_DCR              15
 #define KVM_EXIT_NMI              16
 #define KVM_EXIT_INTERNAL_ERROR   17
+#define KVM_EXIT_OSI              18
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
@@ -259,6 +260,10 @@ struct kvm_run {
 			__u32 ndata;
 			__u64 data[16];
 		} internal;
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
@@ -516,6 +521,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_DEBUGREGS 50
 #endif
 #define KVM_CAP_X86_ROBUST_SINGLESTEP 51
+#define KVM_CAP_PPC_OSI 52
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

MOL uses its own hypercall interface to call back into userspace when
the guest wants to do something.

So let's implement that as an exit reason, specify it with a CAP and
only really use it when userspace wants us to.

The only user of it so far is MOL.

Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - Add documentation for OSI exit struct
---
 Documentation/kvm/api.txt             |   13 +++++++++++++
 arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
 arch/powerpc/include/asm/kvm_host.h   |    2 ++
 arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
 arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
 include/linux/kvm.h                   |    6 ++++++
 6 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
index 6a19ab6..b2129e8 100644
--- a/Documentation/kvm/api.txt
+++ b/Documentation/kvm/api.txt
@@ -932,6 +932,19 @@ s390 specific.
 
 powerpc specific.
 
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
+
+MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
+hypercalls and exit with this exit struct that contains all the guest gprs.
+
+If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
+Userspace can now handle the hypercall and when it's done modify the gprs as
+necessary. Upon guest entry all guest GPRs will then be replaced by the values
+in this struct.
+
 		/* Fix the size of the union. */
 		char padding[256];
 	};
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 1a169f3..54929cd 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -147,6 +147,11 @@ static inline ulong dsisr(void)
 
 extern void kvm_return_point(void);
 
+/* Magic register values loaded into r3 and r4 before the 'sc' assembly
+ * instruction for the OSI hypercalls */
+#define OSI_SC_MAGIC_R3			0x113724FA
+#define OSI_SC_MAGIC_R4			0x77810F9B
+
 #define INS_DCBZ			0x7c0007ec
 
 #endif /* __ASM_KVM_BOOK3S_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0ebda67..486f1ca 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ struct kvm_vcpu_arch {
 	u8 mmio_sign_extend;
 	u8 dcr_needed;
 	u8 dcr_is_write;
+	u8 osi_needed;
+	u8 osi_enabled;
 
 	u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 6b8b5ed..e752a59 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -871,12 +871,24 @@ program_interrupt:
 		break;
 	}
 	case BOOK3S_INTERRUPT_SYSCALL:
-#ifdef EXIT_DEBUG
-		printk(KERN_INFO "Syscall Nr %d\n", (int)kvmppc_get_gpr(vcpu, 0));
-#endif
-		vcpu->stat.syscall_exits++;
-		kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
-		r = RESUME_GUEST;
+		// XXX make user settable
+		if (vcpu->arch.osi_enabled &&
+		    (((u32)kvmppc_get_gpr(vcpu, 3)) == OSI_SC_MAGIC_R3) &&
+		    (((u32)kvmppc_get_gpr(vcpu, 4)) == OSI_SC_MAGIC_R4)) {
+			u64 *gprs = run->osi.gprs;
+			int i;
+
+			run->exit_reason = KVM_EXIT_OSI;
+			for (i = 0; i < 32; i++)
+				gprs[i] = kvmppc_get_gpr(vcpu, i);
+			vcpu->arch.osi_needed = 1;
+			r = RESUME_HOST_NV;
+
+		} else {
+			vcpu->stat.syscall_exits++;
+			kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+			r = RESUME_GUEST;
+		}
 		break;
 	case BOOK3S_INTERRUPT_FP_UNAVAIL:
 	case BOOK3S_INTERRUPT_ALTIVEC:
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8bd8204..035bad4 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -148,6 +148,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	switch (ext) {
 	case KVM_CAP_PPC_SEGSTATE:
 	case KVM_CAP_PPC_PAIRED_SINGLES:
+	case KVM_CAP_PPC_OSI:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -429,6 +430,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		if (!vcpu->arch.dcr_is_write)
 			kvmppc_complete_dcr_load(vcpu, run);
 		vcpu->arch.dcr_needed = 0;
+	} else if (vcpu->arch.osi_needed) {
+		u64 *gprs = run->osi.gprs;
+		int i;
+
+		for (i = 0; i < 32; i++)
+			kvmppc_set_gpr(vcpu, i, gprs[i]);
+		vcpu->arch.osi_needed = 0;
 	}
 
 	kvmppc_core_deliver_interrupts(vcpu);
@@ -471,6 +479,10 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		return -EINVAL;
 
 	switch (cap->cap) {
+	case KVM_CAP_PPC_OSI:
+		r = 0;
+		vcpu->arch.osi_enabled = true;
+		break;
 	default:
 		r = -EINVAL;
 		break;
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index a18ac92..0307961 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -160,6 +160,7 @@ struct kvm_pit_config {
 #define KVM_EXIT_DCR              15
 #define KVM_EXIT_NMI              16
 #define KVM_EXIT_INTERNAL_ERROR   17
+#define KVM_EXIT_OSI              18
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 #define KVM_INTERNAL_ERROR_EMULATION 1
@@ -259,6 +260,10 @@ struct kvm_run {
 			__u32 ndata;
 			__u64 data[16];
 		} internal;
+		/* KVM_EXIT_OSI */
+		struct {
+			__u64 gprs[32];
+		} osi;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
@@ -516,6 +521,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_DEBUGREGS 50
 #endif
 #define KVM_CAP_X86_ROBUST_SINGLESTEP 51
+#define KVM_CAP_PPC_OSI 52
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 15/15] KVM: PPC: Make build work without CONFIG_VSX/ALTIVEC
       [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-08 18:03     ` Alexander Graf
  2010-03-08 18:03     ` Alexander Graf
                       ` (7 subsequent siblings)
  8 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The FPU/Altivec/VSX enablement also brought access to some structure
elements that are only defined when the respective config options
are enabled.

Unfortunately I forgot to check for the config options in some places,
so let's do that now.

Unbreaks the build when CONFIG_VSX is not set.

Signed-off-by: Alexander Graf <agraf-l3A5Bk7waGM@public.gmane.org>
---
 arch/powerpc/kvm/book3s.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index e752a59..00e9684 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -608,7 +608,9 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 {
 	struct thread_struct *t = &current->thread;
 	u64 *vcpu_fpr = vcpu->arch.fpr;
+#ifdef CONFIG_VSX
 	u64 *vcpu_vsx = vcpu->arch.vsr;
+#endif
 	u64 *thread_fpr = (u64*)t->fpr;
 	int i;
 
@@ -688,7 +690,9 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 {
 	struct thread_struct *t = &current->thread;
 	u64 *vcpu_fpr = vcpu->arch.fpr;
+#ifdef CONFIG_VSX
 	u64 *vcpu_vsx = vcpu->arch.vsr;
+#endif
 	u64 *thread_fpr = (u64*)t->fpr;
 	int i;
 
@@ -1218,8 +1222,12 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
 	int ret;
 	struct thread_struct ext_bkp;
+#ifdef CONFIG_ALTIVEC
 	bool save_vec = current->thread.used_vr;
+#endif
+#ifdef CONFIG_VSX
 	bool save_vsx = current->thread.used_vsr;
+#endif
 	ulong ext_msr;
 
 	/* No need to go into the guest when all we do is going out */
-- 
1.6.0.2

^ permalink raw reply related	[flat|nested] 140+ messages in thread

* [PATCH 15/15] KVM: PPC: Make build work without CONFIG_VSX/ALTIVEC
@ 2010-03-08 18:03     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:03 UTC (permalink / raw)
  To: kvm-ppc-u79uwXL29TY76Z2rM5mHXA; +Cc: kvm-u79uwXL29TY76Z2rM5mHXA

The FPU/Altivec/VSX enablement also brought access to some structure
elements that are only defined when the respective config options
are enabled.

Unfortunately I forgot to check for the config options in some places,
so let's do that now.

Unbreaks the build when CONFIG_VSX is not set.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index e752a59..00e9684 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -608,7 +608,9 @@ void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
 {
 	struct thread_struct *t = &current->thread;
 	u64 *vcpu_fpr = vcpu->arch.fpr;
+#ifdef CONFIG_VSX
 	u64 *vcpu_vsx = vcpu->arch.vsr;
+#endif
 	u64 *thread_fpr = (u64*)t->fpr;
 	int i;
 
@@ -688,7 +690,9 @@ static int kvmppc_handle_ext(struct kvm_vcpu *vcpu, unsigned int exit_nr,
 {
 	struct thread_struct *t = &current->thread;
 	u64 *vcpu_fpr = vcpu->arch.fpr;
+#ifdef CONFIG_VSX
 	u64 *vcpu_vsx = vcpu->arch.vsr;
+#endif
 	u64 *thread_fpr = (u64*)t->fpr;
 	int i;
 
@@ -1218,8 +1222,12 @@ int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
 	int ret;
 	struct thread_struct ext_bkp;
+#ifdef CONFIG_ALTIVEC
 	bool save_vec = current->thread.used_vr;
+#endif
+#ifdef CONFIG_VSX
 	bool save_vsx = current->thread.used_vsr;
+#endif
 	ulong ext_msr;
 
 	/* No need to go into the guest when all we do is going out */
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 140+ messages in thread

* Re: [PATCH 00/15] KVM: PPC: MOL bringup patches
  2010-03-08 18:03 ` Alexander Graf
@ 2010-03-08 18:06   ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:06 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Alexander Graf wrote:
> Mac-on-Linux has always lacked PPC64 host support. This is going to
> change now!
>
> This patchset contains minor patches to enable MOL, but is mostly about
> bug fixes that came out of running Mac OS X. With this set and a pretty
> small patch to MOL I have 10.4.11 running as a guest on a 970MP host.
>
> I'll send the MOl patches to the respective ML in the next days.
>   

The patches for MOL are integrated in their SVN already. Forgot to
change the description.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 00/15] KVM: PPC: MOL bringup patches
@ 2010-03-08 18:06   ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-08 18:06 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm

Alexander Graf wrote:
> Mac-on-Linux has always lacked PPC64 host support. This is going to
> change now!
>
> This patchset contains minor patches to enable MOL, but is mostly about
> bug fixes that came out of running Mac OS X. With this set and a pretty
> small patch to MOL I have 10.4.11 running as a guest on a 970MP host.
>
> I'll send the MOl patches to the respective ML in the next days.
>   

The patches for MOL are integrated in their SVN already. Forgot to
change the description.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]   ` <1268071402-27112-3-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-09 12:50       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 12:50 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> Userspace can tell us that it wants to trigger an interrupt. But
> so far it can't tell us that it wants to stop triggering one.
>
> So let's interpret the parameter to the ioctl that we have anyways
> to tell us if we want to raise or lower the interrupt line.
>    

I asked for a KVM_CAP_ for this.  What was the conclusion of that thread?

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-09 12:50       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 12:50 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> Userspace can tell us that it wants to trigger an interrupt. But
> so far it can't tell us that it wants to stop triggering one.
>
> So let's interpret the parameter to the ioctl that we have anyways
> to tell us if we want to raise or lower the interrupt line.
>    

I asked for a KVM_CAP_ for this.  What was the conclusion of that thread?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]       ` <4B964412.8030708-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-09 12:54           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 12:54 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA


On 09.03.2010, at 13:50, Avi Kivity wrote:

> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>> Userspace can tell us that it wants to trigger an interrupt. But
>> so far it can't tell us that it wants to stop triggering one.
>> 
>> So let's interpret the parameter to the ioctl that we have anyways
>> to tell us if we want to raise or lower the interrupt line.
>>   
> 
> I asked for a KVM_CAP_ for this.  What was the conclusion of that thread?

Uh - did we come to one?

The last thing you said about it was:

> Having individual capabilities makes backporting a lot easier (otherwise you have to backport the whole thing).  If the changes are logically separate, I prefer 500 separate capabilities.
> 
> However, for a platform bringup, it's okay to have just one capability, assuming none of the changes are applicable to other platforms.

So I assumed it'd be ok to not have one. If you like I can send an additional patch adding the CAP.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
@ 2010-03-09 12:54           ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 12:54 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA


On 09.03.2010, at 13:50, Avi Kivity wrote:

> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>> Userspace can tell us that it wants to trigger an interrupt. But
>> so far it can't tell us that it wants to stop triggering one.
>> 
>> So let's interpret the parameter to the ioctl that we have anyways
>> to tell us if we want to raise or lower the interrupt line.
>>   
> 
> I asked for a KVM_CAP_ for this.  What was the conclusion of that thread?

Uh - did we come to one?

The last thing you said about it was:

> Having individual capabilities makes backporting a lot easier (otherwise you have to backport the whole thing).  If the changes are logically separate, I prefer 500 separate capabilities.
> 
> However, for a platform bringup, it's okay to have just one capability, assuming none of the changes are applicable to other platforms.

So I assumed it'd be ok to not have one. If you like I can send an additional patch adding the CAP.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-08 18:03     ` Alexander Graf
@ 2010-03-09 12:56       ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 12:56 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> Some times we don't want all capabilities to be available to all
> our vcpus. One example for that is the OSI interface, implemented
> in the next patch.
>
> In order to have a generic mechanism in how to enable capabilities
> individually, this patch introduces a new ioctl that can be used
> for this purpose. That way features we don't want in all guests or
> userspace configurations can just not be enabled and we're good.
>
>
> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
> index d170cb4..6a19ab6 100644
> --- a/Documentation/kvm/api.txt
> +++ b/Documentation/kvm/api.txt
> @@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
>   See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
>   yet and must be cleared on entry.
>
> +4.34 KVM_ENABLE_CAP
> +
> +Capability: basic
>    

Capability: basic means that the feature was present in 2.6.22.  
Otherwise you need to specify the KVM_CAP_ that presents this feature.

> +Architectures: all
>
>    

But it's implemented for ppc only (other arches will get ENOTTY).

> +Not all extensions are enabled by default. Using this ioctl the application
> +can enable an extension, making it available to the guest.
> +
> +On systems that do not support this ioctl, it always fails. On systems that
> +do support it, it only works for extensions that are supported for enablement.
> +As of writing this the only enablement enabled extenion is KVM_CAP_PPC_OSI.
>    

That needs to be documented.  It also needs to be discoverable 
separately - we can have a kernel with KVM_ENABLE_CAP but without 
KVM_CAP_PPC_OSI.

btw, KVM_CAP_PPC_OSI conflicts with the KVM_CAP_ namespace.  Please 
choose another namespace.

Need to document the structure fields.

>
>   /*
> @@ -696,6 +705,8 @@ struct kvm_clock_data {
>   /* Available with KVM_CAP_DEBUGREGS */
>   #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
>   #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
> +/* No need for CAP, because then it just always fails */
> +#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
>    
The CAPs are needed so you can discover what you have without running guests.



-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-09 12:56       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 12:56 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> Some times we don't want all capabilities to be available to all
> our vcpus. One example for that is the OSI interface, implemented
> in the next patch.
>
> In order to have a generic mechanism in how to enable capabilities
> individually, this patch introduces a new ioctl that can be used
> for this purpose. That way features we don't want in all guests or
> userspace configurations can just not be enabled and we're good.
>
>
> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
> index d170cb4..6a19ab6 100644
> --- a/Documentation/kvm/api.txt
> +++ b/Documentation/kvm/api.txt
> @@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
>   See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
>   yet and must be cleared on entry.
>
> +4.34 KVM_ENABLE_CAP
> +
> +Capability: basic
>    

Capability: basic means that the feature was present in 2.6.22.  
Otherwise you need to specify the KVM_CAP_ that presents this feature.

> +Architectures: all
>
>    

But it's implemented for ppc only (other arches will get ENOTTY).

> +Not all extensions are enabled by default. Using this ioctl the application
> +can enable an extension, making it available to the guest.
> +
> +On systems that do not support this ioctl, it always fails. On systems that
> +do support it, it only works for extensions that are supported for enablement.
> +As of writing this the only enablement enabled extenion is KVM_CAP_PPC_OSI.
>    

That needs to be documented.  It also needs to be discoverable 
separately - we can have a kernel with KVM_ENABLE_CAP but without 
KVM_CAP_PPC_OSI.

btw, KVM_CAP_PPC_OSI conflicts with the KVM_CAP_ namespace.  Please 
choose another namespace.

Need to document the structure fields.

>
>   /*
> @@ -696,6 +705,8 @@ struct kvm_clock_data {
>   /* Available with KVM_CAP_DEBUGREGS */
>   #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
>   #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
> +/* No need for CAP, because then it just always fails */
> +#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
>    
The CAPs are needed so you can discover what you have without running guests.



-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
  2010-03-08 18:03     ` Alexander Graf
@ 2010-03-09 13:00       ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:00 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> MOL uses its own hypercall interface to call back into userspace when
> the guest wants to do something.
>
> So let's implement that as an exit reason, specify it with a CAP and
> only really use it when userspace wants us to.
>
> The only user of it so far is MOL.
>
> Signed-off-by: Alexander Graf<agraf@suse.de>
>
> ---
>
> v1 ->  v2:
>
>    - Add documentation for OSI exit struct
> ---
>   Documentation/kvm/api.txt             |   13 +++++++++++++
>   arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
>   arch/powerpc/include/asm/kvm_host.h   |    2 ++
>   arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
>   arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
>   include/linux/kvm.h                   |    6 ++++++
>   6 files changed, 56 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
> index 6a19ab6..b2129e8 100644
> --- a/Documentation/kvm/api.txt
> +++ b/Documentation/kvm/api.txt
> @@ -932,6 +932,19 @@ s390 specific.
>
>   powerpc specific.
>
> +		/* KVM_EXIT_OSI */
> +		struct {
> +			__u64 gprs[32];
> +		} osi;
> +
> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
> +hypercalls and exit with this exit struct that contains all the guest gprs.
> +
> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
> +Userspace can now handle the hypercall and when it's done modify the gprs as
> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
> +in this struct.
> +
>    

That's migration unsafe.  There may not be next guest entry on this host.

Is using KVM_[GS]ET_REGS problematic for some reason?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
@ 2010-03-09 13:00       ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:00 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/08/2010 08:03 PM, Alexander Graf wrote:
> MOL uses its own hypercall interface to call back into userspace when
> the guest wants to do something.
>
> So let's implement that as an exit reason, specify it with a CAP and
> only really use it when userspace wants us to.
>
> The only user of it so far is MOL.
>
> Signed-off-by: Alexander Graf<agraf@suse.de>
>
> ---
>
> v1 ->  v2:
>
>    - Add documentation for OSI exit struct
> ---
>   Documentation/kvm/api.txt             |   13 +++++++++++++
>   arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
>   arch/powerpc/include/asm/kvm_host.h   |    2 ++
>   arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
>   arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
>   include/linux/kvm.h                   |    6 ++++++
>   6 files changed, 56 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
> index 6a19ab6..b2129e8 100644
> --- a/Documentation/kvm/api.txt
> +++ b/Documentation/kvm/api.txt
> @@ -932,6 +932,19 @@ s390 specific.
>
>   powerpc specific.
>
> +		/* KVM_EXIT_OSI */
> +		struct {
> +			__u64 gprs[32];
> +		} osi;
> +
> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
> +hypercalls and exit with this exit struct that contains all the guest gprs.
> +
> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
> +Userspace can now handle the hypercall and when it's done modify the gprs as
> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
> +in this struct.
> +
>    

That's migration unsafe.  There may not be next guest entry on this host.

Is using KVM_[GS]ET_REGS problematic for some reason?

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-09 12:56       ` Avi Kivity
@ 2010-03-09 13:01         ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 13:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm


On 09.03.2010, at 13:56, Avi Kivity wrote:

> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>> Some times we don't want all capabilities to be available to all
>> our vcpus. One example for that is the OSI interface, implemented
>> in the next patch.
>> 
>> In order to have a generic mechanism in how to enable capabilities
>> individually, this patch introduces a new ioctl that can be used
>> for this purpose. That way features we don't want in all guests or
>> userspace configurations can just not be enabled and we're good.
>> 
>> 
>> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
>> index d170cb4..6a19ab6 100644
>> --- a/Documentation/kvm/api.txt
>> +++ b/Documentation/kvm/api.txt
>> @@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
>>  See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
>>  yet and must be cleared on entry.
>> 
>> +4.34 KVM_ENABLE_CAP
>> +
>> +Capability: basic
>>   
> 
> Capability: basic means that the feature was present in 2.6.22.  Otherwise you need to specify the KVM_CAP_ that presents this feature.
> 
>> +Architectures: all
>> 
>>   
> 
> But it's implemented for ppc only (other arches will get ENOTTY).

That was the whole idea behind it. If it fails, it fails. Nothing we can do about it. If it succeeds - great.

> 
>> +Not all extensions are enabled by default. Using this ioctl the application
>> +can enable an extension, making it available to the guest.
>> +
>> +On systems that do not support this ioctl, it always fails. On systems that
>> +do support it, it only works for extensions that are supported for enablement.
>> +As of writing this the only enablement enabled extenion is KVM_CAP_PPC_OSI.
>>   
> 
> That needs to be documented.  It also needs to be discoverable separately - we can have a kernel with KVM_ENABLE_CAP but without KVM_CAP_PPC_OSI.
> 
> btw, KVM_CAP_PPC_OSI conflicts with the KVM_CAP_ namespace.  Please choose another namespace.

Well I figured it'd be slick to have capabilities get enabled or disabled. That's the whole idea behind making it generic. If I wanted a specific interface I'd go in and create an ioctl ENABLE_OSI_INTERFACE.

But this way the detection of whether a capability exists can be done using the existing CAP detection. It can then be enabled using ENABLE_CAP.

> Need to document the structure fields.
> 
>> 
>>  /*
>> @@ -696,6 +705,8 @@ struct kvm_clock_data {
>>  /* Available with KVM_CAP_DEBUGREGS */
>>  #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
>>  #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
>> +/* No need for CAP, because then it just always fails */
>> +#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
>>   
> The CAPs are needed so you can discover what you have without running guests.

The whole point of this extension was to make CAPs not always enabled, but to make it possible to enable them on demand.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
@ 2010-03-09 13:01         ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 13:01 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm


On 09.03.2010, at 13:56, Avi Kivity wrote:

> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>> Some times we don't want all capabilities to be available to all
>> our vcpus. One example for that is the OSI interface, implemented
>> in the next patch.
>> 
>> In order to have a generic mechanism in how to enable capabilities
>> individually, this patch introduces a new ioctl that can be used
>> for this purpose. That way features we don't want in all guests or
>> userspace configurations can just not be enabled and we're good.
>> 
>> 
>> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
>> index d170cb4..6a19ab6 100644
>> --- a/Documentation/kvm/api.txt
>> +++ b/Documentation/kvm/api.txt
>> @@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
>>  See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
>>  yet and must be cleared on entry.
>> 
>> +4.34 KVM_ENABLE_CAP
>> +
>> +Capability: basic
>>   
> 
> Capability: basic means that the feature was present in 2.6.22.  Otherwise you need to specify the KVM_CAP_ that presents this feature.
> 
>> +Architectures: all
>> 
>>   
> 
> But it's implemented for ppc only (other arches will get ENOTTY).

That was the whole idea behind it. If it fails, it fails. Nothing we can do about it. If it succeeds - great.

> 
>> +Not all extensions are enabled by default. Using this ioctl the application
>> +can enable an extension, making it available to the guest.
>> +
>> +On systems that do not support this ioctl, it always fails. On systems that
>> +do support it, it only works for extensions that are supported for enablement.
>> +As of writing this the only enablement enabled extenion is KVM_CAP_PPC_OSI.
>>   
> 
> That needs to be documented.  It also needs to be discoverable separately - we can have a kernel with KVM_ENABLE_CAP but without KVM_CAP_PPC_OSI.
> 
> btw, KVM_CAP_PPC_OSI conflicts with the KVM_CAP_ namespace.  Please choose another namespace.

Well I figured it'd be slick to have capabilities get enabled or disabled. That's the whole idea behind making it generic. If I wanted a specific interface I'd go in and create an ioctl ENABLE_OSI_INTERFACE.

But this way the detection of whether a capability exists can be done using the existing CAP detection. It can then be enabled using ENABLE_CAP.

> Need to document the structure fields.
> 
>> 
>>  /*
>> @@ -696,6 +705,8 @@ struct kvm_clock_data {
>>  /* Available with KVM_CAP_DEBUGREGS */
>>  #define KVM_GET_DEBUGREGS         _IOR(KVMIO,  0xa1, struct kvm_debugregs)
>>  #define KVM_SET_DEBUGREGS         _IOW(KVMIO,  0xa2, struct kvm_debugregs)
>> +/* No need for CAP, because then it just always fails */
>> +#define KVM_ENABLE_CAP            _IOW(KVMIO,  0xa3, struct kvm_enable_cap)
>>   
> The CAPs are needed so you can discover what you have without running guests.

The whole point of this extension was to make CAPs not always enabled, but to make it possible to enable them on demand.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
  2010-03-09 13:00       ` Avi Kivity
@ 2010-03-09 13:04         ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 13:04 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm


On 09.03.2010, at 14:00, Avi Kivity wrote:

> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>> MOL uses its own hypercall interface to call back into userspace when
>> the guest wants to do something.
>> 
>> So let's implement that as an exit reason, specify it with a CAP and
>> only really use it when userspace wants us to.
>> 
>> The only user of it so far is MOL.
>> 
>> Signed-off-by: Alexander Graf<agraf@suse.de>
>> 
>> ---
>> 
>> v1 ->  v2:
>> 
>>   - Add documentation for OSI exit struct
>> ---
>>  Documentation/kvm/api.txt             |   13 +++++++++++++
>>  arch/powerpc/include/asm/kvm_book3s.h |    5 +++++
>>  arch/powerpc/include/asm/kvm_host.h   |    2 ++
>>  arch/powerpc/kvm/book3s.c             |   24 ++++++++++++++++++------
>>  arch/powerpc/kvm/powerpc.c            |   12 ++++++++++++
>>  include/linux/kvm.h                   |    6 ++++++
>>  6 files changed, 56 insertions(+), 6 deletions(-)
>> 
>> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
>> index 6a19ab6..b2129e8 100644
>> --- a/Documentation/kvm/api.txt
>> +++ b/Documentation/kvm/api.txt
>> @@ -932,6 +932,19 @@ s390 specific.
>> 
>>  powerpc specific.
>> 
>> +		/* KVM_EXIT_OSI */
>> +		struct {
>> +			__u64 gprs[32];
>> +		} osi;
>> +
>> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
>> +hypercalls and exit with this exit struct that contains all the guest gprs.
>> +
>> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
>> +Userspace can now handle the hypercall and when it's done modify the gprs as
>> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
>> +in this struct.
>> +
>>   
> 
> That's migration unsafe.  There may not be next guest entry on this host.

It's as unsafe as MMIO then.

> Is using KVM_[GS]ET_REGS problematic for some reason?

It's two additional ioctls for no good reason. We know the interface, so we can model towards it.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line
       [not found]           ` <954C5195-A8E4-4CA5-8D5E-AA21E2E21C5B-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-09 13:05               ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:05 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/09/2010 02:54 PM, Alexander Graf wrote:
> On 09.03.2010, at 13:50, Avi Kivity wrote:
>
>    
>> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>>      
>>> Userspace can tell us that it wants to trigger an interrupt. But
>>> so far it can't tell us that it wants to stop triggering one.
>>>
>>> So let's interpret the parameter to the ioctl that we have anyway
>>> to tell us if we want to raise or lower the interrupt line.
>>>
>>>        
>> I asked for a KVM_CAP_ for this.  What was the conclusion of that thread?
>>      
> Uh - did we come to one?
>
> The last thing you said about it was:
>
>    
>> Having individual capabilities makes backporting a lot easier (otherwise you have to backport the whole thing).  If the changes are logically separate, I prefer 500 separate capabilities.
>>
>> However, for a platform bringup, it's okay to have just one capability, assuming none of the changes are applicable to other platforms.
>>      
> So I assumed it'd be ok to not have one. If you like I can send an additional patch adding the CAP.
>
>    

Well, what's the capability for this patchset?

Things like "if you have KVM_CAP_OSI you can assume you have 
KVM_INTERRUPT_LOWER" don't work for me.  A platform cap would be called 
KVM_CAP_MOL and would explicitly document everything it covers.

And it commits you to not deprecating things individually.  Really, 
individual caps are better.

-- 
error compiling committee.c: too many arguments to function
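
To make the mechanism being discussed concrete: with the patch applied,
raising and lowering the line from userspace reuses the existing
KVM_INTERRUPT vcpu ioctl, roughly as sketched below. The KVM_INTERRUPT_SET /
KVM_INTERRUPT_UNSET values are assumed to be the ones this patch introduces
for PPC; the exact names are whatever the patch finally defines.

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_irq_line(int vcpu_fd, int raise)
{
        /* One ioctl, with the parameter deciding raise vs. lower. */
        struct kvm_interrupt irq = {
                .irq = raise ? KVM_INTERRUPT_SET : KVM_INTERRUPT_UNSET,
        };

        return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
}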

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu
  2010-03-09 13:01         ` Alexander Graf
@ 2010-03-09 13:09           ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:09 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/09/2010 03:01 PM, Alexander Graf wrote:
> On 09.03.2010, at 13:56, Avi Kivity wrote:
>
>    
>> On 03/08/2010 08:03 PM, Alexander Graf wrote:
>>      
>>> Sometimes we don't want all capabilities to be available to all
>>> our vcpus. One example for that is the OSI interface, implemented
>>> in the next patch.
>>>
>>> In order to have a generic mechanism for enabling capabilities
>>> individually, this patch introduces a new ioctl that can be used
>>> for this purpose. That way features we don't want in all guests or
>>> userspace configurations can just not be enabled and we're good.
>>>
>>>
>>> diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt
>>> index d170cb4..6a19ab6 100644
>>> --- a/Documentation/kvm/api.txt
>>> +++ b/Documentation/kvm/api.txt
>>> @@ -749,6 +749,21 @@ Writes debug registers into the vcpu.
>>>   See KVM_GET_DEBUGREGS for the data structure. The flags field is unused
>>>   yet and must be cleared on entry.
>>>
>>> +4.34 KVM_ENABLE_CAP
>>> +
>>> +Capability: basic
>>>
>>>        
>> Capability: basic means that the feature was present in 2.6.22.  Otherwise you need to specify the KVM_CAP_ that presents this feature.
>>
>>      
>>> +Architectures: all
>>>
>>>
>>>        
>> But it's implemented for ppc only (other arches will get ENOTTY).
>>      
> That was the whole idea behind it. if it fails it fails. Nothing we can do about it. If it succeeds - great.
>    

If KVM_CAP_ENABLE_CAP is present, it means the KVM_ENABLE_CAP ioctl will 
not return ENOTTY (it may return EINVAL if wrong values are present).

ENOTTY means not implemented.  'Architectures: all' means implemented.

>>> +Not all extensions are enabled by default. Using this ioctl the application
>>> +can enable an extension, making it available to the guest.
>>> +
>>> +On systems that do not support this ioctl, it always fails. On systems that
>>> +do support it, it only works for extensions that are supported for enablement.
>>> +As of writing this, the only extension that can be enabled is KVM_CAP_PPC_OSI.
>>>
>>>        
>> That needs to be documented.  It also needs to be discoverable separately - we can have a kernel with KVM_ENABLE_CAP but without KVM_CAP_PPC_OSI.
>>
>> btw, KVM_CAP_PPC_OSI conflicts with the KVM_CAP_ namespace.  Please choose another namespace.
>>      
> Well I figured it'd be slick to have capabilities get enabled or disabled. That's the whole idea behind making it generic. If I wanted a specific interface I'd go in and create an ioctl ENABLE_OSI_INTERFACE.
>    

Ah, I see.  Well, that makes sense.  Please document it.

> But this way, detecting whether a capability exists can be done using the existing CAP detection. It can then be enabled using ENABLE_CAP.
>    

Okay, I agree.

-- 
error compiling committee.c: too many arguments to function
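
The flow agreed on above, sketched from the userspace side (field names
follow the struct kvm_enable_cap proposed in the patch, so treat them as
assumptions until the final version lands):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int enable_osi(int sys_fd, int vcpu_fd)
{
        struct kvm_enable_cap cap = { .cap = KVM_CAP_PPC_OSI };

        /* Detection uses the existing capability mechanism ... */
        if (ioctl(sys_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_OSI) <= 0)
                return -1;

        /* ... and the new ioctl switches the feature on for this vcpu only. */
        return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
}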


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
  2010-03-09 13:04         ` Alexander Graf
@ 2010-03-09 13:11           ` Avi Kivity
  -1 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:11 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc, kvm

On 03/09/2010 03:04 PM, Alexander Graf wrote:
>
>>> +		/* KVM_EXIT_OSI */
>>> +		struct {
>>> +			__u64 gprs[32];
>>> +		} osi;
>>> +
>>> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
>>> +hypercalls and exit with this exit struct that contains all the guest gprs.
>>> +
>>> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
>>> +Userspace can now handle the hypercall and when it's done modify the gprs as
>>> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
>>> +in this struct.
>>> +
>>>
>>>        
>> That's migration unsafe.  There may not be next guest entry on this host.
>>      
> It's as unsafe as MMIO then.
>
>    

From api.txt:

> NOTE: For KVM_EXIT_IO and KVM_EXIT_MMIO, the corresponding operations
> are complete (and guest state is consistent) only after userspace has
> re-entered the kernel with KVM_RUN.  The kernel side will first finish
> incomplete operations and then check for pending signals.  Userspace
> can re-enter the guest with an unmasked signal pending to complete
> pending operations.


>> Is using KVM_[GS]ET_REGS problematic for some reason?
>>      
> It's two additional ioctls for no good reason. We know the interface, so we can model towards it.
>    

But we need to be migration safe.  If the interface is not heavily used, 
let's not add complications.

-- 
error compiling committee.c: too many arguments to function
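
For comparison, the KVM_[GS]ET_REGS variant asked about above would look
roughly like this per hypercall (sketch only; handle_osi_call() is a
hypothetical helper, and gpr[] is assumed to be the GPR array in PPC's
struct kvm_regs):

#include <linux/kvm.h>
#include <sys/ioctl.h>

extern void handle_osi_call(__u64 *gprs);       /* hypothetical helper */

static int handle_osi_via_regs(int vcpu_fd)
{
        struct kvm_regs regs;

        /* Fetch all registers, patch the GPRs, write everything back. */
        if (ioctl(vcpu_fd, KVM_GET_REGS, &regs) < 0)
                return -1;
        handle_osi_call(regs.gpr);
        return ioctl(vcpu_fd, KVM_SET_REGS, &regs);
}

That is the "two additional ioctls" per hypercall referred to above, versus
patching run->osi.gprs in place.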


^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
  2010-03-09 13:11           ` Avi Kivity
@ 2010-03-09 13:12             ` Alexander Graf
  -1 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 13:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc, kvm


On 09.03.2010, at 14:11, Avi Kivity wrote:

> On 03/09/2010 03:04 PM, Alexander Graf wrote:
>> 
>>>> +		/* KVM_EXIT_OSI */
>>>> +		struct {
>>>> +			__u64 gprs[32];
>>>> +		} osi;
>>>> +
>>>> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
>>>> +hypercalls and exit with this exit struct that contains all the guest gprs.
>>>> +
>>>> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
>>>> +Userspace can now handle the hypercall and when it's done modify the gprs as
>>>> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
>>>> +in this struct.
>>>> +
>>>> 
>>>>       
>>> That's migration unsafe.  There may not be next guest entry on this host.
>>>     
>> It's as unsafe as MMIO then.
>> 
>>   
> 
> From api.txt:
> 
>> NOTE: For KVM_EXIT_IO and KVM_EXIT_MMIO, the corresponding operations
>> are complete (and guest state is consistent) only after userspace has
>> re-entered the kernel with KVM_RUN.  The kernel side will first finish
>> incomplete operations and then check for pending signals.  Userspace
>> can re-enter the guest with an unmasked signal pending to complete
>> pending operations.
> 

Alright - so I add KVM_EXIT_OSI there and be good? :)

> 
>>> Is using KVM_[GS]ET_REGS problematic for some reason?
>>>     
>> It's two additional ioctls for no good reason. We know the interface, so we can model towards it.
>>   
> 
> But we need to be migration safe.  If the interface is not heavily used, let's not add complications.

MOL uses OSI calls instead of MMIO. So yes, it is heavily used.


Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
       [not found]             ` <3D0D6963-FEC8-4A53-ACCE-570BEAF3721B-l3A5Bk7waGM@public.gmane.org>
@ 2010-03-09 13:19                 ` Avi Kivity
  0 siblings, 0 replies; 140+ messages in thread
From: Avi Kivity @ 2010-03-09 13:19 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA

On 03/09/2010 03:12 PM, Alexander Graf wrote:
> On 09.03.2010, at 14:11, Avi Kivity wrote:
>
>    
>> On 03/09/2010 03:04 PM, Alexander Graf wrote:
>>      
>>>        
>>>>> +		/* KVM_EXIT_OSI */
>>>>> +		struct {
>>>>> +			__u64 gprs[32];
>>>>> +		} osi;
>>>>> +
>>>>> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
>>>>> +hypercalls and exit with this exit struct that contains all the guest gprs.
>>>>> +
>>>>> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
>>>>> +Userspace can now handle the hypercall and when it's done modify the gprs as
>>>>> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
>>>>> +in this struct.
>>>>> +
>>>>>
>>>>>
>>>>>            
>>>> That's migration unsafe.  There may not be next guest entry on this host.
>>>>
>>>>          
>>> It's as unsafe as MMIO then.
>>>
>>>
>>>        
>>  From api.txt:
>>
>>      
>>> NOTE: For KVM_EXIT_IO and KVM_EXIT_MMIO, the corresponding operations
>>> are complete (and guest state is consistent) only after userspace has
>>> re-entered the kernel with KVM_RUN.  The kernel side will first finish
>>> incomplete operations and then check for pending signals.  Userspace
>>> can re-enter the guest with an unmasked signal pending to complete
>>> pending operations.
>>>        
>>      
> Alright - so I add KVM_EXIT_OSI there and be good? :)
>    

Sure, just verify that the note holds for that case too.

>> But we need to be migration safe.  If the interface is not heavily used, let's not add complications.
>>      
> MOL uses OSI calls instead of MMIO. So yes, it is heavily used.
>
>    

Ok.

-- 
error compiling committee.c: too many arguments to function
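
The documentation follow-up being agreed on here would presumably amount to
extending the quoted NOTE, along these lines (illustrative wording only, not
the author's final text):

NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO and KVM_EXIT_OSI, the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN.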

^ permalink raw reply	[flat|nested] 140+ messages in thread

* Re: [PATCH 14/15] KVM: PPC: Add OSI hypercall interface
       [not found]                 ` <4B964ADE.5030200-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2010-03-09 13:20                     ` Alexander Graf
  0 siblings, 0 replies; 140+ messages in thread
From: Alexander Graf @ 2010-03-09 13:20 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-ppc-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA


On 09.03.2010, at 14:19, Avi Kivity wrote:

> On 03/09/2010 03:12 PM, Alexander Graf wrote:
>> On 09.03.2010, at 14:11, Avi Kivity wrote:
>> 
>>   
>>> On 03/09/2010 03:04 PM, Alexander Graf wrote:
>>>     
>>>>       
>>>>>> +		/* KVM_EXIT_OSI */
>>>>>> +		struct {
>>>>>> +			__u64 gprs[32];
>>>>>> +		} osi;
>>>>>> +
>>>>>> +MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
>>>>>> +hypercalls and exit with this exit struct that contains all the guest gprs.
>>>>>> +
>>>>>> +If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
>>>>>> +Userspace can now handle the hypercall and when it's done modify the gprs as
>>>>>> +necessary. Upon guest entry all guest GPRs will then be replaced by the values
>>>>>> +in this struct.
>>>>>> +
>>>>>> 
>>>>>> 
>>>>>>           
>>>>> That's migration unsafe.  There may not be next guest entry on this host.
>>>>> 
>>>>>         
>>>> It's as unsafe as MMIO then.
>>>> 
>>>> 
>>>>       
>>> From api.txt:
>>> 
>>>     
>>>> NOTE: For KVM_EXIT_IO and KVM_EXIT_MMIO, the corresponding operations
>>>> are complete (and guest state is consistent) only after userspace has
>>>> re-entered the kernel with KVM_RUN.  The kernel side will first finish
>>>> incomplete operations and then check for pending signals.  Userspace
>>>> can re-enter the guest with an unmasked signal pending to complete
>>>> pending operations.
>>>>       
>>>     
>> Alright - so I add KVM_EXIT_OSI there and be good? :)
>>   
> 
> Sure, just verify that the note holds for that case too.

The handling of the hypercall write-back is in the same code region as the MMIO one, so whatever applies to MMIO entries applies to OSI entries too.

Alex

^ permalink raw reply	[flat|nested] 140+ messages in thread

end of thread, other threads:[~2010-03-09 13:20 UTC | newest]

Thread overview: 140+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-03-08 18:03 [PATCH 00/15] KVM: PPC: MOL bringup patches Alexander Graf
2010-03-08 18:03 ` Alexander Graf
2010-03-08 18:03 ` [PATCH 02/15] KVM: PPC: Allow userspace to unset the IRQ line Alexander Graf
2010-03-08 18:03   ` Alexander Graf
     [not found]   ` <1268071402-27112-3-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-09 12:50     ` Avi Kivity
2010-03-09 12:50       ` Avi Kivity
     [not found]       ` <4B964412.8030708-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-09 12:54         ` Alexander Graf
2010-03-09 12:54           ` Alexander Graf
     [not found]           ` <954C5195-A8E4-4CA5-8D5E-AA21E2E21C5B-l3A5Bk7waGM@public.gmane.org>
2010-03-09 13:05             ` Avi Kivity
2010-03-09 13:05               ` Avi Kivity
2010-03-08 18:03 ` [PATCH 04/15] KVM: PPC: Book3S_32 guest MMU fixes Alexander Graf
2010-03-08 18:03   ` Alexander Graf
2010-03-08 18:03 ` [PATCH 06/15] KVM: PPC: Don't reload FPU with invalid values Alexander Graf
2010-03-08 18:03   ` Alexander Graf
2010-03-08 18:03 ` [PATCH 09/15] KVM: PPC: Implement BAT reads Alexander Graf
2010-03-08 18:03   ` Alexander Graf
2010-03-08 18:03 ` [PATCH 10/15] KVM: PPC: Make XER load 32 bit Alexander Graf
2010-03-08 18:03   ` Alexander Graf
2010-03-08 18:03 ` [PATCH 11/15] KVM: PPC: Implement emulation for lbzux and lhax Alexander Graf
2010-03-08 18:03   ` Alexander Graf
     [not found] ` <1268071402-27112-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-08 18:03   ` [PATCH 01/15] KVM: PPC: Ensure split mode works Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 03/15] KVM: PPC: Make DSISR 32 bits wide Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 05/15] KVM: PPC: Split instruction reading out Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 07/15] KVM: PPC: Load VCPU for register fetching Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 08/15] KVM: PPC: Implement mfsr emulation Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 12/15] KVM: PPC: Implement alignment interrupt Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 13/15] KVM: Add support for enabling capabilities per-vcpu Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-09 12:56     ` Avi Kivity
2010-03-09 12:56       ` Avi Kivity
2010-03-09 13:01       ` Alexander Graf
2010-03-09 13:01         ` Alexander Graf
2010-03-09 13:09         ` Avi Kivity
2010-03-09 13:09           ` Avi Kivity
2010-03-08 18:03   ` [PATCH 14/15] KVM: PPC: Add OSI hypercall interface Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-09 13:00     ` Avi Kivity
2010-03-09 13:00       ` Avi Kivity
2010-03-09 13:04       ` Alexander Graf
2010-03-09 13:04         ` Alexander Graf
2010-03-09 13:11         ` Avi Kivity
2010-03-09 13:11           ` Avi Kivity
2010-03-09 13:12           ` Alexander Graf
2010-03-09 13:12             ` Alexander Graf
     [not found]             ` <3D0D6963-FEC8-4A53-ACCE-570BEAF3721B-l3A5Bk7waGM@public.gmane.org>
2010-03-09 13:19               ` Avi Kivity
2010-03-09 13:19                 ` Avi Kivity
     [not found]                 ` <4B964ADE.5030200-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-09 13:20                   ` Alexander Graf
2010-03-09 13:20                     ` Alexander Graf
2010-03-08 18:03   ` [PATCH 15/15] KVM: PPC: Make build work without CONFIG_VSX/ALTIVEC Alexander Graf
2010-03-08 18:03     ` Alexander Graf
2010-03-08 18:06 ` [PATCH 00/15] KVM: PPC: MOL bringup patches Alexander Graf
2010-03-08 18:06   ` Alexander Graf
  -- strict thread matches above, loose matches on Subject: below --
2010-03-05 16:50 Alexander Graf
2010-03-05 16:50 ` Alexander Graf
2010-03-05 16:50 ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Alexander Graf
2010-03-05 16:50   ` Alexander Graf
     [not found]   ` <1267807842-3751-2-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-08 13:40     ` Avi Kivity
2010-03-08 13:40       ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
     [not found]       ` <4B94FE41.1040904-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 13:44         ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Alexander Graf
2010-03-08 13:44           ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Alexander Graf
     [not found]           ` <4B94FF56.9060200-l3A5Bk7waGM@public.gmane.org>
2010-03-08 13:50             ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Avi Kivity
2010-03-08 13:50               ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
2010-03-08 13:53               ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Alexander Graf
2010-03-08 13:53                 ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Alexander Graf
     [not found]                 ` <4B950174.7010709-l3A5Bk7waGM@public.gmane.org>
2010-03-08 14:06                   ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Avi Kivity
2010-03-08 14:06                     ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
     [not found]                     ` <4B950475.1020106-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 14:14                       ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Alexander Graf
2010-03-08 14:14                         ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Alexander Graf
     [not found]                         ` <4B95062D.2020908-l3A5Bk7waGM@public.gmane.org>
2010-03-08 14:16                           ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Avi Kivity
2010-03-08 14:16                             ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
     [not found]                             ` <4B9506C5.30606-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 14:20                               ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Alexander Graf
2010-03-08 14:20                                 ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Alexander Graf
2010-03-08 14:23                                 ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always work Avi Kivity
2010-03-08 14:23                                   ` [PATCH 01/15] KVM: PPC: Make register read/write wrappers always Avi Kivity
2010-03-05 16:50 ` [PATCH 03/15] KVM: PPC: Allow userspace to unset the IRQ line Alexander Graf
2010-03-05 16:50   ` Alexander Graf
     [not found]   ` <1267807842-3751-4-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-08 13:44     ` Avi Kivity
2010-03-08 13:44       ` Avi Kivity
     [not found]       ` <4B94FF27.5010800-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 13:48         ` Alexander Graf
2010-03-08 13:48           ` Alexander Graf
2010-03-08 13:52           ` Avi Kivity
2010-03-08 13:52             ` Avi Kivity
2010-03-08 13:55             ` Alexander Graf
2010-03-08 13:55               ` Alexander Graf
2010-03-08 13:58               ` Avi Kivity
2010-03-08 13:58                 ` Avi Kivity
     [not found]                 ` <4B95029C.6000800-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 14:01                   ` Alexander Graf
2010-03-08 14:01                     ` Alexander Graf
2010-03-08 14:09                     ` Avi Kivity
2010-03-08 14:09                       ` Avi Kivity
     [not found] ` <1267807842-3751-1-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-05 16:50   ` [PATCH 02/15] KVM: PPC: Ensure split mode works Alexander Graf
2010-03-05 16:50     ` Alexander Graf
2010-03-05 16:50   ` [PATCH 04/15] KVM: PPC: Make DSISR 32 bits wide Alexander Graf
2010-03-05 16:50     ` Alexander Graf
2010-03-05 16:50   ` [PATCH 05/15] KVM: PPC: Book3S_32 guest MMU fixes Alexander Graf
2010-03-05 16:50     ` Alexander Graf
2010-03-05 16:50   ` [PATCH 10/15] KVM: PPC: Implement BAT reads Alexander Graf
2010-03-05 16:50     ` Alexander Graf
2010-03-05 16:50   ` [PATCH 13/15] KVM: PPC: Implement alignment interrupt Alexander Graf
2010-03-05 16:50     ` Alexander Graf
2010-03-05 16:50 ` [PATCH 06/15] KVM: PPC: Split instruction reading out Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 07/15] KVM: PPC: Don't reload FPU with invalid values Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 08/15] KVM: PPC: Load VCPU for register fetching Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 09/15] KVM: PPC: Implement mfsr emulation Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 11/15] KVM: PPC: Make XER load 32 bit Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 12/15] KVM: PPC: Implement emulation for lbzux and lhax Alexander Graf
2010-03-05 16:50   ` Alexander Graf
2010-03-05 16:50 ` [PATCH 14/15] KVM: Add support for enabling capabilities per-vcpu Alexander Graf
2010-03-05 16:50   ` Alexander Graf
     [not found]   ` <1267807842-3751-15-git-send-email-agraf-l3A5Bk7waGM@public.gmane.org>
2010-03-08 13:49     ` Avi Kivity
2010-03-08 13:49       ` Avi Kivity
     [not found]       ` <4B950057.1090204-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 13:51         ` Alexander Graf
2010-03-08 13:51           ` Alexander Graf
     [not found]           ` <4B9500D1.2060008-l3A5Bk7waGM@public.gmane.org>
2010-03-08 13:52             ` Avi Kivity
2010-03-08 13:52               ` Avi Kivity
     [not found]               ` <4B95012B.3030505-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 13:56                 ` Alexander Graf
2010-03-08 13:56                   ` Alexander Graf
2010-03-08 14:02                   ` Avi Kivity
2010-03-08 14:02                     ` Avi Kivity
2010-03-08 14:10                     ` Alexander Graf
2010-03-08 14:10                       ` Alexander Graf
     [not found]                       ` <4B950562.6050509-l3A5Bk7waGM@public.gmane.org>
2010-03-08 14:14                         ` Avi Kivity
2010-03-08 14:14                           ` Avi Kivity
     [not found]                           ` <4B950656.4010307-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2010-03-08 14:18                             ` Alexander Graf
2010-03-08 14:18                               ` Alexander Graf
2010-03-08 14:21                               ` Avi Kivity
2010-03-08 14:21                                 ` Avi Kivity
2010-03-05 16:50 ` [PATCH 15/15] KVM: PPC: Add OSI hypercall interface Alexander Graf
2010-03-05 16:50   ` Alexander Graf
