* [PULL 3.16 0/6] 3.16 patch queue 2014-07-08
@ 2014-07-08 10:04 ` Alexander Graf
  0 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini

Hi Paolo / Marcelo,

This is my current patch queue for 3.16.  Please pull.

Alex


The following changes since commit 5c02c392cd2320e8d612376d6b72b6548a680923:

  Merge tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux (2014-06-11 21:10:33 -0700)

are available in the git repository at:


  git://github.com/agraf/linux-2.6.git tags/signed-for-3.16

for you to fetch changes up to 19a44ecff52fd67d77d49fb4d43b289c53cdc392:

  KVM: PPC: RTAS: Do byte swaps explicitly (2014-07-07 23:17:20 +0200)

----------------------------------------------------------------
Patch queue for 3.16 - 2014-07-08

A few bug fixes to make 3.16 work well with KVM on PowerPC:

  - Fix ppc32 module builds
  - Fix Little Endian hosts
  - Fix Book3S HV HPTE lookup with huge pages in guest
  - Fix BookE lock leak

----------------------------------------------------------------
Alexander Graf (3):
      PPC: Add _GLOBAL_TOC for 32bit
      KVM: PPC: Book3S PR: Fix ABIv2 on LE
      KVM: PPC: RTAS: Do byte swaps explicitly

Aneesh Kumar K.V (1):
      KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value

Anton Blanchard (1):
      KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()

Mihai Caraman (1):
      KVM: PPC: Book3E: Unlock mmu_lock when setting caching attribute

 arch/powerpc/include/asm/kvm_book3s_64.h | 19 +++++++++-
 arch/powerpc/include/asm/ppc_asm.h       |  2 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |  2 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |  7 +---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  |  2 +-
 arch/powerpc/kvm/book3s_interrupts.S     |  4 ++
 arch/powerpc/kvm/book3s_rmhandlers.S     |  6 ++-
 arch/powerpc/kvm/book3s_rtas.c           | 65 +++++++++-----------------------
 arch/powerpc/kvm/e500_mmu_host.c         |  3 +-
 9 files changed, 52 insertions(+), 58 deletions(-)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PULL 1/6] KVM: PPC: Book3E: Unlock mmu_lock when setting caching attribute
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini, Mihai Caraman

From: Mihai Caraman <mihai.caraman@freescale.com>

The patch 08c9a188d0d0fc0f0c5e17d89a06bb59c493110f
  	kvm: powerpc: use caching attributes as per linux pte
does not handle the error case properly, leaving mmu_lock held. The held
lock then triggers an RCU stall from the kvmppc_e500_emul_tlbwe() caller.

In case of an error, go to the out label instead so the lock is released.

Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/e500_mmu_host.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index dd2cc03..86903d3 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -473,7 +473,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 		if (printk_ratelimit())
 			pr_err("%s: pte not present: gfn %lx, pfn %lx\n",
 				__func__, (long)gfn, pfn);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out;
 	}
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
 
-- 
1.8.1.4
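
For readers less familiar with this error-handling convention, here is a
minimal userspace sketch of the pattern the fix applies. The function and
lock names are hypothetical stand-ins (a pthread mutex in place of the
kernel's kvm->mmu_lock spinlock), not code from the patch itself:

/*
 * Minimal sketch of the "goto out" unlock pattern (hypothetical names,
 * a userspace pthread mutex standing in for the kernel spinlock).
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

static int shadow_map(int pte_present)
{
	int ret = 0;

	pthread_mutex_lock(&map_lock);

	if (!pte_present) {
		fprintf(stderr, "pte not present\n");
		ret = -22;	/* -EINVAL; returning here directly would leak map_lock */
		goto out;
	}

	/* ... mapping work done while holding the lock ... */

out:
	pthread_mutex_unlock(&map_lock);	/* single exit path drops the lock */
	return ret;
}

int main(void)
{
	printf("ok path: %d, error path: %d\n", shadow_map(1), shadow_map(0));
	return 0;
}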


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PULL 2/6] KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

With guests supporting multiple page sizes per segment (MPSS),
hpte_page_size returns the actual page size used. Add a new function to
return the base page size and use that to compare against the page size
calculated from the SLB. Without this patch, an HPTE lookup can fail
because we compare against the wrong page size in kvmppc_hv_find_lock_hpte.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 19 +++++++++++++++++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |  2 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |  7 ++-----
 3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index fddb72b..d645428 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -198,8 +198,10 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
 	return rb;
 }
 
-static inline unsigned long hpte_page_size(unsigned long h, unsigned long l)
+static inline unsigned long __hpte_page_size(unsigned long h, unsigned long l,
+					     bool is_base_size)
 {
+
 	int size, a_psize;
 	/* Look at the 8 bit LP value */
 	unsigned int lp = (l >> LP_SHIFT) & ((1 << LP_BITS) - 1);
@@ -214,14 +216,27 @@ static inline unsigned long hpte_page_size(unsigned long h, unsigned long l)
 				continue;
 
 			a_psize = __hpte_actual_psize(lp, size);
-			if (a_psize != -1)
+			if (a_psize != -1) {
+				if (is_base_size)
+					return 1ul << mmu_psize_defs[size].shift;
 				return 1ul << mmu_psize_defs[a_psize].shift;
+			}
 		}
 
 	}
 	return 0;
 }
 
+static inline unsigned long hpte_page_size(unsigned long h, unsigned long l)
+{
+	return __hpte_page_size(h, l, 0);
+}
+
+static inline unsigned long hpte_base_page_size(unsigned long h, unsigned long l)
+{
+	return __hpte_page_size(h, l, 1);
+}
+
 static inline unsigned long hpte_rpn(unsigned long ptel, unsigned long psize)
 {
 	return ((ptel & HPTE_R_RPN) & ~(psize - 1)) >> PAGE_SHIFT;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8056107..68468d6 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -1562,7 +1562,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
 				goto out;
 			}
 			if (!rma_setup && is_vrma_hpte(v)) {
-				unsigned long psize = hpte_page_size(v, r);
+				unsigned long psize = hpte_base_page_size(v, r);
 				unsigned long senc = slb_pgsize_encoding(psize);
 				unsigned long lpcr;
 
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 6e62243..5a24d3c 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -814,13 +814,10 @@ long kvmppc_hv_find_lock_hpte(struct kvm *kvm, gva_t eaddr, unsigned long slb_v,
 			r = hpte[i+1];
 
 			/*
-			 * Check the HPTE again, including large page size
-			 * Since we don't currently allow any MPSS (mixed
-			 * page-size segment) page sizes, it is sufficient
-			 * to check against the actual page size.
+			 * Check the HPTE again, including base page size
 			 */
 			if ((v & valid) && (v & mask) == val &&
-			    hpte_page_size(v, r) == (1ul << pshift))
+			    hpte_base_page_size(v, r) == (1ul << pshift))
 				/* Return with the HPTE still locked */
 				return (hash << 3) + (i >> 1);
 
-- 
1.8.1.4
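
As an aside, the shape of this change is a common one: fold the existing
helper into a flag-taking worker and add thin wrappers for each caller. The
sketch below shows that shape with a made-up page-size table; it illustrates
the pattern only and is not the kernel's mmu_psize_defs logic:

/*
 * Hedged sketch of the wrapper pattern: one helper takes an is_base_size
 * flag, two inline-style wrappers keep the call sites readable.
 */
#include <stdbool.h>
#include <stdio.h>

struct psize_def { unsigned int shift; };

/* hypothetical table: index 0 = 4K base segment size, index 1 = 16M huge page */
static const struct psize_def psize_defs[] = { { 12 }, { 24 } };

static unsigned long __page_size(int seg_idx, int actual_idx, bool is_base_size)
{
	if (is_base_size)
		return 1ul << psize_defs[seg_idx].shift;	/* segment's base page size */
	return 1ul << psize_defs[actual_idx].shift;		/* page size actually used */
}

static unsigned long page_size(int seg, int act)      { return __page_size(seg, act, false); }
static unsigned long base_page_size(int seg, int act) { return __page_size(seg, act, true); }

int main(void)
{
	/* With MPSS the two can differ; the SLB comparison must use the base size. */
	printf("actual=%lu base=%lu\n", page_size(0, 1), base_page_size(0, 1));
	return 0;
}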

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PULL 3/6] PPC: Add _GLOBAL_TOC for 32bit
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini

Commit ac5a8ee8 started using _GLOBAL_TOC in ppc32 code. Unfortunately, it is
only defined for 64-bit targets. Define it for ppc32 as well, fixing the build
breakage that commit introduced.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/include/asm/ppc_asm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 9ea266e..7e46125 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -277,6 +277,8 @@ n:
 	.globl n;	\
 n:
 
+#define _GLOBAL_TOC(name) _GLOBAL(name)
+
 #define _KPROBE(n)	\
 	.section ".kprobes.text","a";	\
 	.globl	n;	\
-- 
1.8.1.4
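
The general shape of the fix - give the 32-bit configuration a fallback
definition so code shared with 64-bit keeps building - can be sketched in
plain C. The macro and the config switch below are hypothetical and only
illustrate the idea; they are not the kernel's ppc_asm.h definitions:

/*
 * Hypothetical sketch: one configuration needs extra work at function
 * entry, the other aliases the macro to the plain definition so shared
 * code builds on both (the spirit of _GLOBAL_TOC vs _GLOBAL).
 */
#include <stdio.h>

#ifdef NEEDS_ENTRY_SETUP_EXAMPLE		/* hypothetical config switch */
# define DEFINE_ENTRY(name)						\
	static void name##_body(void);					\
	void name(void)							\
	{								\
		puts("global entry: set up TOC-like state first");	\
		name##_body();						\
	}								\
	static void name##_body(void)
#else						/* "32-bit": no extra setup needed */
# define DEFINE_ENTRY(name) void name(void)
#endif

DEFINE_ENTRY(entry_trampoline_example)
{
	puts("trampoline body");
}

int main(void)
{
	entry_trampoline_example();
	return 0;
}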

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PULL 4/6] KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini, Anton Blanchard

From: Anton Blanchard <anton@samba.org>

Both kvmppc_hv_entry_trampoline and kvmppc_entry_trampoline are
assembly functions that are exported to modules and also require
a valid r2.

As such, we need to use _GLOBAL_TOC to provide a global entry
point that establishes the TOC (r2).

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 +-
 arch/powerpc/kvm/book3s_rmhandlers.S    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 77356fd..8d9c5d2 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -48,7 +48,7 @@
  *
  * LR = return address to continue at after eventually re-enabling MMU
  */
-_GLOBAL(kvmppc_hv_entry_trampoline)
+_GLOBAL_TOC(kvmppc_hv_entry_trampoline)
 	mflr	r0
 	std	r0, PPC_LR_STKOFF(r1)
 	stdu	r1, -112(r1)
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 9eec675..4850a22 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -146,7 +146,7 @@ kvmppc_handler_skip_ins:
  * On entry, r4 contains the guest shadow MSR
  * MSR.EE has to be 0 when calling this function
  */
-_GLOBAL(kvmppc_entry_trampoline)
+_GLOBAL_TOC(kvmppc_entry_trampoline)
 	mfmsr	r5
 	LOAD_REG_ADDR(r7, kvmppc_handler_trampoline_enter)
 	toreal(r7)
-- 
1.8.1.4
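
A loose C analogy for why an externally reachable entry point must establish
its own r2: when a routine can be called from outside the current "module"
(below, via a function pointer), it cannot assume the caller prepared its
context pointer and has to set it up itself. Everything in the sketch is
hypothetical; it mimics the role of the TOC, not the PPC64 ABI:

/*
 * cur_toc stands in for r2: callers inside the same unit keep it set,
 * external callers may not, so the public entry establishes it first.
 */
#include <stdio.h>

struct toc { const char *module_name; };
static struct toc my_toc = { "book3s_hv" };

static const struct toc *cur_toc;		/* stands in for r2 */

static void entry_trampoline_body(void)
{
	printf("running with TOC for %s\n", cur_toc->module_name);
}

void entry_trampoline(void)			/* "global entry point" */
{
	cur_toc = &my_toc;			/* establish our own context first */
	entry_trampoline_body();
}

int main(void)
{
	void (*exported)(void) = entry_trampoline;	/* external caller, e.g. a module */
	cur_toc = NULL;					/* caller did not set it up */
	exported();
	return 0;
}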

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PULL 5/6] KVM: PPC: Book3S PR: Fix ABIv2 on LE
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini

We have now switched to ABIv2 on little-endian systems, which gets rid of
the dotted function names. Branch to the actual function names when we are
building for such a system.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_interrupts.S | 4 ++++
 arch/powerpc/kvm/book3s_rmhandlers.S | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index e2c29e3..d044b8b 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -25,7 +25,11 @@
 #include <asm/exception-64s.h>
 
 #if defined(CONFIG_PPC_BOOK3S_64)
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+#define FUNC(name) 		name
+#else
 #define FUNC(name) 		GLUE(.,name)
+#endif
 #define GET_SHADOW_VCPU(reg)    addi	reg, r13, PACA_SVCPU
 
 #elif defined(CONFIG_PPC_BOOK3S_32)
diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
index 4850a22..16c4d88 100644
--- a/arch/powerpc/kvm/book3s_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_rmhandlers.S
@@ -36,7 +36,11 @@
 
 #if defined(CONFIG_PPC_BOOK3S_64)
 
+#if defined(_CALL_ELF) && _CALL_ELF == 2
+#define FUNC(name) 		name
+#else
 #define FUNC(name) 		GLUE(.,name)
+#endif
 
 #elif defined(CONFIG_PPC_BOOK3S_32)
 
-- 
1.8.1.4
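
The compile-time test the patch relies on can be exercised on its own:
ELFv2 toolchains predefine _CALL_ELF as 2, in which case there is no dotted
".name" text symbol to branch to. The snippet below just prints which symbol
name would be used; FUNC_NAME is a made-up macro for illustration, not the
FUNC() macro from the patch:

/*
 * Illustration of the _CALL_ELF check: pick the undotted name on ABIv2,
 * the dotted ".name" entry otherwise.
 */
#include <stdio.h>

#if defined(_CALL_ELF) && _CALL_ELF == 2
# define FUNC_NAME(name) #name		/* ABIv2: no dotted symbols */
#else
# define FUNC_NAME(name) "." #name	/* ABIv1: branch to the dotted entry */
#endif

int main(void)
{
	printf("would branch to %s\n", FUNC_NAME(kvmppc_handler_trampoline_enter));
	return 0;
}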

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PULL 6/6] KVM: PPC: RTAS: Do byte swaps explicitly
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:04   ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:04 UTC (permalink / raw)
  To: kvm-ppc; +Cc: kvm, Paolo Bonzini

In commit b59d9d26b we introduced implicit byte swaps for RTAS calls.
Unfortunately we messed up and didn't swizzle the return values properly.

The old approach also wasn't sparse compatible - we were reading __be32
values as if they were native-endian on an LE system.

Let's just do all of the swizzling explicitly, with byte swaps right
where the values get used. That way we can at least catch bugs using sparse.

This patch fixes XICS RTAS emulation on little-endian hosts for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_rtas.c | 65 ++++++++++++------------------------------
 1 file changed, 18 insertions(+), 47 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c
index edb14ba..ef27fbd 100644
--- a/arch/powerpc/kvm/book3s_rtas.c
+++ b/arch/powerpc/kvm/book3s_rtas.c
@@ -23,20 +23,20 @@ static void kvm_rtas_set_xive(struct kvm_vcpu *vcpu, struct rtas_args *args)
 	u32 irq, server, priority;
 	int rc;
 
-	if (args->nargs != 3 || args->nret != 1) {
+	if (be32_to_cpu(args->nargs) != 3 || be32_to_cpu(args->nret) != 1) {
 		rc = -3;
 		goto out;
 	}
 
-	irq = args->args[0];
-	server = args->args[1];
-	priority = args->args[2];
+	irq = be32_to_cpu(args->args[0]);
+	server = be32_to_cpu(args->args[1]);
+	priority = be32_to_cpu(args->args[2]);
 
 	rc = kvmppc_xics_set_xive(vcpu->kvm, irq, server, priority);
 	if (rc)
 		rc = -3;
 out:
-	args->rets[0] = rc;
+	args->rets[0] = cpu_to_be32(rc);
 }
 
 static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args)
@@ -44,12 +44,12 @@ static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args)
 	u32 irq, server, priority;
 	int rc;
 
-	if (args->nargs != 1 || args->nret != 3) {
+	if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 3) {
 		rc = -3;
 		goto out;
 	}
 
-	irq = args->args[0];
+	irq = be32_to_cpu(args->args[0]);
 
 	server = priority = 0;
 	rc = kvmppc_xics_get_xive(vcpu->kvm, irq, &server, &priority);
@@ -58,10 +58,10 @@ static void kvm_rtas_get_xive(struct kvm_vcpu *vcpu, struct rtas_args *args)
 		goto out;
 	}
 
-	args->rets[1] = server;
-	args->rets[2] = priority;
+	args->rets[1] = cpu_to_be32(server);
+	args->rets[2] = cpu_to_be32(priority);
 out:
-	args->rets[0] = rc;
+	args->rets[0] = cpu_to_be32(rc);
 }
 
 static void kvm_rtas_int_off(struct kvm_vcpu *vcpu, struct rtas_args *args)
@@ -69,18 +69,18 @@ static void kvm_rtas_int_off(struct kvm_vcpu *vcpu, struct rtas_args *args)
 	u32 irq;
 	int rc;
 
-	if (args->nargs != 1 || args->nret != 1) {
+	if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 1) {
 		rc = -3;
 		goto out;
 	}
 
-	irq = args->args[0];
+	irq = be32_to_cpu(args->args[0]);
 
 	rc = kvmppc_xics_int_off(vcpu->kvm, irq);
 	if (rc)
 		rc = -3;
 out:
-	args->rets[0] = rc;
+	args->rets[0] = cpu_to_be32(rc);
 }
 
 static void kvm_rtas_int_on(struct kvm_vcpu *vcpu, struct rtas_args *args)
@@ -88,18 +88,18 @@ static void kvm_rtas_int_on(struct kvm_vcpu *vcpu, struct rtas_args *args)
 	u32 irq;
 	int rc;
 
-	if (args->nargs != 1 || args->nret != 1) {
+	if (be32_to_cpu(args->nargs) != 1 || be32_to_cpu(args->nret) != 1) {
 		rc = -3;
 		goto out;
 	}
 
-	irq = args->args[0];
+	irq = be32_to_cpu(args->args[0]);
 
 	rc = kvmppc_xics_int_on(vcpu->kvm, irq);
 	if (rc)
 		rc = -3;
 out:
-	args->rets[0] = rc;
+	args->rets[0] = cpu_to_be32(rc);
 }
 #endif /* CONFIG_KVM_XICS */
 
@@ -205,32 +205,6 @@ int kvm_vm_ioctl_rtas_define_token(struct kvm *kvm, void __user *argp)
 	return rc;
 }
 
-static void kvmppc_rtas_swap_endian_in(struct rtas_args *args)
-{
-#ifdef __LITTLE_ENDIAN__
-	int i;
-
-	args->token = be32_to_cpu(args->token);
-	args->nargs = be32_to_cpu(args->nargs);
-	args->nret = be32_to_cpu(args->nret);
-	for (i = 0; i < args->nargs; i++)
-		args->args[i] = be32_to_cpu(args->args[i]);
-#endif
-}
-
-static void kvmppc_rtas_swap_endian_out(struct rtas_args *args)
-{
-#ifdef __LITTLE_ENDIAN__
-	int i;
-
-	for (i = 0; i < args->nret; i++)
-		args->args[i] = cpu_to_be32(args->args[i]);
-	args->token = cpu_to_be32(args->token);
-	args->nargs = cpu_to_be32(args->nargs);
-	args->nret = cpu_to_be32(args->nret);
-#endif
-}
-
 int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 {
 	struct rtas_token_definition *d;
@@ -249,8 +223,6 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 	if (rc)
 		goto fail;
 
-	kvmppc_rtas_swap_endian_in(&args);
-
 	/*
 	 * args->rets is a pointer into args->args. Now that we've
 	 * copied args we need to fix it up to point into our copy,
@@ -258,13 +230,13 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 	 * value so we can restore it on the way out.
 	 */
 	orig_rets = args.rets;
-	args.rets = &args.args[args.nargs];
+	args.rets = &args.args[be32_to_cpu(args.nargs)];
 
 	mutex_lock(&vcpu->kvm->lock);
 
 	rc = -ENOENT;
 	list_for_each_entry(d, &vcpu->kvm->arch.rtas_tokens, list) {
-		if (d->token == args.token) {
+		if (d->token == be32_to_cpu(args.token)) {
 			d->handler->handler(vcpu, &args);
 			rc = 0;
 			break;
@@ -275,7 +247,6 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 
 	if (rc == 0) {
 		args.rets = orig_rets;
-		kvmppc_rtas_swap_endian_out(&args);
 		rc = kvm_write_guest(vcpu->kvm, args_phys, &args, sizeof(args));
 		if (rc)
 			goto fail;
-- 
1.8.1.4
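
The style the patch switches to - keep guest-visible fields big-endian in
memory and convert at each point of use - can be shown in a short userspace
sketch. The structure and helpers below are stand-ins (glibc's
be32toh/htobe32 instead of the kernel's be32_to_cpu/cpu_to_be32), meant only
to illustrate why sparse-style __be32 annotations become checkable with this
approach:

/*
 * Fields stay big-endian in memory and are swapped exactly where they are
 * read or written, mirroring the explicit-swap style adopted above.
 */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct rtas_like_args {
	uint32_t nargs;		/* stored big-endian, like the guest-visible buffer */
	uint32_t args[2];
	uint32_t rets[1];
};

static void handle_call(struct rtas_like_args *a)
{
	uint32_t rc = 0;

	if (be32toh(a->nargs) != 1) {	/* swap at the point of use */
		rc = (uint32_t)-3;
		goto out;
	}
	printf("irq = %u\n", be32toh(a->args[0]));
out:
	a->rets[0] = htobe32(rc);	/* swap back when writing results */
}

int main(void)
{
	struct rtas_like_args a = { .nargs = htobe32(1), .args = { htobe32(9) } };

	handle_call(&a);
	printf("rc = %d\n", (int)(int32_t)be32toh(a.rets[0]));
	return 0;
}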


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PULL 3.16 0/6] 3.16 patch queue 2014-07-08
  2014-07-08 10:04 ` Alexander Graf
@ 2014-07-08 10:13   ` Paolo Bonzini
  -1 siblings, 0 replies; 18+ messages in thread
From: Paolo Bonzini @ 2014-07-08 10:13 UTC (permalink / raw)
  To: Alexander Graf, kvm-ppc; +Cc: kvm

On 08/07/2014 12:04, Alexander Graf wrote:
> Hi Paolo / Marcelo,
>
> This is my current patch queue for 3.16.  Please pull.
>
> Alex
>
>
> The following changes since commit 5c02c392cd2320e8d612376d6b72b6548a680923:
>
>   Merge tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux (2014-06-11 21:10:33 -0700)
>
> are available in the git repository at:
>
>
>   git://github.com/agraf/linux-2.6.git tags/signed-for-3.16
>
> for you to fetch changes up to 19a44ecff52fd67d77d49fb4d43b289c53cdc392:
>
>   KVM: PPC: RTAS: Do byte swaps explicitly (2014-07-07 23:17:20 +0200)

Thanks, pulled.

Pushing to kvm.git will have to wait for Autotesting of Bandan's nested 
VMX patch.

Paolo

> ----------------------------------------------------------------
> Patch queue for 3.16 - 2014-07-08
>
> A few bug fixes to make 3.16 work well with KVM on PowerPC:
>
>   - Fix ppc32 module builds
>   - Fix Little Endian hosts
>   - Fix Book3S HV HPTE lookup with huge pages in guest
>   - Fix BookE lock leak
>
> ----------------------------------------------------------------
> Alexander Graf (3):
>       PPC: Add _GLOBAL_TOC for 32bit
>       KVM: PPC: Book3S PR: Fix ABIv2 on LE
>       KVM: PPC: RTAS: Do byte swaps explicitly
>
> Aneesh Kumar K.V (1):
>       KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value
>
> Anton Blanchard (1):
>       KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
>
> Mihai Caraman (1):
>       KVM: PPC: Book3E: Unlock mmu_lock when setting caching attribute
>
>  arch/powerpc/include/asm/kvm_book3s_64.h | 19 +++++++++-
>  arch/powerpc/include/asm/ppc_asm.h       |  2 +
>  arch/powerpc/kvm/book3s_64_mmu_hv.c      |  2 +-
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c      |  7 +---
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S  |  2 +-
>  arch/powerpc/kvm/book3s_interrupts.S     |  4 ++
>  arch/powerpc/kvm/book3s_rmhandlers.S     |  6 ++-
>  arch/powerpc/kvm/book3s_rtas.c           | 65 +++++++++-----------------------
>  arch/powerpc/kvm/e500_mmu_host.c         |  3 +-
>  9 files changed, 52 insertions(+), 58 deletions(-)
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PULL 3.16 0/6] 3.16 patch queue 2014-07-08
  2014-07-08 10:13   ` Paolo Bonzini
@ 2014-07-08 10:14     ` Alexander Graf
  -1 siblings, 0 replies; 18+ messages in thread
From: Alexander Graf @ 2014-07-08 10:14 UTC (permalink / raw)
  To: Paolo Bonzini, kvm-ppc; +Cc: kvm


On 08.07.14 12:13, Paolo Bonzini wrote:
> Il 08/07/2014 12:04, Alexander Graf ha scritto:
>> Hi Paolo / Marcelo,
>>
>> This is my current patch queue for 3.16.  Please pull.
>>
>> Alex
>>
>>
>> The following changes since commit 
>> 5c02c392cd2320e8d612376d6b72b6548a680923:
>>
>>   Merge tag 'virtio-next-for-linus' of 
>> git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux (2014-06-11 
>> 21:10:33 -0700)
>>
>> are available in the git repository at:
>>
>>
>>   git://github.com/agraf/linux-2.6.git tags/signed-for-3.16
>>
>> for you to fetch changes up to 19a44ecff52fd67d77d49fb4d43b289c53cdc392:
>>
>>   KVM: PPC: RTAS: Do byte swaps explicitly (2014-07-07 23:17:20 +0200)
>
> Thanks, pulled.
>
> Pushing to kvm.git will have to wait for Autotesting of Bandan's 
> nested VMX patch.

Thanks a bunch. Needless to say, I also successfully autotested this
branch (+ merge with linus/master) ;).


Alex

^ permalink raw reply	[flat|nested] 18+ messages in thread
