linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte
@ 2013-08-06 11:31 Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 1/6 v3] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

From: Bharat Bhushan <bharat.bhushan@freescale.com>

The first patch is a typo fix: book3e defines _PAGE_LENDIAN where it should
define _PAGE_ENDIAN. That this went unnoticed suggests the flag is never exercised :-)

The second and third patches allow the guest to control the "E" (Endian)
and "G" (Guarded) TLB attributes, respectively.

The fourth and fifth patches move functions/logic into common code
so they can also be used on booke.

The sixth patch actually sets the caching attributes (TLB.WIMGE) from the
corresponding Linux pte.

v2->v3
 - lookup_linux_pte() now contains only the pte search logic and does not
   set any access flags in the pte. There is already a function for setting
   access flags, which is called explicitly where needed.
   On booke we only need to search the pte to get the WIMG bits.

v1->v2
 - Earlier, caching attributes (WIMGE) were set based on whether the page
   was RAM or not; now these attributes come from the corresponding Linux PTE.

Bharat Bhushan (6):
  powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN
  kvm: powerpc: allow guest control "E" attribute in mas2
  kvm: powerpc: allow guest control "G" attribute in mas2
  powerpc: move linux pte/hugepte search to more generic file
  kvm: powerpc: keep only pte search logic in lookup_linux_pte
  kvm: powerpc: use caching attributes as per linux pte

 arch/powerpc/include/asm/kvm_host.h      |    2 +-
 arch/powerpc/include/asm/pgtable-ppc64.h |   36 ------------------
 arch/powerpc/include/asm/pgtable.h       |   60 ++++++++++++++++++++++++++++++
 arch/powerpc/include/asm/pte-book3e.h    |    2 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |   38 ++++++-------------
 arch/powerpc/kvm/booke.c                 |    2 +-
 arch/powerpc/kvm/e500.h                  |   10 +++--
 arch/powerpc/kvm/e500_mmu_host.c         |   36 ++++++++++-------
 8 files changed, 102 insertions(+), 84 deletions(-)


* [PATCH 1/6 v3] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 2/6 v3] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

For book3e, _PAGE_ENDIAN is not defined. In fact, what is defined
is "_PAGE_LENDIAN", which is wrong and should be _PAGE_ENDIAN.
There are no compilation errors because
arch/powerpc/include/asm/pte-common.h defines _PAGE_ENDIAN to 0
when it is not defined elsewhere.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - no change
v1->v2
 - no change

 arch/powerpc/include/asm/pte-book3e.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-book3e.h b/arch/powerpc/include/asm/pte-book3e.h
index 0156702..576ad88 100644
--- a/arch/powerpc/include/asm/pte-book3e.h
+++ b/arch/powerpc/include/asm/pte-book3e.h
@@ -40,7 +40,7 @@
 #define _PAGE_U1	0x010000
 #define _PAGE_U0	0x020000
 #define _PAGE_ACCESSED	0x040000
-#define _PAGE_LENDIAN	0x080000
+#define _PAGE_ENDIAN	0x080000
 #define _PAGE_GUARDED	0x100000
 #define _PAGE_COHERENT	0x200000 /* M: enforce memory coherence */
 #define _PAGE_NO_CACHE	0x400000 /* I: cache inhibit */
-- 
1.7.0.4


* [PATCH 2/6 v3] kvm: powerpc: allow guest control "E" attribute in mas2
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 1/6 v3] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 3/6 v3] kvm: powerpc: allow guest control "G" " Bharat Bhushan
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

The "E" bit in MAS2 indicates whether the page is accessed
in little-endian or big-endian byte order.
There is no reason to stop the guest from setting "E", so allow it.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - no change
v1->v2
 - no change
 arch/powerpc/kvm/e500.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index c2e5e98..277cb18 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -117,7 +117,7 @@ static inline struct kvmppc_vcpu_e500 *to_e500(struct kvm_vcpu *vcpu)
 #define E500_TLB_USER_PERM_MASK (MAS3_UX|MAS3_UR|MAS3_UW)
 #define E500_TLB_SUPER_PERM_MASK (MAS3_SX|MAS3_SR|MAS3_SW)
 #define MAS2_ATTRIB_MASK \
-	  (MAS2_X0 | MAS2_X1)
+	  (MAS2_X0 | MAS2_X1 | MAS2_E)
 #define MAS3_ATTRIB_MASK \
 	  (MAS3_U0 | MAS3_U1 | MAS3_U2 | MAS3_U3 \
 	   | E500_TLB_USER_PERM_MASK | E500_TLB_SUPER_PERM_MASK)
-- 
1.7.0.4


* [PATCH 3/6 v3] kvm: powerpc: allow guest control "G" attribute in mas2
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 1/6 v3] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 2/6 v3] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 4/6 v3] powerpc: move linux pte/hugepte search to more generic file Bharat Bhushan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

The "G" bit in MAS2 indicates whether the page is Guarded.
There is no reason to stop the guest from setting "G", so allow it.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - no change
v1->v2
 - no change
 arch/powerpc/kvm/e500.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 277cb18..4fd9650 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -117,7 +117,7 @@ static inline struct kvmppc_vcpu_e500 *to_e500(struct kvm_vcpu *vcpu)
 #define E500_TLB_USER_PERM_MASK (MAS3_UX|MAS3_UR|MAS3_UW)
 #define E500_TLB_SUPER_PERM_MASK (MAS3_SX|MAS3_SR|MAS3_SW)
 #define MAS2_ATTRIB_MASK \
-	  (MAS2_X0 | MAS2_X1 | MAS2_E)
+	  (MAS2_X0 | MAS2_X1 | MAS2_E | MAS2_G)
 #define MAS3_ATTRIB_MASK \
 	  (MAS3_U0 | MAS3_U1 | MAS3_U2 | MAS3_U3 \
 	   | E500_TLB_USER_PERM_MASK | E500_TLB_SUPER_PERM_MASK)
-- 
1.7.0.4


* [PATCH 4/6 v3] powerpc: move linux pte/hugepte search to more generic file
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
                   ` (2 preceding siblings ...)
  2013-08-06 11:31 ` [PATCH 3/6 v3] kvm: powerpc: allow guest control "G" " Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 5/6 v3] kvm: powerpc: keep only pte search logic in lookup_linux_pte Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte Bharat Bhushan
  5 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

The Linux pte search functions find_linux_pte_or_hugepte() and
find_linux_pte() no longer contain anything specific to 64-bit.
So they are moved from pgtable-ppc64.h to asm/pgtable.h.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - no change
v1->v2
 - This is a new change in this version
 
 arch/powerpc/include/asm/pgtable-ppc64.h |   36 -----------------------------
 arch/powerpc/include/asm/pgtable.h       |   37 ++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index e3d55f6..d257d98 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -340,42 +340,6 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
 
-/*
- * find_linux_pte returns the address of a linux pte for a given
- * effective address and directory.  If not found, it returns zero.
- */
-static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
-{
-	pgd_t *pg;
-	pud_t *pu;
-	pmd_t *pm;
-	pte_t *pt = NULL;
-
-	pg = pgdir + pgd_index(ea);
-	if (!pgd_none(*pg)) {
-		pu = pud_offset(pg, ea);
-		if (!pud_none(*pu)) {
-			pm = pmd_offset(pu, ea);
-			if (pmd_present(*pm))
-				pt = pte_offset_kernel(pm, ea);
-		}
-	}
-	return pt;
-}
-
-#ifdef CONFIG_HUGETLB_PAGE
-pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
-				 unsigned *shift);
-#else
-static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
-					       unsigned *shift)
-{
-	if (shift)
-		*shift = 0;
-	return find_linux_pte(pgdir, ea);
-}
-#endif /* !CONFIG_HUGETLB_PAGE */
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_PPC64_H_ */
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index b6293d2..690c8c2 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -217,6 +217,43 @@ extern int gup_hugepd(hugepd_t *hugepd, unsigned pdshift, unsigned long addr,
 
 extern int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		       unsigned long end, int write, struct page **pages, int *nr);
+
+/*
+ * find_linux_pte returns the address of a linux pte for a given
+ * effective address and directory.  If not found, it returns zero.
+ */
+static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
+{
+	pgd_t *pg;
+	pud_t *pu;
+	pmd_t *pm;
+	pte_t *pt = NULL;
+
+	pg = pgdir + pgd_index(ea);
+	if (!pgd_none(*pg)) {
+		pu = pud_offset(pg, ea);
+		if (!pud_none(*pu)) {
+			pm = pmd_offset(pu, ea);
+			if (pmd_present(*pm))
+				pt = pte_offset_kernel(pm, ea);
+		}
+	}
+	return pt;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+				 unsigned *shift);
+#else
+static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+					       unsigned *shift)
+{
+	if (shift)
+		*shift = 0;
+	return find_linux_pte(pgdir, ea);
+}
+#endif /* !CONFIG_HUGETLB_PAGE */
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __KERNEL__ */
-- 
1.7.0.4


* [PATCH 5/6 v3] kvm: powerpc: keep only pte search logic in lookup_linux_pte
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
                   ` (3 preceding siblings ...)
  2013-08-06 11:31 ` [PATCH 4/6 v3] powerpc: move linux pte/hugepte search to more generic file Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-06 11:31 ` [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte Bharat Bhushan
  5 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

lookup_linux_pte() was searching for a pte and also setting access
flags if writable. The function now only searches for the pte, while
access flag setting is done explicitly where needed.

This pte lookup is not kvm specific, so it is moved to common code
(asm/pgtable.h). A follow-up patch will use this on booke.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - New change

 arch/powerpc/include/asm/pgtable.h  |   23 +++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |   38 +++++++++++-----------------------
 2 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 690c8c2..d4d16ab 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -254,6 +254,29 @@ static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
 }
 #endif /* !CONFIG_HUGETLB_PAGE */
 
+static inline pte_t *lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
+				     unsigned long *pte_sizep)
+{
+	pte_t *ptep;
+	unsigned long ps = *pte_sizep;
+	unsigned int shift;
+
+	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
+	if (!ptep)
+		return __pte(0);
+	if (shift)
+		*pte_sizep = 1ul << shift;
+	else
+		*pte_sizep = PAGE_SIZE;
+
+	if (ps > *pte_sizep)
+		return __pte(0);
+
+	if (!pte_present(*ptep))
+		return __pte(0);
+
+	return ptep;
+}
 #endif /* __ASSEMBLY__ */
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 105b00f..7e6200c 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -134,27 +134,6 @@ static void remove_revmap_chain(struct kvm *kvm, long pte_index,
 	unlock_rmap(rmap);
 }
 
-static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
-			      int writing, unsigned long *pte_sizep)
-{
-	pte_t *ptep;
-	unsigned long ps = *pte_sizep;
-	unsigned int shift;
-
-	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
-	if (!ptep)
-		return __pte(0);
-	if (shift)
-		*pte_sizep = 1ul << shift;
-	else
-		*pte_sizep = PAGE_SIZE;
-	if (ps > *pte_sizep)
-		return __pte(0);
-	if (!pte_present(*ptep))
-		return __pte(0);
-	return kvmppc_read_update_linux_pte(ptep, writing);
-}
-
 static inline void unlock_hpte(unsigned long *hpte, unsigned long hpte_v)
 {
 	asm volatile(PPC_RELEASE_BARRIER "" : : : "memory");
@@ -174,6 +153,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 	unsigned long *physp, pte_size;
 	unsigned long is_io;
 	unsigned long *rmap;
+	pte_t *ptep;
 	pte_t pte;
 	unsigned int writing;
 	unsigned long mmu_seq;
@@ -233,8 +213,9 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 
 		/* Look up the Linux PTE for the backing page */
 		pte_size = psize;
-		pte = lookup_linux_pte(pgdir, hva, writing, &pte_size);
-		if (pte_present(pte)) {
+		ptep = lookup_linux_pte(pgdir, hva, &pte_size);
+		if (pte_present(pte_val(*ptep))) {
+			pte = kvmppc_read_update_linux_pte(ptep, writing);
 			if (writing && !pte_write(pte))
 				/* make the actual HPTE be read-only */
 				ptel = hpte_make_readonly(ptel);
@@ -662,6 +643,7 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 			unsigned long psize, gfn, hva;
 			struct kvm_memory_slot *memslot;
 			pgd_t *pgdir = vcpu->arch.pgdir;
+			pte_t *ptep;
 			pte_t pte;
 
 			psize = hpte_page_size(v, r);
@@ -669,9 +651,13 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 			memslot = __gfn_to_memslot(kvm_memslots(kvm), gfn);
 			if (memslot) {
 				hva = __gfn_to_hva_memslot(memslot, gfn);
-				pte = lookup_linux_pte(pgdir, hva, 1, &psize);
-				if (pte_present(pte) && !pte_write(pte))
-					r = hpte_make_readonly(r);
+				ptep = lookup_linux_pte(pgdir, hva, &psize);
+				if (pte_present(pte_val(*ptep))) {
+					pte = kvmppc_read_update_linux_pte(ptep,
+									   1);
+					if (pte_present(pte) && !pte_write(pte))
+						r = hpte_make_readonly(r);
+				}
 			}
 		}
 	}
-- 
1.7.0.4


* [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte
  2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
                   ` (4 preceding siblings ...)
  2013-08-06 11:31 ` [PATCH 5/6 v3] kvm: powerpc: keep only pte search logic in lookup_linux_pte Bharat Bhushan
@ 2013-08-06 11:31 ` Bharat Bhushan
  2013-08-10  1:04   ` Scott Wood
  5 siblings, 1 reply; 9+ messages in thread
From: Bharat Bhushan @ 2013-08-06 11:31 UTC (permalink / raw)
  To: scottwood, benh, agraf, paulus, kvm, kvm-ppc, linuxppc-dev; +Cc: Bharat Bhushan

KVM now uses the same WIMG TLB attributes as the corresponding qemu pte.
For this we search the Linux pte for the requested page and take the
caching/coherency attributes from that pte.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v2->v3
 - setting pgdir before kvmppc_fix_ee_before_entry() on vcpu_run
 - Aligned as per changes in patch 5/6
 - setting WIMG for pfnmap pages also
 
v1->v2
 - Use Linux pte for wimge rather than RAM/no-RAM mechanism

 arch/powerpc/include/asm/kvm_host.h |    2 +-
 arch/powerpc/kvm/booke.c            |    2 +-
 arch/powerpc/kvm/e500.h             |    8 ++++--
 arch/powerpc/kvm/e500_mmu_host.c    |   36 ++++++++++++++++++++--------------
 4 files changed, 28 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3328353..583d405 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -535,6 +535,7 @@ struct kvm_vcpu_arch {
 #endif
 	gpa_t paddr_accessed;
 	gva_t vaddr_accessed;
+	pgd_t *pgdir;
 
 	u8 io_gpr; /* GPR used as IO source/target */
 	u8 mmio_is_bigendian;
@@ -592,7 +593,6 @@ struct kvm_vcpu_arch {
 	struct list_head run_list;
 	struct task_struct *run_task;
 	struct kvm_run *kvm_run;
-	pgd_t *pgdir;
 
 	spinlock_t vpa_update_lock;
 	struct kvmppc_vpa vpa;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 17722d8..0d96d50 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -696,8 +696,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	kvmppc_load_guest_fp(vcpu);
 #endif
 
+	vcpu->arch.pgdir = current->mm->pgd;
 	kvmppc_fix_ee_before_entry();
-
 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
 	/* No need for kvm_guest_exit. It's done in handle_exit.
diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 4fd9650..fc4b2f6 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -31,11 +31,13 @@ enum vcpu_ftr {
 #define E500_TLB_NUM   2
 
 /* entry is mapped somewhere in host TLB */
-#define E500_TLB_VALID		(1 << 0)
+#define E500_TLB_VALID		(1 << 31)
 /* TLB1 entry is mapped by host TLB1, tracked by bitmaps */
-#define E500_TLB_BITMAP		(1 << 1)
+#define E500_TLB_BITMAP		(1 << 30)
 /* TLB1 entry is mapped by host TLB0 */
-#define E500_TLB_TLB0		(1 << 2)
+#define E500_TLB_TLB0		(1 << 29)
+/* Lower 5 bits have WIMGE value */
+#define E500_TLB_WIMGE_MASK	(0x1f)
 
 struct tlbe_ref {
 	pfn_t pfn;		/* valid only for TLB0, except briefly */
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..001a2b0 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -64,15 +64,6 @@ static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode)
 	return mas3;
 }
 
-static inline u32 e500_shadow_mas2_attrib(u32 mas2, int usermode)
-{
-#ifdef CONFIG_SMP
-	return (mas2 & MAS2_ATTRIB_MASK) | MAS2_M;
-#else
-	return mas2 & MAS2_ATTRIB_MASK;
-#endif
-}
-
 /*
  * writing shadow tlb entry to host TLB
  */
@@ -248,10 +239,12 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
 
 static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 					 struct kvm_book3e_206_tlb_entry *gtlbe,
-					 pfn_t pfn)
+					 pfn_t pfn, int wimg)
 {
 	ref->pfn = pfn;
 	ref->flags |= E500_TLB_VALID;
+	/* Use guest supplied MAS2_G and MAS2_E */
+	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
 
 	if (tlbe_is_writable(gtlbe))
 		kvm_set_pfn_dirty(pfn);
@@ -312,8 +305,7 @@ static void kvmppc_e500_setup_stlbe(
 
 	/* Force IPROT=0 for all guest mappings. */
 	stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
-	stlbe->mas2 = (gvaddr & MAS2_EPN) |
-		      e500_shadow_mas2_attrib(gtlbe->mas2, pr);
+	stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_WIMGE_MASK);
 	stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
 			e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
 
@@ -332,6 +324,10 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
+	unsigned long tsize_pages = 0;
+	pte_t *ptep;
+	int wimg = 0;
+	pgd_t *pgdir;
 
 	/*
 	 * Translate guest physical to true physical, acquiring
@@ -394,7 +390,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 			 */
 
 			for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) {
-				unsigned long gfn_start, gfn_end, tsize_pages;
+				unsigned long gfn_start, gfn_end;
 				tsize_pages = 1 << (tsize - 2);
 
 				gfn_start = gfn & ~(tsize_pages - 1);
@@ -436,7 +432,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	}
 
 	if (likely(!pfnmap)) {
-		unsigned long tsize_pages = 1 << (tsize + 10 - PAGE_SHIFT);
+		tsize_pages = 1 << (tsize + 10 - PAGE_SHIFT);
+
 		pfn = gfn_to_pfn_memslot(slot, gfn);
 		if (is_error_noslot_pfn(pfn)) {
 			printk(KERN_ERR "Couldn't get real page for gfn %lx!\n",
@@ -449,7 +446,16 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
 	}
 
-	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
+	pgdir = vcpu_e500->vcpu.arch.pgdir;
+	ptep = lookup_linux_pte(pgdir, hva, &tsize_pages);
+	if (pte_present(*ptep)) {
+		wimg = (pte_val(*ptep) >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
+	} else {
+		printk(KERN_ERR "pte not present: gfn %lx, pfn %lx\n",
+				(long)gfn, pfn);
+		return -EINVAL;
+	}
+	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
 				ref, gvaddr, stlbe);
-- 
1.7.0.4


* Re: [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte
  2013-08-06 11:31 ` [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte Bharat Bhushan
@ 2013-08-10  1:04   ` Scott Wood
  2013-08-12 17:14     ` Bhushan Bharat-R65777
  0 siblings, 1 reply; 9+ messages in thread
From: Scott Wood @ 2013-08-10  1:04 UTC (permalink / raw)
  To: Bharat Bhushan; +Cc: kvm, agraf, kvm-ppc, Bharat Bhushan, paulus, linuxppc-dev

On Tue, 2013-08-06 at 17:01 +0530, Bharat Bhushan wrote:
> @@ -449,7 +446,16 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
>  		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
>  	}
>  
> -	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
> +	pgdir = vcpu_e500->vcpu.arch.pgdir;
> +	ptep = lookup_linux_pte(pgdir, hva, &tsize_pages);
> +	if (pte_present(*ptep)) {
> +		wimg = (pte_val(*ptep) >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
> +	} else {
> +		printk(KERN_ERR "pte not present: gfn %lx, pfn %lx\n",
> +				(long)gfn, pfn);
> +		return -EINVAL;

Don't let the guest spam the host kernel console by repeatedly accessing
bad mappings (even if it requires host userspace to assist by pointing a
memslot at a bad hva).  This should at most be printk_ratelimited(), and
probably just pr_debug().  It should also have __func__ context.

Also, I don't see the return value getting checked (the immediate
callers check it and propagate the error, but kvmppc_mmu_map() doesn't).
We want to send a machine check to the guest if this happens (or
possibly exit to userspace since it indicates a bad memslot, not just a
guest bug).  We don't want to just silently retry over and over.

Otherwise, this series looks good to me.

-Scott


* RE: [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte
  2013-08-10  1:04   ` Scott Wood
@ 2013-08-12 17:14     ` Bhushan Bharat-R65777
  0 siblings, 0 replies; 9+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-08-12 17:14 UTC (permalink / raw)
  To: Wood Scott-B07421; +Cc: kvm, agraf, kvm-ppc, paulus, linuxppc-dev

> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Saturday, August 10, 2013 6:35 AM
> To: Bhushan Bharat-R65777
> Cc: benh@kernel.crashing.org; agraf@suse.de; paulus@samba.org;
> kvm@vger.kernel.org; kvm-ppc@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> Bhushan Bharat-R65777
> Subject: Re: [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux
> pte
> 
> On Tue, 2013-08-06 at 17:01 +0530, Bharat Bhushan wrote:
> > @@ -449,7 +446,16 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
> >  		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
> >  	}
> >
> > -	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
> > +	pgdir = vcpu_e500->vcpu.arch.pgdir;
> > +	ptep = lookup_linux_pte(pgdir, hva, &tsize_pages);
> > +	if (pte_present(*ptep)) {
> > +		wimg = (pte_val(*ptep) >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
> > +	} else {
> > +		printk(KERN_ERR "pte not present: gfn %lx, pfn %lx\n",
> > +				(long)gfn, pfn);
> > +		return -EINVAL;
> 
> Don't let the guest spam the host kernel console by repeatedly accessing bad
> mappings (even if it requires host userspace to assist by pointing a memslot
> at a bad hva).  This should at most be printk_ratelimited(), and probably
> just pr_debug().  It should also have __func__ context.

Very good point, I will make this printk_ratelimited() in this patch, and
convert this and the other error prints to pr_debug() when we send a machine
check on error in this flow.

> 
> Also, I don't see the return value getting checked (the immediate callers
> check it and propagate the error, but kvmppc_mmu_map() doesn't).
> We want to send a machine check to the guest if this happens (or possibly
> exit to userspace since it indicates a bad memslot, not just a guest bug).
> We don't want to just silently retry over and over.

I completely agree with you, but this was something already missing (the error
return by this function is nothing new added in this patch), so I would like
to take that up separately.

> 
> Otherwise, this series looks good to me.

Thank you. :)
-Bharat

> 
> -Scott


end of thread, other threads:[~2013-08-12 17:14 UTC | newest]

Thread overview: 9+ messages
2013-08-06 11:31 [PATCH 0/6 v3] kvm: powerpc: use cache attributes from linux pte Bharat Bhushan
2013-08-06 11:31 ` [PATCH 1/6 v3] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
2013-08-06 11:31 ` [PATCH 2/6 v3] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
2013-08-06 11:31 ` [PATCH 3/6 v3] kvm: powerpc: allow guest control "G" " Bharat Bhushan
2013-08-06 11:31 ` [PATCH 4/6 v3] powerpc: move linux pte/hugepte search to more generic file Bharat Bhushan
2013-08-06 11:31 ` [PATCH 5/6 v3] kvm: powerpc: keep only pte search logic in lookup_linux_pte Bharat Bhushan
2013-08-06 11:31 ` [PATCH 6/6 v3] kvm: powerpc: use caching attributes as per linux pte Bharat Bhushan
2013-08-10  1:04   ` Scott Wood
2013-08-12 17:14     ` Bhushan Bharat-R65777
