linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 1/4] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN
@ 2013-07-26  5:46 Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 2/4] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Bharat Bhushan @ 2013-07-26  5:46 UTC (permalink / raw)
  To: kvm-ppc, kvm, linuxppc-dev, agraf, benh, scottwood; +Cc: Bharat Bhushan

For book3e, _PAGE_ENDIAN is not defined. What is defined instead
is "_PAGE_LENDIAN", which is wrong and should be _PAGE_ENDIAN.
There are no compilation errors because
arch/powerpc/include/asm/pte-common.h defines _PAGE_ENDIAN as 0
when it is not defined elsewhere.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/include/asm/pte-book3e.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-book3e.h b/arch/powerpc/include/asm/pte-book3e.h
index 0156702..576ad88 100644
--- a/arch/powerpc/include/asm/pte-book3e.h
+++ b/arch/powerpc/include/asm/pte-book3e.h
@@ -40,7 +40,7 @@
 #define _PAGE_U1	0x010000
 #define _PAGE_U0	0x020000
 #define _PAGE_ACCESSED	0x040000
-#define _PAGE_LENDIAN	0x080000
+#define _PAGE_ENDIAN	0x080000
 #define _PAGE_GUARDED	0x100000
 #define _PAGE_COHERENT	0x200000 /* M: enforce memory coherence */
 #define _PAGE_NO_CACHE	0x400000 /* I: cache inhibit */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/4] kvm: powerpc: allow guest control "E" attribute in mas2
  2013-07-26  5:46 [PATCH 1/4] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
@ 2013-07-26  5:46 ` Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 3/4] kvm: powerpc: allow guest control "G" " Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages Bharat Bhushan
  2 siblings, 0 replies; 13+ messages in thread
From: Bharat Bhushan @ 2013-07-26  5:46 UTC (permalink / raw)
  To: kvm-ppc, kvm, linuxppc-dev, agraf, benh, scottwood; +Cc: Bharat Bhushan

"E" bit in MAS2 bit indicates whether the page is accessed
in Little-Endian or Big-Endian byte order.
There is no reason to stop guest setting  "E", so allow him."

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/kvm/e500.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index c2e5e98..277cb18 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -117,7 +117,7 @@ static inline struct kvmppc_vcpu_e500 *to_e500(struct kvm_vcpu *vcpu)
 #define E500_TLB_USER_PERM_MASK (MAS3_UX|MAS3_UR|MAS3_UW)
 #define E500_TLB_SUPER_PERM_MASK (MAS3_SX|MAS3_SR|MAS3_SW)
 #define MAS2_ATTRIB_MASK \
-	  (MAS2_X0 | MAS2_X1)
+	  (MAS2_X0 | MAS2_X1 | MAS2_E)
 #define MAS3_ATTRIB_MASK \
 	  (MAS3_U0 | MAS3_U1 | MAS3_U2 | MAS3_U3 \
 	   | E500_TLB_USER_PERM_MASK | E500_TLB_SUPER_PERM_MASK)
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/4] kvm: powerpc: allow guest control "G" attribute in mas2
  2013-07-26  5:46 [PATCH 1/4] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 2/4] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
@ 2013-07-26  5:46 ` Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages Bharat Bhushan
  2 siblings, 0 replies; 13+ messages in thread
From: Bharat Bhushan @ 2013-07-26  5:46 UTC (permalink / raw)
  To: kvm-ppc, kvm, linuxppc-dev, agraf, benh, scottwood; +Cc: Bharat Bhushan

"G" bit in MAS2 indicates whether the page is Guarded.
There is no reason to stop guest setting  "G", so allow him.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/kvm/e500.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 277cb18..4fd9650 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -117,7 +117,7 @@ static inline struct kvmppc_vcpu_e500 *to_e500(struct kvm_vcpu *vcpu)
 #define E500_TLB_USER_PERM_MASK (MAS3_UX|MAS3_UR|MAS3_UW)
 #define E500_TLB_SUPER_PERM_MASK (MAS3_SX|MAS3_SR|MAS3_SW)
 #define MAS2_ATTRIB_MASK \
-	  (MAS2_X0 | MAS2_X1 | MAS2_E)
+	  (MAS2_X0 | MAS2_X1 | MAS2_E | MAS2_G)
 #define MAS3_ATTRIB_MASK \
 	  (MAS3_U0 | MAS3_U1 | MAS3_U2 | MAS3_U3 \
 	   | E500_TLB_USER_PERM_MASK | E500_TLB_SUPER_PERM_MASK)
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  5:46 [PATCH 1/4] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 2/4] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
  2013-07-26  5:46 ` [PATCH 3/4] kvm: powerpc: allow guest control "G" " Bharat Bhushan
@ 2013-07-26  5:46 ` Bharat Bhushan
  2013-07-26  8:26   ` Benjamin Herrenschmidt
  2 siblings, 1 reply; 13+ messages in thread
From: Bharat Bhushan @ 2013-07-26  5:46 UTC (permalink / raw)
  To: kvm-ppc, kvm, linuxppc-dev, agraf, benh, scottwood; +Cc: Bharat Bhushan

If the page is RAM, map it as cacheable and coherent (set the "M" bit);
otherwise the page is treated as I/O and mapped as cache inhibited
and guarded (set "I" and "G").

This helps set up a proper MMU mapping for direct assigned devices.

NOTE: There can be devices that require a cacheable mapping, which is not yet supported.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
 arch/powerpc/kvm/e500_mmu_host.c |   24 +++++++++++++++++++-----
 1 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..5cbdc8f 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -64,13 +64,27 @@ static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode)
 	return mas3;
 }
 
-static inline u32 e500_shadow_mas2_attrib(u32 mas2, int usermode)
+static inline u32 e500_shadow_mas2_attrib(u32 mas2, pfn_t pfn)
 {
+	u32 mas2_attr;
+
+	mas2_attr = mas2 & MAS2_ATTRIB_MASK;
+
+	if (kvm_is_mmio_pfn(pfn)) {
+		/*
+		 * If page is not RAM then it is treated as I/O page.
+		 * Map it with cache inhibited and guarded (set "I" + "G").
+		 */
+		mas2_attr |= MAS2_I | MAS2_G;
+		return mas2_attr;
+	}
+
+	/* Map RAM pages as cacheable (Not setting "I" in MAS2) */
 #ifdef CONFIG_SMP
-	return (mas2 & MAS2_ATTRIB_MASK) | MAS2_M;
-#else
-	return mas2 & MAS2_ATTRIB_MASK;
+	/* Also map as coherent (set "M") in SMP */
+	mas2_attr |= MAS2_M;
 #endif
+	return mas2_attr;
 }
 
 /*
@@ -313,7 +327,7 @@ static void kvmppc_e500_setup_stlbe(
 	/* Force IPROT=0 for all guest mappings. */
 	stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
 	stlbe->mas2 = (gvaddr & MAS2_EPN) |
-		      e500_shadow_mas2_attrib(gtlbe->mas2, pr);
+		      e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
 	stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
 			e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
 
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  5:46 ` [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages Bharat Bhushan
@ 2013-07-26  8:26   ` Benjamin Herrenschmidt
  2013-07-26  8:50     ` Alexander Graf
  2013-07-26  8:51     ` Bhushan Bharat-R65777
  0 siblings, 2 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2013-07-26  8:26 UTC (permalink / raw)
  To: Bharat Bhushan
  Cc: kvm, agraf, kvm-ppc, Bharat Bhushan, scottwood, linuxppc-dev

On Fri, 2013-07-26 at 11:16 +0530, Bharat Bhushan wrote:
> If the page is RAM then map this as cacheable and coherent (set "M" bit)
> otherwise this page is treated as I/O and map this as cache inhibited
> and guarded (set  "I + G")
> 
> This helps setting proper MMU mapping for direct assigned device.
> 
> NOTE: There can be devices that require cacheable mapping, which is not yet supported.

Why don't you do like server instead and enforce the use of the same I
and M bits as the corresponding qemu PTE ?

Cheers,
Ben.

> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
>  arch/powerpc/kvm/e500_mmu_host.c |   24 +++++++++++++++++++-----
>  1 files changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 1c6a9d7..5cbdc8f 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -64,13 +64,27 @@ static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode)
>  	return mas3;
>  }
>  
> -static inline u32 e500_shadow_mas2_attrib(u32 mas2, int usermode)
> +static inline u32 e500_shadow_mas2_attrib(u32 mas2, pfn_t pfn)
>  {
> +	u32 mas2_attr;
> +
> +	mas2_attr = mas2 & MAS2_ATTRIB_MASK;
> +
> +	if (kvm_is_mmio_pfn(pfn)) {
> +		/*
> +		 * If page is not RAM then it is treated as I/O page.
> +		 * Map it with cache inhibited and guarded (set "I" + "G").
> +		 */
> +		mas2_attr |= MAS2_I | MAS2_G;
> +		return mas2_attr;
> +	}
> +
> +	/* Map RAM pages as cacheable (Not setting "I" in MAS2) */
>  #ifdef CONFIG_SMP
> -	return (mas2 & MAS2_ATTRIB_MASK) | MAS2_M;
> -#else
> -	return mas2 & MAS2_ATTRIB_MASK;
> +	/* Also map as coherent (set "M") in SMP */
> +	mas2_attr |= MAS2_M;
>  #endif
> +	return mas2_attr;
>  }
>  
>  /*
> @@ -313,7 +327,7 @@ static void kvmppc_e500_setup_stlbe(
>  	/* Force IPROT=0 for all guest mappings. */
>  	stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
>  	stlbe->mas2 = (gvaddr & MAS2_EPN) |
> -		      e500_shadow_mas2_attrib(gtlbe->mas2, pr);
> +		      e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
>  	stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
>  			e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
>  

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  8:26   ` Benjamin Herrenschmidt
@ 2013-07-26  8:50     ` Alexander Graf
  2013-07-26  8:52       ` Bhushan Bharat-R65777
  2013-07-26 15:03       ` Bhushan Bharat-R65777
  2013-07-26  8:51     ` Bhushan Bharat-R65777
  1 sibling, 2 replies; 13+ messages in thread
From: Alexander Graf @ 2013-07-26  8:50 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: kvm, kvm-ppc, Bharat Bhushan, Bharat Bhushan, scottwood, linuxppc-dev


On 26.07.2013, at 10:26, Benjamin Herrenschmidt wrote:

> On Fri, 2013-07-26 at 11:16 +0530, Bharat Bhushan wrote:
>> If the page is RAM then map this as cacheable and coherent (set "M" bit)
>> otherwise this page is treated as I/O and map this as cache inhibited
>> and guarded (set  "I + G")
>>
>> This helps setting proper MMU mapping for direct assigned device.
>>
>> NOTE: There can be devices that require cacheable mapping, which is not yet supported.
> 
> Why don't you do like server instead and enforce the use of the same I
> and M bits as the corresponding qemu PTE ?

Specifically, Ben is talking about this code:


                /* Translate to host virtual address */
                hva = __gfn_to_hva_memslot(memslot, gfn);

                /* Look up the Linux PTE for the backing page */
                pte_size = psize;
                pte = lookup_linux_pte(pgdir, hva, writing, &pte_size);
                if (pte_present(pte)) {
                        if (writing && !pte_write(pte))
                                /* make the actual HPTE be read-only */
                                ptel = hpte_make_readonly(ptel);
                        is_io = hpte_cache_bits(pte_val(pte));
                        pa = pte_pfn(pte) << PAGE_SHIFT;
                }


Alex

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  8:26   ` Benjamin Herrenschmidt
  2013-07-26  8:50     ` Alexander Graf
@ 2013-07-26  8:51     ` Bhushan Bharat-R65777
  1 sibling, 0 replies; 13+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-26  8:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Wood Scott-B07421, linuxppc-dev, kvm, kvm-ppc, agraf

> -----Original Message-----
> From: Benjamin Herrenschmidt [mailto:benh@kernel.crashing.org]
> Sent: Friday, July 26, 2013 1:57 PM
> To: Bhushan Bharat-R65777
> Cc: kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
> agraf@suse.de; Wood Scott-B07421; Bhushan Bharat-R65777
> Subject: Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
> 
> On Fri, 2013-07-26 at 11:16 +0530, Bharat Bhushan wrote:
> > If the page is RAM then map this as cacheable and coherent (set "M"
> > bit) otherwise this page is treated as I/O and map this as cache
> > inhibited and guarded (set  "I + G")
> >
> > This helps setting proper MMU mapping for direct assigned device.
> >
> > NOTE: There can be devices that require cacheable mapping, which is not yet
> supported.
> 
> Why don't you do like server instead and enforce the use of the same I and M
> bits as the corresponding qemu PTE ?

Ben/Alex, I will look into the code. Can you please describe how this is handled on server?

Thanks
-Bharat

> 
> Cheers,
> Ben.
> 
> > Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> > ---
> >  arch/powerpc/kvm/e500_mmu_host.c |   24 +++++++++++++++++++-----
> >  1 files changed, 19 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/powerpc/kvm/e500_mmu_host.c
> > b/arch/powerpc/kvm/e500_mmu_host.c
> > index 1c6a9d7..5cbdc8f 100644
> > --- a/arch/powerpc/kvm/e500_mmu_host.c
> > +++ b/arch/powerpc/kvm/e500_mmu_host.c
> > @@ -64,13 +64,27 @@ static inline u32 e500_shadow_mas3_attrib(u32 mas3, int
> usermode)
> >  	return mas3;
> >  }
> >
> > -static inline u32 e500_shadow_mas2_attrib(u32 mas2, int usermode)
> > +static inline u32 e500_shadow_mas2_attrib(u32 mas2, pfn_t pfn)
> >  {
> > +	u32 mas2_attr;
> > +
> > +	mas2_attr = mas2 & MAS2_ATTRIB_MASK;
> > +
> > +	if (kvm_is_mmio_pfn(pfn)) {
> > +		/*
> > +		 * If page is not RAM then it is treated as I/O page.
> > +		 * Map it with cache inhibited and guarded (set "I" + "G").
> > +		 */
> > +		mas2_attr |= MAS2_I | MAS2_G;
> > +		return mas2_attr;
> > +	}
> > +
> > +	/* Map RAM pages as cacheable (Not setting "I" in MAS2) */
> >  #ifdef CONFIG_SMP
> > -	return (mas2 & MAS2_ATTRIB_MASK) | MAS2_M;
> > -#else
> > -	return mas2 & MAS2_ATTRIB_MASK;
> > +	/* Also map as coherent (set "M") in SMP */
> > +	mas2_attr |= MAS2_M;
> >  #endif
> > +	return mas2_attr;
> >  }
> >
> >  /*
> > @@ -313,7 +327,7 @@ static void kvmppc_e500_setup_stlbe(
> >  	/* Force IPROT=0 for all guest mappings. */
> >  	stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
> >  	stlbe->mas2 = (gvaddr & MAS2_EPN) |
> > -		      e500_shadow_mas2_attrib(gtlbe->mas2, pr);
> > +		      e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
> >  	stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
> > 			e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
> >

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  8:50     ` Alexander Graf
@ 2013-07-26  8:52       ` Bhushan Bharat-R65777
  2013-07-26 15:03       ` Bhushan Bharat-R65777
  1 sibling, 0 replies; 13+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-26  8:52 UTC (permalink / raw)
  To: Alexander Graf, Benjamin Herrenschmidt
  Cc: Wood Scott-B07421, linuxppc-dev, kvm, kvm-ppc



> -----Original Message-----
> From: kvm-ppc-owner@vger.kernel.org [mailto:kvm-ppc-owner@vger.kernel.org] On
> Behalf Of Alexander Graf
> Sent: Friday, July 26, 2013 2:20 PM
> To: Benjamin Herrenschmidt
> Cc: Bhushan Bharat-R65777; kvm-ppc@vger.kernel.org; kvm@vger.kernel.org;
> linuxppc-dev@lists.ozlabs.org; Wood Scott-B07421; Bhushan Bharat-R65777
> Subject: Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
> 
> 
> On 26.07.2013, at 10:26, Benjamin Herrenschmidt wrote:
> 
> > On Fri, 2013-07-26 at 11:16 +0530, Bharat Bhushan wrote:
> >> If the page is RAM then map this as cacheable and coherent (set "M"
> >> bit) otherwise this page is treated as I/O and map this as cache
> >> inhibited and guarded (set  "I + G")
> >>
> >> This helps setting proper MMU mapping for direct assigned device.
> >>
> >> NOTE: There can be devices that require cacheable mapping, which is not yet
> supported.
> >
> > Why don't you do like server instead and enforce the use of the same I
> > and M bits as the corresponding qemu PTE ?
> 
> Specifically, Ben is talking about this code:
> 
> 
>                 /* Translate to host virtual address */
>                 hva = __gfn_to_hva_memslot(memslot, gfn);
> 
>                 /* Look up the Linux PTE for the backing page */
>                 pte_size = psize;
>                 pte = lookup_linux_pte(pgdir, hva, writing, &pte_size);
>                 if (pte_present(pte)) {
>                         if (writing && !pte_write(pte))
>                                 /* make the actual HPTE be read-only */
>                                 ptel = hpte_make_readonly(ptel);
>                         is_io = hpte_cache_bits(pte_val(pte));
>                         pa = pte_pfn(pte) << PAGE_SHIFT;
>                 }
> 

Ok

Thanks
-Bharat


> 
> Alex
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in the body
> of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26  8:50     ` Alexander Graf
  2013-07-26  8:52       ` Bhushan Bharat-R65777
@ 2013-07-26 15:03       ` Bhushan Bharat-R65777
  2013-07-26 22:26         ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 13+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-26 15:03 UTC (permalink / raw)
  To: Alexander Graf, Benjamin Herrenschmidt
  Cc: Wood Scott-B07421, linuxppc-dev, kvm, kvm-ppc



> -----Original Message-----
> From: kvm-ppc-owner@vger.kernel.org [mailto:kvm-ppc-owner@vger.kernel.org] On
> Behalf Of Alexander Graf
> Sent: Friday, July 26, 2013 2:20 PM
> To: Benjamin Herrenschmidt
> Cc: Bhushan Bharat-R65777; kvm-ppc@vger.kernel.org; kvm@vger.kernel.org;
> linuxppc-dev@lists.ozlabs.org; Wood Scott-B07421; Bhushan Bharat-R65777
> Subject: Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
> 
> 
> On 26.07.2013, at 10:26, Benjamin Herrenschmidt wrote:
> 
> > On Fri, 2013-07-26 at 11:16 +0530, Bharat Bhushan wrote:
> >> If the page is RAM then map this as cacheable and coherent (set "M"
> >> bit) otherwise this page is treated as I/O and map this as cache
> >> inhibited and guarded (set  "I + G")
> >>
> >> This helps setting proper MMU mapping for direct assigned device.
> >>
> >> NOTE: There can be devices that require cacheable mapping, which is not yet
> supported.
> >
> > Why don't you do like server instead and enforce the use of the same I
> > and M bits as the corresponding qemu PTE ?
> 
> Specifically, Ben is talking about this code:
> 
> 
>                 /* Translate to host virtual address */
>                 hva = __gfn_to_hva_memslot(memslot, gfn);
> 
>                 /* Look up the Linux PTE for the backing page */
>                 pte_size = psize;
>                 pte = lookup_linux_pte(pgdir, hva, writing, &pte_size);
>                 if (pte_present(pte)) {
>                         if (writing && !pte_write(pte))
>                                 /* make the actual HPTE be read-only */
>                                 ptel = hpte_make_readonly(ptel);
>                         is_io = hpte_cache_bits(pte_val(pte));
>                         pa = pte_pfn(pte) << PAGE_SHIFT;
>                 }
> 

Will not searching the Linux PTE be an overkill?

-Bharat

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26 15:03       ` Bhushan Bharat-R65777
@ 2013-07-26 22:26         ` Benjamin Herrenschmidt
  2013-07-30 16:22           ` Bhushan Bharat-R65777
  0 siblings, 1 reply; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2013-07-26 22:26 UTC (permalink / raw)
  To: Bhushan Bharat-R65777
  Cc: Wood Scott-B07421, linuxppc-dev, Alexander Graf, kvm-ppc, kvm

On Fri, 2013-07-26 at 15:03 +0000, Bhushan Bharat-R65777 wrote:
> Will not searching the Linux PTE be an overkill?

That's the best approach. Also we are searching it already to resolve
the page fault. That does mean we search twice but on the other hand
that also means it's hot in the cache.

Cheers,
Ben

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-26 22:26         ` Benjamin Herrenschmidt
@ 2013-07-30 16:22           ` Bhushan Bharat-R65777
  2013-07-30 18:49             ` Scott Wood
  0 siblings, 1 reply; 13+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-30 16:22 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: Wood Scott-B07421, linuxppc-dev, Alexander Graf, kvm-ppc, kvm

> -----Original Message-----
> From: Benjamin Herrenschmidt [mailto:benh@kernel.crashing.org]
> Sent: Saturday, July 27, 2013 3:57 AM
> To: Bhushan Bharat-R65777
> Cc: Alexander Graf; kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-
> dev@lists.ozlabs.org; Wood Scott-B07421
> Subject: Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
> 
> On Fri, 2013-07-26 at 15:03 +0000, Bhushan Bharat-R65777 wrote:
> > Will not searching the Linux PTE is a overkill?
> 
> That's the best approach. Also we are searching it already to resolve the page
> fault. That does mean we search twice but on the other hand that also means it's
> hot in the cache.

Below is early git diff (not a proper cleanup patch), to be sure that this is what we want on PowerPC and take early feedback. Also I run some benchmark to understand the overhead if any.

Using kvm_is_mmio_pfn(); what the current patch does:
Real: 0m46.616s + 0m49.517s + 0m49.510s + 0m46.936s + 0m46.889s + 0m46.684s = Avg; 47.692s
User: 0m31.636s + 0m31.816s + 0m31.456s + 0m31.752s + 0m32.028s + 0m31.848s = Avg; 31.756s
Sys:  0m11.596s + 0m11.868s + 0m12.244s + 0m11.672s + 0m11.356s + 0m11.432s = Avg; 11.695s

Using kernel page table search (below changes):
Real: 0m46.431s + 0m50.269s + 0m46.724s + 0m46.645s + 0m46.670s + 0m50.259s = Avg; 47.833s
User: 0m31.568s + 0m31.816s + 0m31.444s + 0m31.808s + 0m31.312s + 0m31.740s = Avg; 31.614s
Sys:  0m11.516s + 0m12.060s + 0m11.872s + 0m11.476s + 0m12.000s + 0m12.152s = Avg; 11.846s

------------------
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 3328353..d6d0dac 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -532,6 +532,7 @@ struct kvm_vcpu_arch {
        u32 epr;
        u32 crit_save;
        struct kvmppc_booke_debug_reg dbg_reg;
+       pgd_t *pgdir;
 #endif
        gpa_t paddr_accessed;
        gva_t vaddr_accessed;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 17722d8..ebcccc2 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -697,7 +697,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 #endif
 
        kvmppc_fix_ee_before_entry();
-
+       vcpu->arch.pgdir = current->mm->pgd;
        ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
        /* No need for kvm_guest_exit. It's done in handle_exit.
diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 4fd9650..fc4b2f6 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -31,11 +31,13 @@ enum vcpu_ftr {
 #define E500_TLB_NUM   2
 
 /* entry is mapped somewhere in host TLB */
-#define E500_TLB_VALID         (1 << 0)
+#define E500_TLB_VALID         (1 << 31)
 /* TLB1 entry is mapped by host TLB1, tracked by bitmaps */
-#define E500_TLB_BITMAP                (1 << 1)
+#define E500_TLB_BITMAP                (1 << 30)
 /* TLB1 entry is mapped by host TLB0 */
-#define E500_TLB_TLB0          (1 << 2)
+#define E500_TLB_TLB0          (1 << 29)
+/* Lower 5 bits have WIMGE value */
+#define E500_TLB_WIMGE_MASK    (0x1f)
 
 struct tlbe_ref {
        pfn_t pfn;              /* valid only for TLB0, except briefly */
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 5cbdc8f..a48c13f 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -40,6 +40,84 @@
 
 static struct kvmppc_e500_tlb_params host_tlb_params[E500_TLB_NUM];
 
+/*
+ * find_linux_pte returns the address of a linux pte for a given
+ * effective address and directory.  If not found, it returns zero.
+ */
+static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
+{
+        pgd_t *pg;
+        pud_t *pu;
+        pmd_t *pm;
+        pte_t *pt = NULL;
+
+        pg = pgdir + pgd_index(ea);
+        if (!pgd_none(*pg)) {
+                pu = pud_offset(pg, ea);
+                if (!pud_none(*pu)) {
+                        pm = pmd_offset(pu, ea);
+                        if (pmd_present(*pm))
+                                pt = pte_offset_kernel(pm, ea);
+                }
+        }
+        return pt;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+                               unsigned *shift);
+#else
+static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+                                              unsigned *shift)
+{
+        if (shift)
+                *shift = 0;
+        return find_linux_pte(pgdir, ea);
+}
+#endif /* !CONFIG_HUGETLB_PAGE */
+
+/*
+ * Lock and read a linux PTE.  If it's present and writable, atomically
+ * set dirty and referenced bits and return the PTE, otherwise return 0.
+ */
+static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
+{
+       pte_t pte = pte_val(*p);
+
+       if (pte_present(pte)) {
+               pte = pte_mkyoung(pte);
+               if (writing && pte_write(pte))
+                       pte = pte_mkdirty(pte);
+       }
+
+       *p = pte;
+
+       return pte;
+}
+
+static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
+                             int writing, unsigned long *pte_sizep)
+{
+       pte_t *ptep;
+       unsigned long ps = *pte_sizep;
+       unsigned int shift;
+
+       ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
+       if (!ptep)
+               return __pte(0);
+       if (shift)
+               *pte_sizep = 1ul << shift;
+       else
+               *pte_sizep = PAGE_SIZE;
+
+       if (ps > *pte_sizep)
+               return __pte(0);
+       if (!pte_present(*ptep))
+               return __pte(0);
+
+       return kvmppc_read_update_linux_pte(ptep, writing);
+}
+
 static inline unsigned int tlb1_max_shadow_size(void)
 {
        /* reserve one entry for magic page */
@@ -262,10 +340,11 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
 
 static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
                                         struct kvm_book3e_206_tlb_entry *gtlbe,
-                                        pfn_t pfn)
+                                        pfn_t pfn, int wimg)
 {
        ref->pfn = pfn;
        ref->flags |= E500_TLB_VALID;
+       ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
 
        if (tlbe_is_writable(gtlbe))
               kvm_set_pfn_dirty(pfn);
@@ -326,8 +405,8 @@ static void kvmppc_e500_setup_stlbe(
 
        /* Force IPROT=0 for all guest mappings. */
        stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
-       stlbe->mas2 = (gvaddr & MAS2_EPN) |
-                     e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
+       stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_WIMGE_MASK);
+//                   e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
        stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
                       e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
 
@@ -346,6 +425,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
        unsigned long hva;
        int pfnmap = 0;
        int tsize = BOOK3E_PAGESZ_4K;
+       pte_t pte;
+       int wimg = 0;
 
        /*
        * Translate guest physical to true physical, acquiring
@@ -451,6 +532,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 
        if (likely(!pfnmap)) {
               unsigned long tsize_pages = 1 << (tsize + 10 - PAGE_SHIFT);
+               pgd_t *pgdir;
+
               pfn = gfn_to_pfn_memslot(slot, gfn);
               if (is_error_noslot_pfn(pfn)) {
                      printk(KERN_ERR "Couldn't get real page for gfn %lx!\n",
@@ -461,9 +544,15 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
               /* Align guest and physical address to page map boundaries */
               pfn &= ~(tsize_pages - 1);
               gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
+               pgdir = vcpu_e500->vcpu.arch.pgdir;
+               pte = lookup_linux_pte(pgdir, hva, 1, &tsize_pages);
+               if (pte_present(pte))
+                       wimg = (pte >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
+               else
+                       wimg = MAS2_I
IHwgTUFTMl9HOw0KICAgICAgICB9DQogDQotICAgICAgIGt2bXBwY19lNTAwX3JlZl9zZXR1cChy
ZWYsIGd0bGJlLCBwZm4pOw0KKyAgICAgICBrdm1wcGNfZTUwMF9yZWZfc2V0dXAocmVmLCBndGxi
ZSwgcGZuLCB3aW1nKTsNCiANCiAgICAgICAga3ZtcHBjX2U1MDBfc2V0dXBfc3RsYmUoJnZjcHVf
ZTUwMC0+dmNwdSwgZ3RsYmUsIHRzaXplLA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICByZWYsIGd2YWRkciwgc3RsYmUpOw0KDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0NCg0KDQo+IA0KPiBDaGVlcnMsDQo+IEJlbg0KPiANCj4gDQoNCg==

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-30 16:22           ` Bhushan Bharat-R65777
@ 2013-07-30 18:49             ` Scott Wood
  2013-07-31  5:23               ` Bhushan Bharat-R65777
  0 siblings, 1 reply; 13+ messages in thread
From: Scott Wood @ 2013-07-30 18:49 UTC (permalink / raw)
  To: Bhushan Bharat-R65777
  Cc: Wood Scott-B07421, kvm, Alexander Graf, kvm-ppc, linuxppc-dev

On 07/30/2013 11:22:54 AM, Bhushan Bharat-R65777 wrote:
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c
> b/arch/powerpc/kvm/e500_mmu_host.c
> index 5cbdc8f..a48c13f 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -40,6 +40,84 @@
>
>  static struct kvmppc_e500_tlb_params host_tlb_params[E500_TLB_NUM];
>
> +/*
> + * find_linux_pte returns the address of a linux pte for a given
> + * effective address and directory.  If not found, it returns zero.
> + */
> +static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
> +{
> +        pgd_t *pg;
> +        pud_t *pu;
> +        pmd_t *pm;
> +        pte_t *pt = NULL;
> +
> +        pg = pgdir + pgd_index(ea);
> +        if (!pgd_none(*pg)) {
> +                pu = pud_offset(pg, ea);
> +                if (!pud_none(*pu)) {
> +                        pm = pmd_offset(pu, ea);
> +                        if (pmd_present(*pm))
> +                                pt = pte_offset_kernel(pm, ea);
> +                }
> +        }
> +        return pt;
> +}

How is this specific to KVM or e500?

> +#ifdef CONFIG_HUGETLB_PAGE
> +pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> +                                 unsigned *shift);
> +#else
> +static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> +                                               unsigned *shift)
> +{
> +        if (shift)
> +                *shift = 0;
> +        return find_linux_pte(pgdir, ea);
> +}
> +#endif /* !CONFIG_HUGETLB_PAGE */

This is already declared in asm/pgtable.h.  If we need a non-hugepage
alternative, that should also go in asm/pgtable.h.

> +/*
> + * Lock and read a linux PTE.  If it's present and writable, atomically
> + * set dirty and referenced bits and return the PTE, otherwise return 0.
> + */
> +static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
> +{
> +       pte_t pte = pte_val(*p);
> +
> +       if (pte_present(pte)) {
> +               pte = pte_mkyoung(pte);
> +               if (writing && pte_write(pte))
> +                       pte = pte_mkdirty(pte);
> +       }
> +
> +       *p = pte;
> +
> +       return pte;
> +}
> +
> +static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
> +                             int writing, unsigned long *pte_sizep)
> +{
> +       pte_t *ptep;
> +       unsigned long ps = *pte_sizep;
> +       unsigned int shift;
> +
> +       ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
> +       if (!ptep)
> +               return __pte(0);
> +       if (shift)
> +               *pte_sizep = 1ul << shift;
> +       else
> +               *pte_sizep = PAGE_SIZE;
> +
> +       if (ps > *pte_sizep)
> +               return __pte(0);
> +       if (!pte_present(*ptep))
> +               return __pte(0);
> +
> +       return kvmppc_read_update_linux_pte(ptep, writing);
> +}
> +

None of this belongs in this file either.

> @@ -326,8 +405,8 @@ static void kvmppc_e500_setup_stlbe(
>
>         /* Force IPROT=0 for all guest mappings. */
>         stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
> -       stlbe->mas2 = (gvaddr & MAS2_EPN) |
> -                     e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
> +       stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_WIMGE_MASK);
> +//                   e500_shadow_mas2_attrib(gtlbe->mas2, pfn);

MAS2_E and MAS2_G should be safe to come from the guest.

How does this work for TLB1?  One ref corresponds to one guest entry,
which may correspond to multiple host entries, potentially each with
different WIM settings.

>         stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
>                         e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
>
> @@ -346,6 +425,8 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>         unsigned long hva;
>         int pfnmap = 0;
>         int tsize = BOOK3E_PAGESZ_4K;
> +       pte_t pte;
> +       int wimg = 0;
>
>         /*
>          * Translate guest physical to true physical, acquiring
> @@ -451,6 +532,8 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>
>         if (likely(!pfnmap)) {
>                 unsigned long tsize_pages = 1 << (tsize + 10 - PAGE_SHIFT);
> +               pgd_t *pgdir;
> +
>                 pfn = gfn_to_pfn_memslot(slot, gfn);
>                 if (is_error_noslot_pfn(pfn)) {
>                         printk(KERN_ERR "Couldn't get real page for gfn %lx!\n",
> @@ -461,9 +544,15 @@ static inline int kvmppc_e500_shadow_map(struct
> kvmppc_vcpu_e500 *vcpu_e500,
>                 /* Align guest and physical address to page map boundaries */
>                 pfn &= ~(tsize_pages - 1);
>                 gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
> +               pgdir = vcpu_e500->vcpu.arch.pgdir;
> +               pte = lookup_linux_pte(pgdir, hva, 1, &tsize_pages);
> +               if (pte_present(pte))
> +                       wimg = (pte >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
> +               else
> +                       wimg = MAS2_I | MAS2_G;

If the PTE is not present, then we can't map it, right?  So why I+G?

-Scott

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
  2013-07-30 18:49             ` Scott Wood
@ 2013-07-31  5:23               ` Bhushan Bharat-R65777
  0 siblings, 0 replies; 13+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-31  5:23 UTC (permalink / raw)
  To: Wood Scott-B07421; +Cc: linuxppc-dev, Alexander Graf, kvm-ppc, kvm



> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Wednesday, July 31, 2013 12:19 AM
> To: Bhushan Bharat-R65777
> Cc: Benjamin Herrenschmidt; Alexander Graf; kvm-ppc@vger.kernel.org;
> kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Wood Scott-B07421
> Subject: Re: [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages
>
> On 07/30/2013 11:22:54 AM, Bhushan Bharat-R65777 wrote:
> > diff --git a/arch/powerpc/kvm/e500_mmu_host.c
> > b/arch/powerpc/kvm/e500_mmu_host.c
> > index 5cbdc8f..a48c13f 100644
> > --- a/arch/powerpc/kvm/e500_mmu_host.c
> > +++ b/arch/powerpc/kvm/e500_mmu_host.c
> > @@ -40,6 +40,84 @@
> >
> >  static struct kvmppc_e500_tlb_params host_tlb_params[E500_TLB_NUM];
> >
> > +/*
> > + * find_linux_pte returns the address of a linux pte for a given
> > + * effective address and directory.  If not found, it returns zero.
> > + */
> > +static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
> > +{
> > +        pgd_t *pg;
> > +        pud_t *pu;
> > +        pmd_t *pm;
> > +        pte_t *pt = NULL;
> > +
> > +        pg = pgdir + pgd_index(ea);
> > +        if (!pgd_none(*pg)) {
> > +                pu = pud_offset(pg, ea);
> > +                if (!pud_none(*pu)) {
> > +                        pm = pmd_offset(pu, ea);
> > +                        if (pmd_present(*pm))
> > +                                pt = pte_offset_kernel(pm, ea);
> > +                }
> > +        }
> > +        return pt;
> > +}
>
> How is this specific to KVM or e500?
>
> > +#ifdef CONFIG_HUGETLB_PAGE
> > +pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> > +                                 unsigned *shift);
> > +#else
> > +static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> > +                                               unsigned *shift)
> > +{
> > +        if (shift)
> > +                *shift = 0;
> > +        return find_linux_pte(pgdir, ea);
> > +}
> > +#endif /* !CONFIG_HUGETLB_PAGE */
>
> This is already declared in asm/pgtable.h.  If we need a non-hugepage
> alternative, that should also go in asm/pgtable.h.
>
> > +/*
> > + * Lock and read a linux PTE.  If it's present and writable,
> > atomically
> > + * set dirty and referenced bits and return the PTE, otherwise
> > return 0.
> > + */
> > +static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int
> > writing)
> > +{
> > +       pte_t pte = pte_val(*p);
> > +
> > +       if (pte_present(pte)) {
> > +               pte = pte_mkyoung(pte);
> > +               if (writing && pte_write(pte))
> > +                       pte = pte_mkdirty(pte);
> > +       }
> > +
> > +       *p = pte;
> > +
> > +       return pte;
> > +}
> > +
> > +static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
> > +                             int writing, unsigned long *pte_sizep)
> > +{
> > +       pte_t *ptep;
> > +       unsigned long ps = *pte_sizep;
> > +       unsigned int shift;
> > +
> > +       ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
> > +       if (!ptep)
> > +               return __pte(0);
> > +       if (shift)
> > +               *pte_sizep = 1ul << shift;
> > +       else
> > +               *pte_sizep = PAGE_SIZE;
> > +
> > +       if (ps > *pte_sizep)
> > +               return __pte(0);
> > +       if (!pte_present(*ptep))
> > +               return __pte(0);
> > +
> > +       return kvmppc_read_update_linux_pte(ptep, writing);
> > +}
> > +
>
> None of this belongs in this file either.
>
> > @@ -326,8 +405,8 @@ static void kvmppc_e500_setup_stlbe(
> >
> >         /* Force IPROT=0 for all guest mappings. */
> >         stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) |
> > MAS1_VALID;
> > -       stlbe->mas2 = (gvaddr & MAS2_EPN) |
> > -                     e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
> > +       stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags &
> > E500_TLB_WIMGE_MASK);
> > +//                   e500_shadow_mas2_attrib(gtlbe->mas2, pfn);
>
> MAS2_E and MAS2_G should be safe to come from the guest.

This is handled when setting WIMGE in ref->flags.

>
> How does this work for TLB1?  One ref corresponds to one guest entry,
> which may correspond to multiple host entries, potentially each with
> different WIM settings.

Yes, one ref corresponds to one guest entry. Here is how this works when
one guest tlb1 entry maps to many host tlb0/1 entries: on guest tlbwe, KVM
sets up one guest tlb entry and then pre-maps one host tlb entry (out of
many), and ref (ref->pfn etc.) points to this pre-mapped entry for that
guest entry. When a guest TLB miss later falls on the same guest tlb entry
but demands another host tlb entry, we change/overwrite ref (ref->pfn
etc.) to point to the new host mapping for the same guest mapping.

>
> >         stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
> >                         e500_shadow_mas3_attrib(gtlbe->mas7_3, pr);
> >
> > @@ -346,6 +425,8 @@ static inline int kvmppc_e500_shadow_map(struct
> > kvmppc_vcpu_e500 *vcpu_e500,
> >         unsigned long hva;
> >         int pfnmap = 0;
> >         int tsize = BOOK3E_PAGESZ_4K;
> > +       pte_t pte;
> > +       int wimg = 0;
> >
> >         /*
> >          * Translate guest physical to true physical, acquiring
> > @@ -451,6 +532,8 @@ static inline int kvmppc_e500_shadow_map(struct
> > kvmppc_vcpu_e500 *vcpu_e500,
> >
> >         if (likely(!pfnmap)) {
> >                 unsigned long tsize_pages = 1 << (tsize + 10 -
> > PAGE_SHIFT);
> > +               pgd_t *pgdir;
> > +
> >                 pfn = gfn_to_pfn_memslot(slot, gfn);
> >                 if (is_error_noslot_pfn(pfn)) {
> >                         printk(KERN_ERR "Couldn't get real page for
> > gfn %lx!\n",
> > @@ -461,9 +544,15 @@ static inline int kvmppc_e500_shadow_map(struct
> > kvmppc_vcpu_e500 *vcpu_e500,
> >                 /* Align guest and physical address to page map
> > boundaries */
> >                 pfn &= ~(tsize_pages - 1);
> >                 gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
> > +               pgdir = vcpu_e500->vcpu.arch.pgdir;
> > +               pte = lookup_linux_pte(pgdir, hva, 1, &tsize_pages);
> > +               if (pte_present(pte))
> > +                       wimg = (pte >> PTE_WIMGE_SHIFT) &
> > MAS2_WIMGE_MASK;
> > +               else
> > +                       wimg = MAS2_I | MAS2_G;
>
> If the PTE is not present, then we can't map it, right?

Right, we should return an error :)

-Bharat

>  So why I+G?
>
> -Scott

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2013-07-31  5:23 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-26  5:46 [PATCH 1/4] powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN Bharat Bhushan
2013-07-26  5:46 ` [PATCH 2/4] kvm: powerpc: allow guest control "E" attribute in mas2 Bharat Bhushan
2013-07-26  5:46 ` [PATCH 3/4] kvm: powerpc: allow guest control "G" " Bharat Bhushan
2013-07-26  5:46 ` [PATCH 4/4] kvm: powerpc: set cache coherency only for RAM pages Bharat Bhushan
2013-07-26  8:26   ` Benjamin Herrenschmidt
2013-07-26  8:50     ` Alexander Graf
2013-07-26  8:52       ` Bhushan Bharat-R65777
2013-07-26 15:03       ` Bhushan Bharat-R65777
2013-07-26 22:26         ` Benjamin Herrenschmidt
2013-07-30 16:22           ` Bhushan Bharat-R65777
2013-07-30 18:49             ` Scott Wood
2013-07-31  5:23               ` Bhushan Bharat-R65777
2013-07-26  8:51     ` Bhushan Bharat-R65777
