* [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
@ 2011-08-26 14:45 Christoph Egger
  2011-09-01  8:40 ` Tim Deegan
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Egger @ 2011-08-26 14:45 UTC (permalink / raw)
  To: xen-devel, Tim Deegan

[-- Attachment #1: Type: text/plain, Size: 317 bytes --]


This contains pieces I missed in the previous patch. Sorry.



[-- Attachment #2: xen_superpage1.diff --]
[-- Type: text/plain, Size: 1269 bytes --]

Use defines for page sizes rather than hardcoding them.

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
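
For reference, the page orders used below are log2 of the number of 4 KiB
pages in a mapping; at the time they are defined in Xen's
xen/include/asm-x86/page.h roughly as follows (a sketch, not the verbatim
header):

    /* Page-order names for the three x86 mapping sizes. */
    #define PAGE_ORDER_4K  0   /* 2^0  pages = 4 KiB */
    #define PAGE_ORDER_2M  9   /* 2^9  pages = 2 MiB */
    #define PAGE_ORDER_1G  18  /* 2^18 pages = 1 GiB */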

diff -r 8a4fd41c18d9 -r ec93fa9caebb xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -881,7 +881,7 @@ void p2m_mem_access_check(unsigned long 
 
     if ( access_w && p2ma == p2m_access_rx2rw ) 
     {
-        p2m->set_entry(p2m, gfn, mfn, 0, p2mt, p2m_access_rw);
+        p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw);
         p2m_unlock(p2m);
         return;
     }
@@ -904,7 +904,7 @@ void p2m_mem_access_check(unsigned long 
         {
             /* A listener is not required, so clear the access restrictions */
             p2m_lock(p2m);
-            p2m->set_entry(p2m, gfn, mfn, 0, p2mt, p2m_access_rwx);
+            p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rwx);
             p2m_unlock(p2m);
         }
 
@@ -996,7 +996,7 @@ int p2m_set_mem_access(struct domain *d,
     for ( pfn = start_pfn; pfn < start_pfn + nr; pfn++ )
     {
         mfn = gfn_to_mfn_query(d, pfn, &t);
-        if ( p2m->set_entry(p2m, pfn, mfn, 0, t, a) == 0 )
+        if ( p2m->set_entry(p2m, pfn, mfn, PAGE_ORDER_4K, t, a) == 0 )
         {
             rc = -ENOMEM;
             break;



* Re: [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
  2011-08-26 14:45 [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them Christoph Egger
@ 2011-09-01  8:40 ` Tim Deegan
  2011-09-01  8:45   ` Tim Deegan
  0 siblings, 1 reply; 6+ messages in thread
From: Tim Deegan @ 2011-09-01  8:40 UTC (permalink / raw)
  To: Christoph Egger; +Cc: xen-devel, Tim Deegan

Content-Description: xen_superpage1.diff
> Use defines for page sizes rather than hardcoding them.
> 
> Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>

Applied, thanks.  

Tim.

-- 
Tim Deegan <tim@xen.org>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)


* Re: [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
  2011-09-01  8:40 ` Tim Deegan
@ 2011-09-01  8:45   ` Tim Deegan
  2011-09-05  9:02     ` Tim Deegan
  0 siblings, 1 reply; 6+ messages in thread
From: Tim Deegan @ 2011-09-01  8:45 UTC (permalink / raw)
  To: Christoph Egger; +Cc: xen-devel, Tim Deegan

At 09:40 +0100 on 01 Sep (1314870041), Tim Deegan wrote:
> Content-Description: xen_superpage1.diff
> > Use defines for page sizes rather than hardcoding them.
> > 
> > Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> 
> Applied, thanks.  

The rest of this series looks OK in principle but I haven't time to look
at the detail today (and I'm wondering whether there's a way of doing it
without adding yet another argument to the p2m interfaces, but I suspect
not).   I'll get to it as soon as I can.

Tim.


* Re: [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
  2011-09-01  8:45   ` Tim Deegan
@ 2011-09-05  9:02     ` Tim Deegan
  2011-09-07 13:44       ` Tim Deegan
  0 siblings, 1 reply; 6+ messages in thread
From: Tim Deegan @ 2011-09-05  9:02 UTC (permalink / raw)
  To: Christoph Egger; +Cc: xen-devel, Tim Deegan

At 09:45 +0100 on 01 Sep (1314870353), Tim Deegan wrote:
> At 09:40 +0100 on 01 Sep (1314870041), Tim Deegan wrote:
> > Content-Description: xen_superpage1.diff
> > > Use defines for page sizes rather than hardcoding them.
> > > 
> > > Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
> > 
> > Applied, thanks.  
> 
> The rest of this series looks OK in principle but I haven't time to look
> at the detail today (and I'm wondering whether there's a way of doing it
> without adding yet another argument to the p2m interfaces, but I suspect
> not).   I'll get to it as soon as I can.

I've had a look and the mechanism is good, but the patches are not quite
ready.  Patch #2 touches too much of the p2m interfaces with the new
page-order argument -- there are functions in it that now take a new
argument that is _never_ passed anything but NULL.

Patches #4 and #5 have a lot of churn in and around spage_* for what's
basically a mask operation on an MFN.  I don't think any of it is
necessary.  Also, please don't send patches that contain things like: 

+/* XXX: defines should be moved to a proper header */

It might make me think you didn't re-read them before posting. :)

I think they really just need to be trimmed back a bit before they go in.
I'll have some time on Wednesday, so I might just do that then.
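
For reference, the mask operation in question is just rounding a frame
number down to its superpage boundary -- in the form it eventually takes
in the nestedhap patch later in this thread:

    /* Round both frame numbers down to the start of a superpage
     * mapping of the given order. */
    unsigned long mask = ~((1UL << page_order) - 1);
    gfn = (L2_gpa >> PAGE_SHIFT) & mask;
    mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);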

Cheers,

Tim.


* Re: [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
  2011-09-05  9:02     ` Tim Deegan
@ 2011-09-07 13:44       ` Tim Deegan
  2011-09-08 14:05         ` Christoph Egger
  0 siblings, 1 reply; 6+ messages in thread
From: Tim Deegan @ 2011-09-07 13:44 UTC (permalink / raw)
  To: Christoph Egger; +Cc: xen-devel, Tim Deegan

[-- Attachment #1: Type: text/plain, Size: 398 bytes --]

At 10:02 +0100 on 05 Sep (1315216944), Tim Deegan wrote:
> I think they really just need to be trimmed back a bit before they go in.
> I'll have some time on Wednesday, so I might just do that then.

OK, I've cut them back to just the bits that are needed by the new
callers.  I ended up shuffling the code around as well, so could you
please test that the attached series works for you?

Cheers,

Tim.


[-- Attachment #2: p2m-order --]
[-- Type: text/plain, Size: 11449 bytes --]

x86/mm: adjust p2m interface to return superpage sizes

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Tim Deegan <tim@xen.org>
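
In brief, the p2m lookup functions gain an optional out-parameter that
reports the order of the (super)page backing the entry.  A minimal caller
sketch of the changed interface (hypothetical -- consume_mapping() is not
a real function):

    p2m_type_t t;
    p2m_access_t a;
    unsigned int order;  /* filled in on a successful lookup */
    mfn_t mfn = gfn_to_mfn_type_p2m(p2m, gfn, &t, &a, p2m_query, &order);
    if ( mfn_x(mfn) != INVALID_MFN )
        /* order is PAGE_ORDER_4K, PAGE_ORDER_2M or PAGE_ORDER_1G */
        consume_mapping(mfn, order);

Callers that don't care about the order simply pass NULL, as most of the
hunks below do.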

diff -r f1349a968a5a xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/hvm/hvm.c	Wed Sep 07 11:43:49 2011 +0100
@@ -1216,7 +1216,7 @@ int hvm_hap_nested_page_fault(unsigned l
     }
 
     p2m = p2m_get_hostp2m(v->domain);
-    mfn = gfn_to_mfn_type_p2m(p2m, gfn, &p2mt, &p2ma, p2m_guest);
+    mfn = gfn_to_mfn_type_p2m(p2m, gfn, &p2mt, &p2ma, p2m_guest, NULL);
 
     /* Check access permissions first, then handle faults */
     if ( access_valid && (mfn_x(mfn) != INVALID_MFN) )
diff -r f1349a968a5a xen/arch/x86/hvm/svm/svm.c
--- a/xen/arch/x86/hvm/svm/svm.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/hvm/svm/svm.c	Wed Sep 07 11:43:49 2011 +0100
@@ -1160,7 +1160,7 @@ static void svm_do_nested_pgfault(struct
         p2m = p2m_get_p2m(v);
         _d.gpa = gpa;
         _d.qualification = 0;
-        _d.mfn = mfn_x(gfn_to_mfn_type_p2m(p2m, gfn, &_d.p2mt, &p2ma, p2m_query));
+        _d.mfn = mfn_x(gfn_to_mfn_type_p2m(p2m, gfn, &_d.p2mt, &p2ma, p2m_query, NULL));
         
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
@@ -1180,7 +1180,7 @@ static void svm_do_nested_pgfault(struct
     if ( p2m == NULL )
         p2m = p2m_get_p2m(v);
     /* Everything else is an error. */
-    mfn = gfn_to_mfn_type_p2m(p2m, gfn, &p2mt, &p2ma, p2m_guest);
+    mfn = gfn_to_mfn_type_p2m(p2m, gfn, &p2mt, &p2ma, p2m_guest, NULL);
     gdprintk(XENLOG_ERR,
          "SVM violation gpa %#"PRIpaddr", mfn %#lx, type %i\n",
          gpa, mfn_x(mfn), p2mt);
diff -r f1349a968a5a xen/arch/x86/mm/guest_walk.c
--- a/xen/arch/x86/mm/guest_walk.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/guest_walk.c	Wed Sep 07 11:43:49 2011 +0100
@@ -95,7 +95,7 @@ static inline void *map_domain_gfn(struc
     p2m_access_t a;
 
     /* Translate the gfn, unsharing if shared */
-    *mfn = gfn_to_mfn_type_p2m(p2m, gfn_x(gfn), p2mt, &a, p2m_unshare);
+    *mfn = gfn_to_mfn_type_p2m(p2m, gfn_x(gfn), p2mt, &a, p2m_unshare, NULL);
     if ( p2m_is_paging(*p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
diff -r f1349a968a5a xen/arch/x86/mm/hap/guest_walk.c
--- a/xen/arch/x86/mm/hap/guest_walk.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/hap/guest_walk.c	Wed Sep 07 11:43:49 2011 +0100
@@ -59,7 +59,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PA
 
     /* Get the top-level table's MFN */
     top_mfn = gfn_to_mfn_type_p2m(p2m, cr3 >> PAGE_SHIFT, 
-                                  &p2mt, &p2ma, p2m_unshare);
+                                  &p2mt, &p2ma, p2m_unshare, NULL);
     if ( p2m_is_paging(p2mt) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
@@ -92,7 +92,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PA
     if ( missing == 0 )
     {
         gfn_t gfn = guest_l1e_get_gfn(gw.l1e);
-        gfn_to_mfn_type_p2m(p2m, gfn_x(gfn), &p2mt, &p2ma, p2m_unshare);
+        gfn_to_mfn_type_p2m(p2m, gfn_x(gfn), &p2mt, &p2ma, p2m_unshare, NULL);
         if ( p2m_is_paging(p2mt) )
         {
             ASSERT(!p2m_is_nestedp2m(p2m));
diff -r f1349a968a5a xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Wed Sep 07 11:43:49 2011 +0100
@@ -136,7 +136,8 @@ nestedhap_walk_L0_p2m(struct p2m_domain 
     p2m_access_t p2ma;
 
     /* walk L0 P2M table */
-    mfn = gfn_to_mfn_type_p2m(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, p2m_query);
+    mfn = gfn_to_mfn_type_p2m(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, 
+                              p2m_query, NULL);
 
     if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
         return NESTEDHVM_PAGEFAULT_ERROR;
diff -r f1349a968a5a xen/arch/x86/mm/p2m-ept.c
--- a/xen/arch/x86/mm/p2m-ept.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/p2m-ept.c	Wed Sep 07 11:43:49 2011 +0100
@@ -509,7 +509,7 @@ out:
 /* Read ept p2m entries */
 static mfn_t ept_get_entry(struct p2m_domain *p2m,
                            unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
-                           p2m_query_t q)
+                           p2m_query_t q, unsigned int *page_order)
 {
     struct domain *d = p2m->domain;
     ept_entry_t *table = map_domain_page(ept_get_asr(d));
@@ -596,6 +596,9 @@ static mfn_t ept_get_entry(struct p2m_do
                  ((1 << (i * EPT_TABLE_ORDER)) - 1));
             mfn = _mfn(split_mfn);
         }
+
+        if ( page_order )
+            *page_order = i * EPT_TABLE_ORDER;
     }
 
 out:
diff -r f1349a968a5a xen/arch/x86/mm/p2m-pt.c
--- a/xen/arch/x86/mm/p2m-pt.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/p2m-pt.c	Wed Sep 07 11:43:49 2011 +0100
@@ -503,7 +503,8 @@ static int p2m_pod_check_and_populate(st
 /* Read the current domain's p2m table (through the linear mapping). */
 static mfn_t p2m_gfn_to_mfn_current(struct p2m_domain *p2m, 
                                     unsigned long gfn, p2m_type_t *t, 
-                                    p2m_access_t *a, p2m_query_t q)
+                                    p2m_access_t *a, p2m_query_t q,
+                                    unsigned int *page_order)
 {
     mfn_t mfn = _mfn(INVALID_MFN);
     p2m_type_t p2mt = p2m_mmio_dm;
@@ -567,6 +568,8 @@ pod_retry_l3:
         else
             p2mt = p2m_mmio_dm;
             
+        if ( page_order )
+            *page_order = PAGE_ORDER_1G;
         goto out;
     }
 #endif
@@ -620,6 +623,8 @@ pod_retry_l2:
         else
             p2mt = p2m_mmio_dm;
 
+        if ( page_order )
+            *page_order = PAGE_ORDER_2M;
         goto out;
     }
 
@@ -669,6 +674,8 @@ pod_retry_l1:
             p2mt = p2m_mmio_dm;
     }
     
+    if ( page_order )
+        *page_order = PAGE_ORDER_4K;
 out:
     *t = p2mt;
     return mfn;
@@ -676,7 +683,8 @@ out:
 
 static mfn_t
 p2m_gfn_to_mfn(struct p2m_domain *p2m, unsigned long gfn, 
-               p2m_type_t *t, p2m_access_t *a, p2m_query_t q)
+               p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
+               unsigned int *page_order)
 {
     mfn_t mfn;
     paddr_t addr = ((paddr_t)gfn) << PAGE_SHIFT;
@@ -699,7 +707,7 @@ p2m_gfn_to_mfn(struct p2m_domain *p2m, u
 
     /* Use the fast path with the linear mapping if we can */
     if ( p2m == p2m_get_hostp2m(current->domain) )
-        return p2m_gfn_to_mfn_current(p2m, gfn, t, a, q);
+        return p2m_gfn_to_mfn_current(p2m, gfn, t, a, q, page_order);
 
     mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
 
@@ -753,6 +761,8 @@ pod_retry_l3:
             unmap_domain_page(l3e);
 
             ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
+            if ( page_order )
+                *page_order = PAGE_ORDER_1G;
             return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
         }
 
@@ -787,6 +797,8 @@ pod_retry_l2:
         unmap_domain_page(l2e);
         
         ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
+        if ( page_order )
+            *page_order = PAGE_ORDER_2M;
         return (p2m_is_valid(*t)) ? mfn : _mfn(INVALID_MFN);
     }
 
@@ -817,6 +829,8 @@ pod_retry_l1:
     unmap_domain_page(l1e);
 
     ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
+    if ( page_order )
+        *page_order = PAGE_ORDER_4K;
     return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : _mfn(INVALID_MFN);
 }
 
diff -r f1349a968a5a xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/arch/x86/mm/p2m.c	Wed Sep 07 11:43:49 2011 +0100
@@ -307,7 +307,7 @@ void p2m_teardown(struct p2m_domain *p2m
 #ifdef __x86_64__
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )
     {
-        mfn = gfn_to_mfn_type_p2m(p2m, gfn, &t, &a, p2m_query);
+        mfn = gfn_to_mfn_type_p2m(p2m, gfn, &t, &a, p2m_query, NULL);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
         {
             ASSERT(!p2m_is_nestedp2m(p2m));
@@ -372,7 +372,7 @@ p2m_remove_page(struct p2m_domain *p2m, 
     {
         for ( i = 0; i < (1UL << page_order); i++ )
         {
-            mfn_return = p2m->get_entry(p2m, gfn + i, &t, &a, p2m_query);
+            mfn_return = p2m->get_entry(p2m, gfn + i, &t, &a, p2m_query, NULL);
             if ( !p2m_is_grant(t) )
                 set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
             ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
@@ -877,7 +877,7 @@ void p2m_mem_access_check(unsigned long 
     
     /* First, handle rx2rw conversion automatically */
     p2m_lock(p2m);
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, p2m_query);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, p2m_query, NULL);
 
     if ( access_w && p2ma == p2m_access_rx2rw ) 
     {
@@ -1035,7 +1035,7 @@ int p2m_get_mem_access(struct domain *d,
         return 0;
     }
 
-    mfn = p2m->get_entry(p2m, pfn, &t, &a, p2m_query);
+    mfn = p2m->get_entry(p2m, pfn, &t, &a, p2m_query, NULL);
     if ( mfn_x(mfn) == INVALID_MFN )
         return -ESRCH;
     
diff -r f1349a968a5a xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h	Fri Sep 02 14:56:26 2011 +0100
+++ b/xen/include/asm-x86/p2m.h	Wed Sep 07 11:43:49 2011 +0100
@@ -233,7 +233,8 @@ struct p2m_domain {
                                        unsigned long gfn,
                                        p2m_type_t *p2mt,
                                        p2m_access_t *p2ma,
-                                       p2m_query_t q);
+                                       p2m_query_t q,
+                                       unsigned int *page_order);
     void               (*change_entry_type_global)(struct p2m_domain *p2m,
                                                    p2m_type_t ot,
                                                    p2m_type_t nt);
@@ -303,10 +304,14 @@ struct p2m_domain *p2m_get_p2m(struct vc
 /* Read a particular P2M table, mapping pages as we go.  Most callers
  * should _not_ call this directly; use the other gfn_to_mfn_* functions
  * below unless you know you want to walk a p2m that isn't a domain's
- * main one. */
+ * main one.
+ * If the lookup succeeds, the return value is != INVALID_MFN and 
+ * *page_order is filled in with the order of the superpage (if any) that
+ * the entry was found in.  */
 static inline mfn_t
 gfn_to_mfn_type_p2m(struct p2m_domain *p2m, unsigned long gfn,
-                    p2m_type_t *t, p2m_access_t *a, p2m_query_t q)
+                    p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
+                    unsigned int *page_order)
 {
     mfn_t mfn;
 
@@ -318,14 +323,14 @@ gfn_to_mfn_type_p2m(struct p2m_domain *p
         return _mfn(gfn);
     }
 
-    mfn = p2m->get_entry(p2m, gfn, t, a, q);
+    mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order);
 
 #ifdef __x86_64__
     if ( q == p2m_unshare && p2m_is_shared(*t) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
         mem_sharing_unshare_page(p2m->domain, gfn, 0);
-        mfn = p2m->get_entry(p2m, gfn, t, a, q);
+        mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order);
     }
 #endif
 
@@ -349,7 +354,7 @@ static inline mfn_t gfn_to_mfn_type(stru
                                     p2m_query_t q)
 {
     p2m_access_t a;
-    return gfn_to_mfn_type_p2m(p2m_get_hostp2m(d), gfn, t, &a, q);
+    return gfn_to_mfn_type_p2m(p2m_get_hostp2m(d), gfn, t, &a, q, NULL);
 }
 
 /* Syntactic sugar: most callers will use one of these. 

[-- Attachment #3: paging-order --]
[-- Type: text/plain, Size: 6882 bytes --]

x86/mm: adjust paging interface to return superpage sizes
to the caller of paging_ga_to_gfn_cr3()

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Tim Deegan <tim@xen.org>
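
The same pattern one layer up: a caller of paging_ga_to_gfn_cr3() that
wants to know the mapping size passes a non-NULL page_order.  A
hypothetical sketch of the changed interface:

    uint32_t pfec;
    unsigned int order;
    unsigned long gfn = paging_ga_to_gfn_cr3(v, cr3, ga, &pfec, &order);
    if ( gfn != INVALID_GFN )
    {
        /* ga was translated through a guest mapping of 2^order
         * (4 KiB) pages; order comes from guest_walk_to_page_order(). */
    }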

diff -r e6ae91f100d0 xen/arch/x86/mm/hap/guest_walk.c
--- a/xen/arch/x86/mm/hap/guest_walk.c	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/arch/x86/mm/hap/guest_walk.c	Wed Sep 07 14:09:01 2011 +0100
@@ -43,12 +43,12 @@ unsigned long hap_gva_to_gfn(GUEST_PAGIN
     struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
 {
     unsigned long cr3 = v->arch.hvm_vcpu.guest_cr[3];
-    return hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(v, p2m, cr3, gva, pfec);
+    return hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(v, p2m, cr3, gva, pfec, NULL);
 }
 
 unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     struct vcpu *v, struct p2m_domain *p2m, unsigned long cr3,
-    paddr_t ga, uint32_t *pfec)
+    paddr_t ga, uint32_t *pfec, unsigned int *page_order)
 {
     uint32_t missing;
     mfn_t top_mfn;
@@ -107,6 +108,9 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PA
             return INVALID_GFN;
         }
 
+        if ( page_order )
+            *page_order = guest_walk_to_page_order(&gw);
+
         return gfn_x(gfn);
     }
 
diff -r e6ae91f100d0 xen/arch/x86/mm/hap/hap.c
--- a/xen/arch/x86/mm/hap/hap.c	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/arch/x86/mm/hap/hap.c	Wed Sep 07 14:09:01 2011 +0100
@@ -897,8 +897,10 @@ static unsigned long hap_gva_to_gfn_real
 
 static unsigned long hap_p2m_ga_to_gfn_real_mode(
     struct vcpu *v, struct p2m_domain *p2m, unsigned long cr3,
-    paddr_t ga, uint32_t *pfec)
+    paddr_t ga, uint32_t *pfec, unsigned int *page_order)
 {
+    if ( page_order )
+        *page_order = PAGE_ORDER_4K;
     return (ga >> PAGE_SHIFT);
 }
 
diff -r e6ae91f100d0 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Wed Sep 07 14:09:01 2011 +0100
@@ -162,7 +162,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
     nested_cr3 = nhvm_vcpu_hostcr3(v);
 
     /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec);
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, NULL);
 
     if ( gfn == INVALID_GFN ) 
         return NESTEDHVM_PAGEFAULT_INJECT;
diff -r e6ae91f100d0 xen/arch/x86/mm/hap/private.h
--- a/xen/arch/x86/mm/hap/private.h	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/arch/x86/mm/hap/private.h	Wed Sep 07 14:09:01 2011 +0100
@@ -40,12 +40,12 @@ unsigned long hap_gva_to_gfn_4_levels(st
 
 unsigned long hap_p2m_ga_to_gfn_2_levels(struct vcpu *v,
     struct p2m_domain *p2m, unsigned long cr3,
-    paddr_t ga, uint32_t *pfec);
+    paddr_t ga, uint32_t *pfec, unsigned int *page_order);
 unsigned long hap_p2m_ga_to_gfn_3_levels(struct vcpu *v,
     struct p2m_domain *p2m, unsigned long cr3,
-    paddr_t ga, uint32_t *pfec);
+    paddr_t ga, uint32_t *pfec, unsigned int *page_order);
 unsigned long hap_p2m_ga_to_gfn_4_levels(struct vcpu *v,
     struct p2m_domain *p2m, unsigned long cr3,
-    paddr_t ga, uint32_t *pfec);
+    paddr_t ga, uint32_t *pfec, unsigned int *page_order);
 
 #endif /* __HAP_PRIVATE_H__ */
diff -r e6ae91f100d0 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/arch/x86/mm/p2m.c	Wed Sep 07 14:09:01 2011 +0100
@@ -1205,7 +1205,7 @@ unsigned long paging_gva_to_gfn(struct v
 
         /* translate l2 guest gfn into l1 guest gfn */
         return hostmode->p2m_ga_to_gfn(v, hostp2m, ncr3,
-            gfn << PAGE_SHIFT, pfec);
+                                       gfn << PAGE_SHIFT, pfec, NULL);
     }
 
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
diff -r e6ae91f100d0 xen/include/asm-x86/guest_pt.h
--- a/xen/include/asm-x86/guest_pt.h	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/include/asm-x86/guest_pt.h	Wed Sep 07 14:09:01 2011 +0100
@@ -272,6 +272,24 @@ guest_walk_to_gpa(walk_t *gw)
     return guest_l1e_get_paddr(gw->l1e) + (gw->va & ~PAGE_MASK);
 }
 
+/* Given a walk_t from a successful walk, return the page-order of the 
+ * page or superpage that the virtual address is in. */
+static inline unsigned int 
+guest_walk_to_page_order(walk_t *gw)
+{
+    /* This is only valid for successful walks - otherwise the 
+     * PSE bits might be invalid. */
+    ASSERT(guest_l1e_get_flags(gw->l1e) & _PAGE_PRESENT);
+#if GUEST_PAGING_LEVELS >= 3
+    if ( guest_l3e_get_flags(gw->l3e) & _PAGE_PSE )
+        return GUEST_L3_PAGETABLE_SHIFT - PAGE_SHIFT;
+#endif
+    if ( guest_l2e_get_flags(gw->l2e) & _PAGE_PSE )
+        return GUEST_L2_PAGETABLE_SHIFT - PAGE_SHIFT;
+    return GUEST_L1_PAGETABLE_SHIFT - PAGE_SHIFT;
+}
+
+
 /* Walk the guest pagetables, after the manner of a hardware walker. 
  *
  * Inputs: a vcpu, a virtual address, a walk_t to fill, a 
diff -r e6ae91f100d0 xen/include/asm-x86/paging.h
--- a/xen/include/asm-x86/paging.h	Wed Sep 07 11:43:49 2011 +0100
+++ b/xen/include/asm-x86/paging.h	Wed Sep 07 14:09:01 2011 +0100
@@ -115,7 +115,8 @@ struct paging_mode {
     unsigned long (*p2m_ga_to_gfn         )(struct vcpu *v,
                                             struct p2m_domain *p2m,
                                             unsigned long cr3,
-                                            paddr_t ga, uint32_t *pfec);
+                                            paddr_t ga, uint32_t *pfec,
+                                            unsigned int *page_order);
     void          (*update_cr3            )(struct vcpu *v, int do_locking);
     void          (*update_paging_modes   )(struct vcpu *v);
     void          (*write_p2m_entry       )(struct vcpu *v, unsigned long gfn,
@@ -270,15 +271,18 @@ unsigned long paging_gva_to_gfn(struct v
  * to by nested HAP code, to walk the guest-supplied NPT tables as if
  * they were pagetables.
  * Use 'paddr_t' for the guest address so it won't overflow when
- * guest or nested guest is in 32bit PAE mode.
- */
+ * l1 or l2 guest is in 32bit PAE mode.
+ * If the GFN returned is not INVALID_GFN, *page_order gives
+ * the size of the superpage (if any) it was found in. */
 static inline unsigned long paging_ga_to_gfn_cr3(struct vcpu *v,
                                                  unsigned long cr3,
                                                  paddr_t ga,
-                                                 uint32_t *pfec)
+                                                 uint32_t *pfec,
+                                                 unsigned int *page_order)
 {
     struct p2m_domain *p2m = v->domain->arch.p2m;
-    return paging_get_hostmode(v)->p2m_ga_to_gfn(v, p2m, cr3, ga, pfec);
+    return paging_get_hostmode(v)->p2m_ga_to_gfn(v, p2m, cr3, ga, pfec,
+        page_order);
 }
 
 /* Update all the things that are derived from the guest's CR3.

[-- Attachment #4: nestedhap-order --]
[-- Type: text/plain, Size: 4423 bytes --]

x86/mm: use new page-order interfaces in nested HAP code
to make 2M and 1G mappings in the nested p2m tables.

Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Signed-off-by: Tim Deegan <tim@xen.org>
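
The reason both walks now report an order: a nested (L2->L0) mapping can
only be as large as the smaller of the two translations it is built from.
For example, if the L1 guest maps L2_gpa with a 2 MiB page (order 9) and
the host maps L1_gpa with a 1 GiB page (order 18), the nested p2m entry
may use at most order 9 -- hence the min() in the hunk below:

    /* The combined L2->L0 order is limited by both translations. */
    page_order_20 = min(page_order_21, page_order_10);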

diff -r 3ccbaa111cb8 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Wed Sep 07 14:26:44 2011 +0100
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Wed Sep 07 14:27:57 2011 +0100
@@ -99,7 +99,7 @@ nestedp2m_write_p2m_entry(struct p2m_dom
 static void
 nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m, 
                   paddr_t L2_gpa, paddr_t L0_gpa,
-                  p2m_type_t p2mt, p2m_access_t p2ma)
+                  unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     int rv = 1;
     ASSERT(p2m);
@@ -111,9 +111,20 @@ nestedhap_fix_p2m(struct vcpu *v, struct
      * leave it alone.  We'll pick up the right one as we try to 
      * vmenter the guest. */
     if ( p2m->cr3 == nhvm_vcpu_hostcr3(v) )
-         rv = set_p2m_entry(p2m, L2_gpa >> PAGE_SHIFT,
-                            page_to_mfn(maddr_to_page(L0_gpa)),
-                            0 /*4K*/, p2mt, p2ma);
+    {
+        unsigned long gfn, mask;
+        mfn_t mfn;
+
+        /* If this is a superpage mapping, round down both addresses
+         * to the start of the superpage. */
+        mask = ~((1UL << page_order) - 1);
+
+        gfn = (L2_gpa >> PAGE_SHIFT) & mask;
+        mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
+
+        rv = set_p2m_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+    }
+
     p2m_unlock(p2m);
 
     if (rv == 0) {
@@ -129,7 +140,8 @@ nestedhap_fix_p2m(struct vcpu *v, struct
  * value tells the upper level what to do.
  */
 static int
-nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa)
+nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
+                      unsigned int *page_order)
 {
     mfn_t mfn;
     p2m_type_t p2mt;
@@ -137,7 +149,7 @@ nestedhap_walk_L0_p2m(struct p2m_domain 
 
     /* walk L0 P2M table */
     mfn = gfn_to_mfn_type_p2m(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, 
-                              p2m_query, NULL);
+                              p2m_query, page_order);
 
     if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
         return NESTEDHVM_PAGEFAULT_ERROR;
@@ -154,7 +166,8 @@ nestedhap_walk_L0_p2m(struct p2m_domain 
  * L1_gpa. The result value tells what to do next.
  */
 static int
-nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa)
+nestedhap_walk_L1_p2m(struct vcpu *v, paddr_t L2_gpa, paddr_t *L1_gpa,
+                      unsigned int *page_order)
 {
     uint32_t pfec;
     unsigned long nested_cr3, gfn;
@@ -162,7 +175,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
     nested_cr3 = nhvm_vcpu_hostcr3(v);
 
     /* Walk the guest-supplied NPT table, just as if it were a pagetable */
-    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, NULL);
+    gfn = paging_ga_to_gfn_cr3(v, nested_cr3, L2_gpa, &pfec, page_order);
 
     if ( gfn == INVALID_GFN ) 
         return NESTEDHVM_PAGEFAULT_INJECT;
@@ -183,12 +196,13 @@ nestedhvm_hap_nested_page_fault(struct v
     paddr_t L1_gpa, L0_gpa;
     struct domain *d = v->domain;
     struct p2m_domain *p2m, *nested_p2m;
+    unsigned int page_order_21, page_order_10, page_order_20;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa);
+    rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa, &page_order_21);
 
     /* let caller to handle these two cases */
     switch (rv) {
@@ -204,7 +218,7 @@ nestedhvm_hap_nested_page_fault(struct v
     }
 
     /* ==> we have to walk L0 P2M */
-    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa);
+    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
 
     /* let upper level caller to handle these two cases */
     switch (rv) {
@@ -219,8 +233,10 @@ nestedhvm_hap_nested_page_fault(struct v
         break;
     }
 
+    page_order_20 = min(page_order_21, page_order_10);
+
     /* fix p2m_get_pagetable(nested_p2m) */
-    nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa,
+    nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
         p2m_ram_rw,
         p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
 



* Re: [PATCH 01/04] p2m: use defines for page sizes rather than hardcoding them
  2011-09-07 13:44       ` Tim Deegan
@ 2011-09-08 14:05         ` Christoph Egger
  0 siblings, 0 replies; 6+ messages in thread
From: Christoph Egger @ 2011-09-08 14:05 UTC (permalink / raw)
  To: Tim Deegan; +Cc: xen-devel

On 09/07/11 15:44, Tim Deegan wrote:
> At 10:02 +0100 on 05 Sep (1315216944), Tim Deegan wrote:
>> I think they really just need to be trimmed back a bit before they go in.
>> I'll have some time on Wednesday, so I might just do that then.
>
> OK, I've cut them back to just the bits that are needed by the new
> callers.  I ended up shuffling the code around as well, so could you
> please test that the attached series works for you?

Yes, the patch series works very well.
Please apply it.

Acked-by: Christoph Egger <Christoph.Egger@amd.com>




