* [PATCH v2 00/16] xen/x86: Clean-up the PoD code
@ 2017-09-21 12:40 Julien Grall
  2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
                   ` (15 more replies)
  0 siblings, 16 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Jan Beulich, Razvan Cojocaru,
	George Dunlap, Andrew Cooper, Julien Grall, Jun Nakajima

Hi all,

I have been attempting to use the PoD code on Arm (that work will be sent
in a separate series) and spent some time cleaning up the current code and
switching it to typesafe gfn.
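
For readers not familiar with the typesafe wrappers, the idea is roughly
the following (a simplified sketch of Xen's TYPE_SAFE machinery, not code
from this series):

    /* A gfn becomes an opaque struct rather than a bare unsigned long. */
    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

    /* Mixing frame-number spaces now needs an explicit conversion: */
    static inline gfn_t next_gfn(gfn_t gfn)
    {
        return _gfn(gfn_x(gfn) + 1); /* unbox, add, re-box */
    }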

The PoD code has been tested on Arm (where the version is slightly
different, mostly renaming), while the x86 part has only been build tested.

Cheers,

Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>

Julien Grall (16):
  xen/x86: p2m-pod: Clean-up includes
  xen/x86: p2m-pod: Remove trailing whitespaces
  xen/x86: p2m-pod: Fix coding style for comments
  xen/x86: p2m-pod: Fix coding style
  xen/x86: p2m-pod: Avoid redundant assignments in
    p2m_pod_demand_populate
  xen/x86: p2m-pod: Clean-up use of typesafe MFN
  xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation
  xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and
    set_entry
  xen/x86: p2m: Use typesafe GFN in p2m_set_entry
  xen/x86: p2m-pod: Use typesafe GFN in pod_eager_record
  xen/x86: p2m-pod: Clean-up p2m_pod_zero_check
  xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check
  xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate
  xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and
    max_guest
  xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate
  xen/x86: mem_access: Fix mis-indented line

 xen/arch/arm/p2m.c               |   3 +-
 xen/arch/x86/hvm/hvm.c           |   2 +-
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/mem_access.c     |  21 +-
 xen/arch/x86/mm/mem_sharing.c    |   7 +-
 xen/arch/x86/mm/p2m-ept.c        |  11 +-
 xen/arch/x86/mm/p2m-pod.c        | 435 +++++++++++++++++++++------------------
 xen/arch/x86/mm/p2m-pt.c         |  12 +-
 xen/arch/x86/mm/p2m.c            | 139 +++++++------
 xen/common/memory.c              |   3 +-
 xen/include/asm-arm/p2m.h        |  13 --
 xen/include/asm-x86/p2m.h        |  23 +--
 xen/include/xen/p2m-common.h     |  13 ++
 13 files changed, 371 insertions(+), 313 deletions(-)

-- 
2.11.0


* [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:23   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces Julien Grall
                   ` (14 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

A lot of the included headers are not necessary. At the same time, sort
the ones that remain alphabetically.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 4085b7f752..fec87e5224 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -19,18 +19,13 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/iommu.h>
-#include <xen/vm_event.h>
 #include <xen/event.h>
-#include <public/vm_event.h>
-#include <asm/domain.h>
+#include <xen/mm.h>
+#include <xen/sched.h>
+#include <xen/trace.h>
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
-#include <asm/mem_sharing.h>
-#include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/svm/amd-iommu-proto.h>
 
 #include "mm-locks.h"
 
-- 
2.11.0


* [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
  2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:25   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments Julien Grall
                   ` (13 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index fec87e5224..1f07441259 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1,7 +1,7 @@
 /******************************************************************************
  * arch/x86/mm/p2m-pod.c
  *
- * Populate-on-demand p2m entries. 
+ * Populate-on-demand p2m entries.
  *
  * Copyright (c) 2009-2011 Citrix Systems, Inc.
  *
@@ -76,7 +76,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
                __func__, mfn_x(mfn), order, ((1UL << order) - 1));
         return -1;
     }
-    
+
     for(i=0; i < 1 << order ; i++) {
         struct domain * od;
 
@@ -223,8 +223,8 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
                 /* If we can't allocate a superpage, try singleton pages */
                 order = PAGE_ORDER_4K;
                 goto retry;
-            }   
-            
+            }
+
             printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
                    __func__, pod_target, p2m->pod.count);
             ret = -ENOMEM;
@@ -272,7 +272,7 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
 
             if ( test_and_clear_bit(_PGT_pinned, &(page+i)->u.inuse.type_info) )
                 put_page_and_type(page+i);
-            
+
             if ( test_and_clear_bit(_PGC_allocated, &(page+i)->count_info) )
                 put_page(page+i);
 
@@ -296,7 +296,7 @@ out:
  * definitions:
  * + M: static_max
  * + B: number of pages the balloon driver has ballooned down to.
- * + P: Number of populated pages. 
+ * + P: Number of populated pages.
  * + T: Old target
  * + T': New target
  *
@@ -311,10 +311,10 @@ out:
  *   the remainder of the ram to the guest OS.
  *  T <T'<B : Increase PoD cache size.
  *  T'<T<=B : Here we have a choice.  We can decrease the size of the cache,
- *   get the memory right away.  However, that means every time we 
- *   reduce the memory target we risk the guest attempting to populate the 
+ *   get the memory right away.  However, that means every time we
+ *   reduce the memory target we risk the guest attempting to populate the
  *   memory before the balloon driver has reached its new target.  Safer to
- *   never reduce the cache size here, but only when the balloon driver frees 
+ *   never reduce the cache size here, but only when the balloon driver frees
  *   PoD ranges.
  *
  * If there are many zero pages, we could reach the target also by doing
@@ -511,7 +511,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     long pod, nonpod, ram;
 
     gfn_lock(p2m, gpfn, order);
-    pod_lock(p2m);    
+    pod_lock(p2m);
 
     /* If we don't have any outstanding PoD entries, let things take their
      * course */
@@ -629,7 +629,7 @@ p2m_pod_decrease_reservation(struct domain *d,
             nonpod -= n;
             ram -= n;
         }
-    }    
+    }
 
     /* If there are no more non-PoD entries, tell decrease_reservation() that
      * there's nothing left to do. */
@@ -682,7 +682,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     if ( paging_mode_shadow(d) )
         max_ref++;
 
-    /* NOTE: this is why we don't enforce deadlock constraints between p2m 
+    /* NOTE: this is why we don't enforce deadlock constraints between p2m
      * and pod locks */
     gfn_lock(p2m, gfn, SUPERPAGE_ORDER);
 
@@ -690,7 +690,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
      * and aligned, and mapping them. */
     for ( i = 0; i < SUPERPAGE_PAGES; i += n )
     {
-        p2m_access_t a; 
+        p2m_access_t a;
         unsigned int cur_order;
         unsigned long k;
         const struct page_info *page;
@@ -807,7 +807,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 out_reset:
     if ( reset )
         p2m_set_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
-    
+
 out:
     gfn_unlock(p2m, gfn, SUPERPAGE_ORDER);
     return ret;
@@ -836,8 +836,8 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         /* If this is ram, and not a pagetable or from the xen heap, and probably not mapped
            elsewhere, map it; otherwise, skip. */
         if ( p2m_is_ram(types[i])
-             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 ) 
-             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 ) 
+             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
+             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
              && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
             map[i] = map_domain_page(mfns[i]);
         else
@@ -915,7 +915,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
                 t.mfn = mfn_x(mfns[i]);
                 t.d = d->domain_id;
                 t.order = 0;
-        
+
                 __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
             }
 
@@ -924,7 +924,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             p2m->pod.entry_count++;
         }
     }
-    
+
 }
 
 #define POD_SWEEP_LIMIT 1024
@@ -1046,12 +1046,12 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     pod_lock(p2m);
 
     /* This check is done with the pod lock held.  This will make sure that
-     * even if d->is_dying changes under our feet, p2m_pod_empty_cache() 
+     * even if d->is_dying changes under our feet, p2m_pod_empty_cache()
      * won't start until we're done. */
     if ( unlikely(d->is_dying) )
         goto out_fail;
 
-    
+
     /* Because PoD does not have cache list for 1GB pages, it has to remap
      * 1GB region to 2MB chunks for a retry. */
     if ( order == PAGE_ORDER_1G )
@@ -1107,7 +1107,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
         set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_aligned + i);
         paging_mark_dirty(d, mfn_add(mfn, i));
     }
-    
+
     p2m->pod.entry_count -= (1 << order);
     BUG_ON(p2m->pod.entry_count < 0);
 
@@ -1124,7 +1124,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
         t.mfn = mfn_x(mfn);
         t.d = d->domain_id;
         t.order = order;
-        
+
         __trace_var(TRC_MEM_POD_POPULATE, 0, sizeof(t), &t);
     }
 
@@ -1161,7 +1161,7 @@ remap_and_retry:
 
         t.gfn = gfn;
         t.d = d->domain_id;
-        
+
         __trace_var(TRC_MEM_POD_SUPERPAGE_SPLINTER, 0, sizeof(t), &t);
     }
 
-- 
2.11.0


* [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
  2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
  2017-09-21 12:40 ` [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:28   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
                   ` (12 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 154 ++++++++++++++++++++++++++++++----------------
 1 file changed, 102 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 1f07441259..6f045081b5 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -155,8 +155,10 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
 
         BUG_ON( page_list_empty(&p2m->pod.super) );
 
-        /* Break up a superpage to make single pages. NB count doesn't
-         * need to be adjusted. */
+        /*
+         * Break up a superpage to make single pages. NB count doesn't
+         * need to be adjusted.
+         */
         p = page_list_remove_head(&p2m->pod.super);
         mfn = mfn_x(page_to_mfn(p));
 
@@ -242,8 +244,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
     }
 
     /* Decreasing the target */
-    /* We hold the pod lock here, so we don't need to worry about
-     * cache disappearing under our feet. */
+    /*
+     * We hold the pod lock here, so we don't need to worry about
+     * cache disappearing under our feet.
+     */
     while ( pod_target < p2m->pod.count )
     {
         struct page_info * page;
@@ -345,15 +349,19 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
     if ( d->is_dying )
         goto out;
 
-    /* T' < B: Don't reduce the cache size; let the balloon driver
-     * take care of it. */
+    /*
+     * T' < B: Don't reduce the cache size; let the balloon driver
+     * take care of it.
+     */
     if ( target < d->tot_pages )
         goto out;
 
     pod_target = target - populated;
 
-    /* B < T': Set the cache size equal to # of outstanding entries,
-     * let the balloon driver fill in the rest. */
+    /*
+     * B < T': Set the cache size equal to # of outstanding entries,
+     * let the balloon driver fill in the rest.
+     */
     if ( populated > 0 && pod_target > p2m->pod.entry_count )
         pod_target = p2m->pod.entry_count;
 
@@ -491,7 +499,8 @@ static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
 
 
-/* This function is needed for two reasons:
+/*
+ * This function is needed for two reasons:
  * + To properly handle clearing of PoD entries
  * + To "steal back" memory being freed for the PoD cache, rather than
  *   releasing it.
@@ -513,8 +522,10 @@ p2m_pod_decrease_reservation(struct domain *d,
     gfn_lock(p2m, gpfn, order);
     pod_lock(p2m);
 
-    /* If we don't have any outstanding PoD entries, let things take their
-     * course */
+    /*
+     * If we don't have any outstanding PoD entries, let things take their
+     * course.
+     */
     if ( p2m->pod.entry_count == 0 )
         goto out_unlock;
 
@@ -550,8 +561,10 @@ p2m_pod_decrease_reservation(struct domain *d,
 
     if ( !nonpod )
     {
-        /* All PoD: Mark the whole region invalid and tell caller
-         * we're done. */
+        /*
+         * All PoD: Mark the whole region invalid and tell caller
+         * we're done.
+         */
         p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count-=(1<<order);
@@ -568,15 +581,16 @@ p2m_pod_decrease_reservation(struct domain *d,
      * - order >= SUPERPAGE_ORDER (the loop below will take care of this)
      * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
      */
-    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1 << order) &&
+    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
          p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
     {
-        pod = 1 << order;
+        pod = 1UL << order;
         ram = nonpod = 0;
         ASSERT(steal_for_cache == (p2m->pod.entry_count > p2m->pod.count));
     }
 
-    /* Process as long as:
+    /*
+     * Process as long as:
      * + There are PoD entries to handle, or
      * + There is ram left, and we want to steal it
      */
@@ -631,8 +645,10 @@ p2m_pod_decrease_reservation(struct domain *d,
         }
     }
 
-    /* If there are no more non-PoD entries, tell decrease_reservation() that
-     * there's nothing left to do. */
+    /*
+     * If there are no more non-PoD entries, tell decrease_reservation() that
+     * there's nothing left to do.
+     */
     if ( nonpod == 0 )
         ret = 1;
 
@@ -658,9 +674,11 @@ void p2m_pod_dump_data(struct domain *d)
 }
 
 
-/* Search for all-zero superpages to be reclaimed as superpages for the
+/*
+ * Search for all-zero superpages to be reclaimed as superpages for the
  * PoD cache. Must be called w/ pod lock held, must lock the superpage
- * in the p2m */
+ * in the p2m.
+ */
 static int
 p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 {
@@ -682,12 +700,16 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     if ( paging_mode_shadow(d) )
         max_ref++;
 
-    /* NOTE: this is why we don't enforce deadlock constraints between p2m
-     * and pod locks */
+    /*
+     * NOTE: this is why we don't enforce deadlock constraints between p2m
+     * and pod locks.
+     */
     gfn_lock(p2m, gfn, SUPERPAGE_ORDER);
 
-    /* Look up the mfns, checking to make sure they're the same mfn
-     * and aligned, and mapping them. */
+    /*
+     * Look up the mfns, checking to make sure they're the same mfn
+     * and aligned, and mapping them.
+     */
     for ( i = 0; i < SUPERPAGE_PAGES; i += n )
     {
         p2m_access_t a;
@@ -697,7 +719,8 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 
         mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, &cur_order, NULL);
 
-        /* Conditions that must be met for superpage-superpage:
+        /*
+         * Conditions that must be met for superpage-superpage:
          * + All gfns are ram types
          * + All gfns have the same type
          * + All of the mfns are allocated to a domain
@@ -751,9 +774,11 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
                   p2m_populate_on_demand, p2m->default_access);
     p2m_tlb_flush_sync(p2m);
 
-    /* Make none of the MFNs are used elsewhere... for example, mapped
+    /*
+     * Make none of the MFNs are used elsewhere... for example, mapped
      * via the grant table interface, or by qemu.  Allow one refcount for
-     * being allocated to the domain. */
+     * being allocated to the domain.
+     */
     for ( i=0; i < SUPERPAGE_PAGES; i++ )
     {
         mfn = _mfn(mfn_x(mfn0) + i);
@@ -797,8 +822,10 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
         __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
     }
 
-    /* Finally!  We've passed all the checks, and can add the mfn superpage
-     * back on the PoD cache, and account for the new p2m PoD entries */
+    /*
+     * Finally!  We've passed all the checks, and can add the mfn superpage
+     * back on the PoD cache, and account for the new p2m PoD entries.
+     */
     p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
     p2m->pod.entry_count += SUPERPAGE_PAGES;
 
@@ -833,8 +860,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
     {
         p2m_access_t a;
         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
-        /* If this is ram, and not a pagetable or from the xen heap, and probably not mapped
-           elsewhere, map it; otherwise, skip. */
+        /*
+         * If this is ram, and not a pagetable or from the xen heap, and
+         * probably not mapped elsewhere, map it; otherwise, skip.
+         */
         if ( p2m_is_ram(types[i])
              && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
              && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
@@ -844,8 +873,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             map[i] = NULL;
     }
 
-    /* Then, go through and check for zeroed pages, removing write permission
-     * for those with zeroes. */
+    /*
+     * Then, go through and check for zeroed pages, removing write permission
+     * for those with zeroes.
+     */
     for ( i=0; i<count; i++ )
     {
         if(!map[i])
@@ -867,8 +898,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
-        /* See if the page was successfully unmapped.  (Allow one refcount
-         * for being allocated to a domain.) */
+        /*
+         * See if the page was successfully unmapped.  (Allow one refcount
+         * for being allocated to a domain.)
+         */
         if ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) > 1 )
         {
             unmap_domain_page(map[i]);
@@ -895,8 +928,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
 
         unmap_domain_page(map[i]);
 
-        /* See comment in p2m_pod_zero_check_superpage() re gnttab
-         * check timing.  */
+        /*
+         * See comment in p2m_pod_zero_check_superpage() re gnttab
+         * check timing.
+         */
         if ( j < PAGE_SIZE/sizeof(*map[i]) )
         {
             p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
@@ -944,9 +979,11 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
     limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
 
     /* FIXME: Figure out how to avoid superpages */
-    /* NOTE: Promote to globally locking the p2m. This will get complicated
+    /*
+     * NOTE: Promote to globally locking the p2m. This will get complicated
      * in a fine-grained scenario. If we lock each gfn individually we must be
-     * careful about spinlock recursion limits and POD_SWEEP_STRIDE. */
+     * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
+     */
     p2m_lock(p2m);
     for ( i=p2m->pod.reclaim_single; i > 0 ; i-- )
     {
@@ -963,11 +1000,13 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
                 j = 0;
             }
         }
-        /* Stop if we're past our limit and we have found *something*.
+        /*
+         * Stop if we're past our limit and we have found *something*.
          *
          * NB that this is a zero-sum game; we're increasing our cache size
          * by re-increasing our 'debt'.  Since we hold the pod lock,
-         * (entry_count - count) must remain the same. */
+         * (entry_count - count) must remain the same.
+         */
         if ( i < limit && (p2m->pod.count > 0 || hypercall_preempt_check()) )
             break;
     }
@@ -1045,20 +1084,25 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
 
-    /* This check is done with the pod lock held.  This will make sure that
+    /*
+     * This check is done with the pod lock held.  This will make sure that
      * even if d->is_dying changes under our feet, p2m_pod_empty_cache()
-     * won't start until we're done. */
+     * won't start until we're done.
+     */
     if ( unlikely(d->is_dying) )
         goto out_fail;
 
 
-    /* Because PoD does not have cache list for 1GB pages, it has to remap
-     * 1GB region to 2MB chunks for a retry. */
+    /*
+     * Because PoD does not have cache list for 1GB pages, it has to remap
+     * 1GB region to 2MB chunks for a retry.
+     */
     if ( order == PAGE_ORDER_1G )
     {
         pod_unlock(p2m);
         gfn_aligned = (gfn >> order) << order;
-        /* Note that we are supposed to call p2m_set_entry() 512 times to
+        /*
+         * Note that we are supposed to call p2m_set_entry() 512 times to
          * split 1GB into 512 2MB pages here. But We only do once here because
          * p2m_set_entry() should automatically shatter the 1GB page into
          * 512 2MB pages. The rest of 511 calls are unnecessary.
@@ -1075,8 +1119,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     if ( p2m->pod.entry_count > p2m->pod.count )
         pod_eager_reclaim(p2m);
 
-    /* Only sweep if we're actually out of memory.  Doing anything else
-     * causes unnecessary time and fragmentation of superpages in the p2m. */
+    /*
+     * Only sweep if we're actually out of memory.  Doing anything else
+     * causes unnecessary time and fragmentation of superpages in the p2m.
+     */
     if ( p2m->pod.count == 0 )
         p2m_pod_emergency_sweep(p2m);
 
@@ -1088,8 +1134,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     if ( gfn > p2m->pod.max_guest )
         p2m->pod.max_guest = gfn;
 
-    /* Get a page f/ the cache.  A NULL return value indicates that the
-     * 2-meg range should be marked singleton PoD, and retried */
+    /*
+     * Get a page f/ the cache.  A NULL return value indicates that the
+     * 2-meg range should be marked singleton PoD, and retried.
+     */
     if ( (p = p2m_pod_cache_get(p2m, order)) == NULL )
         goto remap_and_retry;
 
@@ -1146,8 +1194,10 @@ remap_and_retry:
     pod_unlock(p2m);
 
     /* Remap this 2-meg region in singleton chunks */
-    /* NOTE: In a p2m fine-grained lock scenario this might
-     * need promoting the gfn lock from gfn->2M superpage */
+    /*
+     * NOTE: In a p2m fine-grained lock scenario this might
+     * need promoting the gfn lock from gfn->2M superpage.
+     */
     gfn_aligned = (gfn>>order)<<order;
     for(i=0; i<(1<<order); i++)
         p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
-- 
2.11.0


* [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (2 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:48   ` Wei Liu
                     ` (2 more replies)
  2017-09-21 12:40 ` [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate Julien Grall
                   ` (11 subsequent siblings)
  15 siblings, 3 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Also take the opportunity to:
    - move from 1 << * to 1UL << * (see the sketch below)
    - use unsigned types where possible
    - move from unsigned int to unsigned long for some induction
      variables
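
To illustrate the first point, consider a deliberately large order (a
hypothetical value; real PoD orders are much smaller, but the same
reasoning applies whenever the result is mixed into unsigned long
arithmetic):

    unsigned int order = 31;
    unsigned long bad  = 1 << order;    /* int-width shift: undefined
                                         * behaviour, and sign-extends to
                                         * 0xffffffff80000000 in practice */
    unsigned long good = 1UL << order;  /* well-defined: 0x80000000 */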

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 102 +++++++++++++++++++++++-----------------------
 1 file changed, 52 insertions(+), 50 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 6f045081b5..f04d6e03e2 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -60,7 +60,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
                   struct page_info *page,
                   unsigned int order)
 {
-    int i;
+    unsigned long i;
     struct page_info *p;
     struct domain *d = p2m->domain;
 
@@ -70,23 +70,24 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
     mfn = page_to_mfn(page);
 
     /* Check to make sure this is a contiguous region */
-    if( mfn_x(mfn) & ((1 << order) - 1) )
+    if ( mfn_x(mfn) & ((1UL << order) - 1) )
     {
         printk("%s: mfn %lx not aligned order %u! (mask %lx)\n",
                __func__, mfn_x(mfn), order, ((1UL << order) - 1));
         return -1;
     }
 
-    for(i=0; i < 1 << order ; i++) {
+    for ( i = 0; i < 1UL << order ; i++)
+    {
         struct domain * od;
 
         p = mfn_to_page(_mfn(mfn_x(mfn) + i));
         od = page_get_owner(p);
-        if(od != d)
+        if ( od != d )
         {
             printk("%s: mfn %lx expected owner d%d, got owner d%d!\n",
                    __func__, mfn_x(mfn), d->domain_id,
-                   od?od->domain_id:-1);
+                   od ? od->domain_id : -1);
             return -1;
         }
     }
@@ -99,12 +100,12 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
      * guaranteed to be zero; but by reclaiming zero pages, we implicitly
      * promise to provide zero pages. So we scrub pages before using.
      */
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < (1UL << order); i++ )
         clear_domain_page(_mfn(mfn_x(page_to_mfn(page)) + i));
 
     /* First, take all pages off the domain list */
     lock_page_alloc(p2m);
-    for(i=0; i < 1 << order ; i++)
+    for ( i = 0; i < 1UL << order ; i++ )
     {
         p = page + i;
         page_list_del(p, &d->page_list);
@@ -128,7 +129,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
     default:
         BUG();
     }
-    p2m->pod.count += 1L << order;
+    p2m->pod.count += 1UL << order;
 
     return 0;
 }
@@ -140,7 +141,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
                                             unsigned int order)
 {
     struct page_info *p = NULL;
-    int i;
+    unsigned long i;
 
     ASSERT(pod_locked_by_me(p2m));
 
@@ -162,7 +163,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
         p = page_list_remove_head(&p2m->pod.super);
         mfn = mfn_x(page_to_mfn(p));
 
-        for ( i=0; i<SUPERPAGE_PAGES; i++ )
+        for ( i = 0; i < SUPERPAGE_PAGES; i++ )
         {
             q = mfn_to_page(_mfn(mfn+i));
             page_list_add_tail(q, &p2m->pod.single);
@@ -174,12 +175,12 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
     case PAGE_ORDER_2M:
         BUG_ON( page_list_empty(&p2m->pod.super) );
         p = page_list_remove_head(&p2m->pod.super);
-        p2m->pod.count -= 1 << order;
+        p2m->pod.count -= 1UL << order;
         break;
     case PAGE_ORDER_4K:
         BUG_ON( page_list_empty(&p2m->pod.single) );
         p = page_list_remove_head(&p2m->pod.single);
-        p2m->pod.count -= 1;
+        p2m->pod.count -= 1UL;
         break;
     default:
         BUG();
@@ -187,7 +188,7 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
 
     /* Put the pages back on the domain page_list */
     lock_page_alloc(p2m);
-    for ( i = 0 ; i < (1 << order); i++ )
+    for ( i = 0 ; i < (1UL << order); i++ )
     {
         BUG_ON(page_get_owner(p + i) != p2m->domain);
         page_list_add_tail(p + i, &p2m->domain->page_list);
@@ -251,7 +252,8 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
     while ( pod_target < p2m->pod.count )
     {
         struct page_info * page;
-        int order, i;
+        unsigned int order;
+        unsigned long i;
 
         if ( (p2m->pod.count - pod_target) > SUPERPAGE_PAGES
              && !page_list_empty(&p2m->pod.super) )
@@ -264,10 +266,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
         ASSERT(page != NULL);
 
         /* Then free them */
-        for ( i = 0 ; i < (1 << order) ; i++ )
+        for ( i = 0 ; i < (1UL << order) ; i++ )
         {
             /* Copied from common/memory.c:guest_remove_page() */
-            if ( unlikely(!get_page(page+i, d)) )
+            if ( unlikely(!get_page(page + i, d)) )
             {
                 gdprintk(XENLOG_INFO, "Bad page free for domain %u\n", d->domain_id);
                 ret = -EINVAL;
@@ -275,12 +277,12 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
             }
 
             if ( test_and_clear_bit(_PGT_pinned, &(page+i)->u.inuse.type_info) )
-                put_page_and_type(page+i);
+                put_page_and_type(page + i);
 
             if ( test_and_clear_bit(_PGC_allocated, &(page+i)->count_info) )
-                put_page(page+i);
+                put_page(page + i);
 
-            put_page(page+i);
+            put_page(page + i);
 
             if ( preemptible && pod_target != p2m->pod.count &&
                  hypercall_preempt_check() )
@@ -513,7 +515,7 @@ p2m_pod_decrease_reservation(struct domain *d,
                              xen_pfn_t gpfn,
                              unsigned int order)
 {
-    int ret=0;
+    int ret = 0;
     unsigned long i, n;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     bool_t steal_for_cache;
@@ -556,7 +558,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     }
 
     /* No populate-on-demand?  Don't need to steal anything?  Then we're done!*/
-    if(!pod && !steal_for_cache)
+    if ( !pod && !steal_for_cache )
         goto out_unlock;
 
     if ( !nonpod )
@@ -567,7 +569,7 @@ p2m_pod_decrease_reservation(struct domain *d,
          */
         p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
-        p2m->pod.entry_count-=(1<<order);
+        p2m->pod.entry_count -= 1UL << order;
         BUG_ON(p2m->pod.entry_count < 0);
         ret = 1;
         goto out_entry_check;
@@ -625,7 +627,7 @@ p2m_pod_decrease_reservation(struct domain *d,
              * avoid breaking up superpages.
              */
             struct page_info *page;
-            unsigned int j;
+            unsigned long j;
 
             ASSERT(mfn_valid(mfn));
 
@@ -753,13 +755,13 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Now, do a quick check to see if it may be zero before unmapping. */
-    for ( i=0; i<SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         /* Quick zero-check */
         map = map_domain_page(_mfn(mfn_x(mfn0) + i));
 
-        for ( j=0; j<16; j++ )
-            if( *(map+j) != 0 )
+        for ( j = 0; j < 16; j++ )
+            if ( *(map + j) != 0 )
                 break;
 
         unmap_domain_page(map);
@@ -779,7 +781,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
      * via the grant table interface, or by qemu.  Allow one refcount for
      * being allocated to the domain.
      */
-    for ( i=0; i < SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         mfn = _mfn(mfn_x(mfn0) + i);
         if ( (mfn_to_page(mfn)->count_info & PGC_count_mask) > 1 )
@@ -790,12 +792,12 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Finally, do a full zero-check */
-    for ( i=0; i < SUPERPAGE_PAGES; i++ )
+    for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         map = map_domain_page(_mfn(mfn_x(mfn0) + i));
 
-        for ( j=0; j<PAGE_SIZE/sizeof(*map); j++ )
-            if( *(map+j) != 0 )
+        for ( j = 0; j < (PAGE_SIZE / sizeof(*map)); j++ )
+            if ( *(map+j) != 0 )
             {
                 reset = 1;
                 break;
@@ -845,7 +847,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
 {
     mfn_t mfns[count];
     p2m_type_t types[count];
-    unsigned long * map[count];
+    unsigned long *map[count];
     struct domain *d = p2m->domain;
 
     int i, j;
@@ -856,7 +858,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         max_ref++;
 
     /* First, get the gfn list, translate to mfns, and map the pages. */
-    for ( i=0; i<count; i++ )
+    for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
@@ -877,14 +879,14 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
      * Then, go through and check for zeroed pages, removing write permission
      * for those with zeroes.
      */
-    for ( i=0; i<count; i++ )
+    for ( i = 0; i < count; i++ )
     {
-        if(!map[i])
+        if ( !map[i] )
             continue;
 
         /* Quick zero-check */
-        for ( j=0; j<16; j++ )
-            if( *(map[i]+j) != 0 )
+        for ( j = 0; j < 16; j++ )
+            if ( *(map[i] + j) != 0 )
                 break;
 
         if ( j < 16 )
@@ -917,13 +919,13 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
     p2m_tlb_flush_sync(p2m);
 
     /* Now check each page for real */
-    for ( i=0; i < count; i++ )
+    for ( i = 0; i < count; i++ )
     {
-        if(!map[i])
+        if ( !map[i] )
             continue;
 
-        for ( j=0; j<PAGE_SIZE/sizeof(*map[i]); j++ )
-            if( *(map[i]+j) != 0 )
+        for ( j = 0; j < (PAGE_SIZE / sizeof(*map[i])); j++ )
+            if ( *(map[i] + j) != 0 )
                 break;
 
         unmap_domain_page(map[i]);
@@ -932,10 +934,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
          * See comment in p2m_pod_zero_check_superpage() re gnttab
          * check timing.
          */
-        if ( j < PAGE_SIZE/sizeof(*map[i]) )
+        if ( j < (PAGE_SIZE / sizeof(*map[i])) )
         {
             p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
-                types[i], p2m->default_access);
+                          types[i], p2m->default_access);
         }
         else
         {
@@ -968,7 +970,7 @@ static void
 p2m_pod_emergency_sweep(struct p2m_domain *p2m)
 {
     unsigned long gfns[POD_SWEEP_STRIDE];
-    unsigned long i, j=0, start, limit;
+    unsigned long i, j = 0, start, limit;
     p2m_type_t t;
 
 
@@ -985,7 +987,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
      * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
      */
     p2m_lock(p2m);
-    for ( i=p2m->pod.reclaim_single; i > 0 ; i-- )
+    for ( i = p2m->pod.reclaim_single; i > 0 ; i-- )
     {
         p2m_access_t a;
         (void)p2m->get_entry(p2m, i, &t, &a, 0, NULL, NULL);
@@ -1079,7 +1081,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     struct page_info *p = NULL; /* Compiler warnings */
     unsigned long gfn_aligned;
     mfn_t mfn;
-    int i;
+    unsigned long i;
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
@@ -1143,7 +1145,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     mfn = page_to_mfn(p);
 
-    BUG_ON((mfn_x(mfn) & ((1 << order)-1)) != 0);
+    BUG_ON((mfn_x(mfn) & ((1UL << order) - 1)) != 0);
 
     gfn_aligned = (gfn >> order) << order;
 
@@ -1156,7 +1158,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
         paging_mark_dirty(d, mfn_add(mfn, i));
     }
 
-    p2m->pod.entry_count -= (1 << order);
+    p2m->pod.entry_count -= (1UL << order);
     BUG_ON(p2m->pod.entry_count < 0);
 
     pod_eager_record(p2m, gfn_aligned, order);
@@ -1198,8 +1200,8 @@ remap_and_retry:
      * NOTE: In a p2m fine-grained lock scenario this might
      * need promoting the gfn lock from gfn->2M superpage.
      */
-    gfn_aligned = (gfn>>order)<<order;
-    for(i=0; i<(1<<order); i++)
+    gfn_aligned = (gfn >> order) << order;
+    for ( i = 0; i < (1UL << order); i++ )
         p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
@@ -1262,7 +1264,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     if ( rc == 0 )
     {
         pod_lock(p2m);
-        p2m->pod.entry_count += 1 << order;
+        p2m->pod.entry_count += 1UL << order;
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
-- 
2.11.0

* [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (3 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:53   ` Wei Liu
  2017-09-21 16:35   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN Julien Grall
                   ` (10 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

gfn_aligned is assigned three times using the exact same formula. None of
the variables involved are modified in between, so consolidate into a
single assignment at the beginning of the function.
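
For reference, the formula simply rounds gfn down to a 2^order boundary,
e.g. (a worked example with illustrative values):

    gfn = 0x12345, order = 9 (a 2MB superpage)
    gfn_aligned = (0x12345 >> 9) << 9 = 0x12200

which is equivalent to gfn & ~((1UL << order) - 1).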

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index f04d6e03e2..bcc87aee03 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1079,7 +1079,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 {
     struct domain *d = p2m->domain;
     struct page_info *p = NULL; /* Compiler warnings */
-    unsigned long gfn_aligned;
+    unsigned long gfn_aligned = (gfn >> order) << order;
     mfn_t mfn;
     unsigned long i;
 
@@ -1102,7 +1102,6 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     if ( order == PAGE_ORDER_1G )
     {
         pod_unlock(p2m);
-        gfn_aligned = (gfn >> order) << order;
         /*
          * Note that we are supposed to call p2m_set_entry() 512 times to
          * split 1GB into 512 2MB pages here. But We only do once here because
@@ -1147,8 +1146,6 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     BUG_ON((mfn_x(mfn) & ((1UL << order) - 1)) != 0);
 
-    gfn_aligned = (gfn >> order) << order;
-
     p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
                   p2m->default_access);
 
@@ -1200,7 +1197,6 @@ remap_and_retry:
      * NOTE: In a p2m fine-grained lock scenario this might
      * need promoting the gfn lock from gfn->2M superpage.
      */
-    gfn_aligned = (gfn >> order) << order;
     for ( i = 0; i < (1UL << order); i++ )
         p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
-- 
2.11.0


* [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (4 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 14:54   ` Wei Liu
  2017-09-21 16:36   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation Julien Grall
                   ` (9 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Some unboxing/boxing can be avoided by using mfn_add(...) instead.
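
For instance, the change boils down to pairs like:

    mfn = _mfn(mfn_x(mfn0) + i);   /* unbox, add as unsigned long, re-box */
    mfn = mfn_add(mfn0, i);        /* equivalent, but stays typesafe */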

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bcc87aee03..34f5239b6d 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -101,7 +101,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
      * promise to provide zero pages. So we scrub pages before using.
      */
     for ( i = 0; i < (1UL << order); i++ )
-        clear_domain_page(_mfn(mfn_x(page_to_mfn(page)) + i));
+        clear_domain_page(mfn_add(page_to_mfn(page), i));
 
     /* First, take all pages off the domain list */
     lock_page_alloc(p2m);
@@ -743,7 +743,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
             mfn0 = mfn;
             type0 = type;
         }
-        else if ( type != type0 || mfn_x(mfn) != (mfn_x(mfn0) + i) )
+        else if ( type != type0 || !mfn_eq(mfn, mfn_add(mfn0, i)) )
             goto out;
 
         n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
@@ -758,7 +758,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
         /* Quick zero-check */
-        map = map_domain_page(_mfn(mfn_x(mfn0) + i));
+        map = map_domain_page(mfn_add(mfn0, i));
 
         for ( j = 0; j < 16; j++ )
             if ( *(map + j) != 0 )
@@ -783,7 +783,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
      */
     for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
-        mfn = _mfn(mfn_x(mfn0) + i);
+        mfn = mfn_add(mfn0, i);
         if ( (mfn_to_page(mfn)->count_info & PGC_count_mask) > 1 )
         {
             reset = 1;
@@ -794,7 +794,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     /* Finally, do a full zero-check */
     for ( i = 0; i < SUPERPAGE_PAGES; i++ )
     {
-        map = map_domain_page(_mfn(mfn_x(mfn0) + i));
+        map = map_domain_page(mfn_add(mfn0, i));
 
         for ( j = 0; j < (PAGE_SIZE / sizeof(*map)); j++ )
             if ( *(map+j) != 0 )
-- 
2.11.0


* [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (5 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 15:19   ` Wei Liu
  2017-09-21 16:39   ` George Dunlap
  2017-09-21 12:40 ` [PATCH v2 08/16] xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and set_entry Julien Grall
                   ` (8 subsequent siblings)
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/arm/p2m.c           |  3 +--
 xen/arch/x86/mm/p2m-pod.c    | 20 +++++++++-----------
 xen/common/memory.c          |  3 ++-
 xen/include/asm-arm/p2m.h    | 13 -------------
 xen/include/asm-x86/p2m.h    |  7 -------
 xen/include/xen/p2m-common.h | 13 +++++++++++++
 6 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 192a1c329d..0410b1e86b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -393,8 +393,7 @@ int guest_physmap_mark_populate_on_demand(struct domain *d,
     return -ENOSYS;
 }
 
-int p2m_pod_decrease_reservation(struct domain *d,
-                                 xen_pfn_t gpfn,
+int p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
                                  unsigned int order)
 {
     return -ENOSYS;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 34f5239b6d..eb74e5c01f 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -511,9 +511,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
  * allow decrease_reservation() to handle everything else.
  */
 int
-p2m_pod_decrease_reservation(struct domain *d,
-                             xen_pfn_t gpfn,
-                             unsigned int order)
+p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
 {
     int ret = 0;
     unsigned long i, n;
@@ -521,7 +519,7 @@ p2m_pod_decrease_reservation(struct domain *d,
     bool_t steal_for_cache;
     long pod, nonpod, ram;
 
-    gfn_lock(p2m, gpfn, order);
+    gfn_lock(p2m, gfn, order);
     pod_lock(p2m);
 
     /*
@@ -545,7 +543,7 @@ p2m_pod_decrease_reservation(struct domain *d,
         p2m_type_t t;
         unsigned int cur_order;
 
-        p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
+        p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
         n = 1UL << min(order, cur_order);
         if ( t == p2m_populate_on_demand )
             pod += n;
@@ -567,7 +565,7 @@ p2m_pod_decrease_reservation(struct domain *d,
          * All PoD: Mark the whole region invalid and tell caller
          * we're done.
          */
-        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
+        p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count -= 1UL << order;
         BUG_ON(p2m->pod.entry_count < 0);
@@ -584,7 +582,7 @@ p2m_pod_decrease_reservation(struct domain *d,
      * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
      */
     if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
-         p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
+         p2m_pod_zero_check_superpage(p2m, gfn_x(gfn) & ~(SUPERPAGE_PAGES - 1)) )
     {
         pod = 1UL << order;
         ram = nonpod = 0;
@@ -605,13 +603,13 @@ p2m_pod_decrease_reservation(struct domain *d,
         p2m_access_t a;
         unsigned int cur_order;
 
-        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
+        mfn = p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
         if ( order < cur_order )
             cur_order = order;
         n = 1UL << cur_order;
         if ( t == p2m_populate_on_demand )
         {
-            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
+            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m->pod.entry_count -= n;
             BUG_ON(p2m->pod.entry_count < 0);
@@ -633,7 +631,7 @@ p2m_pod_decrease_reservation(struct domain *d,
 
             page = mfn_to_page(mfn);
 
-            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
+            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
@@ -663,7 +661,7 @@ out_entry_check:
 
 out_unlock:
     pod_unlock(p2m);
-    gfn_unlock(p2m, gpfn, order);
+    gfn_unlock(p2m, gfn, order);
     return ret;
 }
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a2abf554e3..ad987e0f29 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -417,7 +417,8 @@ static void decrease_reservation(struct memop_args *a)
 
         /* See if populate-on-demand wants to handle this */
         if ( is_hvm_domain(a->domain)
-             && p2m_pod_decrease_reservation(a->domain, gmfn, a->extent_order) )
+             && p2m_pod_decrease_reservation(a->domain, _gfn(gmfn),
+                                             a->extent_order) )
             continue;
 
         for ( j = 0; j < (1 << a->extent_order); j++ )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index bc5bbf0db7..faadcfe8fe 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -266,19 +266,6 @@ static inline int guest_physmap_add_page(struct domain *d,
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
-/*
- * Populate-on-demand
- */
-
-/*
- * Call when decreasing memory reservation to handle PoD entries properly.
- * Will return '1' if all entries were handled and nothing more need be done.
- */
-int
-p2m_pod_decrease_reservation(struct domain *d,
-                             xen_pfn_t gpfn,
-                             unsigned int order);
-
 /* Look up a GFN and take a reference count on the backing page. */
 typedef unsigned int p2m_query_t;
 #define P2M_ALLOC    (1u<<0)   /* Populate PoD and paged-out entries */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 10cdfc09a9..8f3409b400 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -643,13 +643,6 @@ int p2m_pod_empty_cache(struct domain *d);
  * domain matches target */
 int p2m_pod_set_mem_target(struct domain *d, unsigned long target);
 
-/* Call when decreasing memory reservation to handle PoD entries properly.
- * Will return '1' if all entries were handled and nothing more need be done.*/
-int
-p2m_pod_decrease_reservation(struct domain *d,
-                             xen_pfn_t gpfn,
-                             unsigned int order);
-
 /* Scan pod cache when offline/broken page triggered */
 int
 p2m_pod_offline_or_broken_hit(struct page_info *p);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 2b5696cf33..27f89208f5 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -20,4 +20,17 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        mfn_t mfn);
 
+/*
+ * Populate-on-Demand
+ */
+
+/*
+ * Call when decreasing memory reservation to handle PoD entries properly.
+ * Will return '1' if all entries were handled and nothing more need be done.
+ */
+int
+p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
+                             unsigned int order);
+
+
 #endif /* _XEN_P2M_COMMON_H */
-- 
2.11.0


* [PATCH v2 08/16] xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and set_entry
From: Julien Grall @ 2017-09-21 12:40 UTC
  To: xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Jun Nakajima, Razvan Cojocaru,
	George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>

---

Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

    Changes in v2:
        - Add Andrew's acked-by
        - Add Kevin's reviewed-by (EPT part)
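
A note on the typesafe wrapper itself, for readers new to it: gfn_t is a
thin struct wrapper around unsigned long, generated by
TYPE_SAFE(unsigned long, gfn). A minimal sketch, simplified from
xen/include/xen/typesafe.h (in release builds the macro typically
degrades to a plain typedef, so the checking is compile-time only):

    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long n) { return (gfn_t){ n }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

With get_entry/set_entry taking gfn_t, passing a raw long or an mfn
where a guest frame number is expected becomes a compile error instead
of a silent mix-up.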
---
 xen/arch/x86/hvm/hvm.c        |  2 +-
 xen/arch/x86/mm/mem_access.c  | 19 +++++------
 xen/arch/x86/mm/mem_sharing.c |  4 +--
 xen/arch/x86/mm/p2m-ept.c     |  6 ++--
 xen/arch/x86/mm/p2m-pod.c     | 15 +++++----
 xen/arch/x86/mm/p2m-pt.c      |  6 ++--
 xen/arch/x86/mm/p2m.c         | 77 +++++++++++++++++++++++--------------------
 xen/include/asm-x86/p2m.h     |  4 +--
 8 files changed, 73 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93394c1fb6..2a32102a41 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1787,7 +1787,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
             {
                 bool_t sve;
 
-                p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, &sve);
+                p2m->get_entry(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, &sve);
 
                 if ( !sve && altp2m_vcpu_emulate_ve(curr) )
                 {
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 9211fc0abe..948e377e69 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -66,7 +66,7 @@ static int _p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
     }
 
     gfn_lock(p2m, gfn, 0);
-    mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, NULL);
     gfn_unlock(p2m, gfn, 0);
 
     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -142,7 +142,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                           vm_event_request_t **req_ptr)
 {
     struct vcpu *v = current;
-    unsigned long gfn = gpa >> PAGE_SHIFT;
+    gfn_t gfn = gaddr_to_gfn(gpa);
     struct domain *d = v->domain;
     struct p2m_domain *p2m = NULL;
     mfn_t mfn;
@@ -215,7 +215,7 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
         *req_ptr = req;
 
         req->reason = VM_EVENT_REASON_MEM_ACCESS;
-        req->u.mem_access.gfn = gfn;
+        req->u.mem_access.gfn = gfn_x(gfn);
         req->u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
         if ( npfec.gla_valid )
         {
@@ -247,7 +247,7 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     unsigned long gfn_l = gfn_x(gfn);
     int rc;
 
-    mfn = ap2m->get_entry(ap2m, gfn_l, &t, &old_a, 0, NULL, NULL);
+    mfn = ap2m->get_entry(ap2m, gfn, &t, &old_a, 0, NULL, NULL);
 
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
@@ -264,16 +264,16 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
         if ( page_order != PAGE_ORDER_4K )
         {
             unsigned long mask = ~((1UL << page_order) - 1);
-            unsigned long gfn2_l = gfn_l & mask;
+            gfn_t gfn2 = _gfn(gfn_l & mask);
             mfn_t mfn2 = _mfn(mfn_x(mfn) & mask);
 
-            rc = ap2m->set_entry(ap2m, gfn2_l, mfn2, page_order, t, old_a, 1);
+            rc = ap2m->set_entry(ap2m, gfn2, mfn2, page_order, t, old_a, 1);
             if ( rc )
                 return rc;
         }
     }
 
-    return ap2m->set_entry(ap2m, gfn_l, mfn, PAGE_ORDER_4K, t, a,
+    return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a,
                          (current->domain != d));
 }
 
@@ -295,10 +295,9 @@ static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
         mfn_t mfn;
         p2m_access_t _a;
         p2m_type_t t;
-        unsigned long gfn_l = gfn_x(gfn);
 
-        mfn = p2m->get_entry(p2m, gfn_l, &t, &_a, 0, NULL, NULL);
-        rc = p2m->set_entry(p2m, gfn_l, mfn, PAGE_ORDER_4K, t, a, -1);
+        mfn = p2m->get_entry(p2m, gfn, &t, &_a, 0, NULL, NULL);
+        rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
     }
 
     return rc;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3ab119cef2..62a3899089 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1234,7 +1234,7 @@ int relinquish_shared_pages(struct domain *d)
 
         if ( atomic_read(&d->shr_pages) == 0 )
             break;
-        mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL, NULL);
+        mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, NULL, NULL);
         if ( mfn_valid(mfn) && (t == p2m_ram_shared) )
         {
             /* Does not fail with ENOMEM given the DESTROY flag */
@@ -1243,7 +1243,7 @@ int relinquish_shared_pages(struct domain *d)
             /* Clear out the p2m entry so no one else may try to
              * unshare.  Must succeed: we just read the old entry and
              * we hold the p2m lock. */
-            set_rc = p2m->set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_4K,
+            set_rc = p2m->set_entry(p2m, _gfn(gfn), _mfn(0), PAGE_ORDER_4K,
                                     p2m_invalid, p2m_access_rwx, -1);
             ASSERT(set_rc == 0);
             count += 0x10;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 23c0518733..f14d1686b7 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -674,11 +674,12 @@ bool_t ept_handle_misconfig(uint64_t gpa)
  * Returns: 0 for success, -errno for failure
  */
 static int
-ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
+ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
               unsigned int order, p2m_type_t p2mt, p2m_access_t p2ma,
               int sve)
 {
     ept_entry_t *table, *ept_entry = NULL;
+    unsigned long gfn = gfn_x(gfn_);
     unsigned long gfn_remainder = gfn;
     unsigned int i, target = order / EPT_TABLE_ORDER;
     unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ? (gfn | mfn_x(mfn)) : gfn;
@@ -910,11 +911,12 @@ out:
 
 /* Read ept p2m entries */
 static mfn_t ept_get_entry(struct p2m_domain *p2m,
-                           unsigned long gfn, p2m_type_t *t, p2m_access_t* a,
+                           gfn_t gfn_, p2m_type_t *t, p2m_access_t* a,
                            p2m_query_t q, unsigned int *page_order,
                            bool_t *sve)
 {
     ept_entry_t *table = map_domain_page(_mfn(pagetable_get_pfn(p2m_get_pagetable(p2m))));
+    unsigned long gfn = gfn_x(gfn_);
     unsigned long gfn_remainder = gfn;
     ept_entry_t *ept_entry;
     u32 index;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index eb74e5c01f..c8c8cff014 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -543,7 +543,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
         p2m_type_t t;
         unsigned int cur_order;
 
-        p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, &cur_order, NULL);
         n = 1UL << min(order, cur_order);
         if ( t == p2m_populate_on_demand )
             pod += n;
@@ -603,7 +603,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
         p2m_access_t a;
         unsigned int cur_order;
 
-        mfn = p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
+        mfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, &cur_order, NULL);
         if ( order < cur_order )
             cur_order = order;
         n = 1UL << cur_order;
@@ -717,7 +717,8 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
         unsigned long k;
         const struct page_info *page;
 
-        mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, &cur_order, NULL);
+        mfn = p2m->get_entry(p2m, _gfn(gfn +  i), &type, &a, 0,
+                             &cur_order, NULL);
 
         /*
          * Conditions that must be met for superpage-superpage:
@@ -859,7 +860,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
     for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
-        mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
+
+        mfns[i] = p2m->get_entry(p2m, _gfn(gfns[i]), types + i, &a,
+                                 0, NULL, NULL);
         /*
          * If this is ram, and not a pagetable or from the xen heap, and
          * probably not mapped elsewhere, map it; otherwise, skip.
@@ -988,7 +991,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
     for ( i = p2m->pod.reclaim_single; i > 0 ; i-- )
     {
         p2m_access_t a;
-        (void)p2m->get_entry(p2m, i, &t, &a, 0, NULL, NULL);
+        (void)p2m->get_entry(p2m, _gfn(i), &t, &a, 0, NULL, NULL);
         if ( p2m_is_ram(t) )
         {
             gfns[j] = i;
@@ -1237,7 +1240,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
         p2m_access_t a;
         unsigned int cur_order;
 
-        p2m->get_entry(p2m, gfn + i, &ot, &a, 0, &cur_order, NULL);
+        p2m->get_entry(p2m, _gfn(gfn + i), &ot, &a, 0, &cur_order, NULL);
         n = 1UL << min(order, cur_order);
         if ( p2m_is_ram(ot) )
         {
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 0e63d6ed11..4bfec4f5f0 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -479,12 +479,13 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
 
 /* Returns: 0 for success, -errno for failure */
 static int
-p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
+p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
                  unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma,
                  int sve)
 {
     /* XXX -- this might be able to be faster iff current->domain == d */
     void *table;
+    unsigned long gfn = gfn_x(gfn_);
     unsigned long i, gfn_remainder = gfn;
     l1_pgentry_t *p2m_entry, entry_content;
     /* Intermediate table to free if we're replacing it with a superpage. */
@@ -731,11 +732,12 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 }
 
 static mfn_t
-p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
+p2m_pt_get_entry(struct p2m_domain *p2m, gfn_t gfn_,
                  p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                  unsigned int *page_order, bool_t *sve)
 {
     mfn_t mfn;
+    unsigned long gfn = gfn_x(gfn_);
     paddr_t addr = ((paddr_t)gfn) << PAGE_SHIFT;
     l2_pgentry_t *l2e;
     l1_pgentry_t *l1e;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 0b479105b9..35d4a15391 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -415,11 +415,12 @@ void p2m_unlock_and_tlb_flush(struct p2m_domain *p2m)
         mm_write_unlock(&p2m->lock);
 }
 
-mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
+mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
                     p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                     unsigned int *page_order, bool_t locked)
 {
     mfn_t mfn;
+    gfn_t gfn = _gfn(gfn_l);
 
     /* Unshare makes no sense withuot populate. */
     if ( q & P2M_UNSHARE )
@@ -430,7 +431,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
         /* Not necessarily true, but for non-translated guests, we claim
          * it's the most generic kind of memory */
         *t = p2m_ram_rw;
-        return _mfn(gfn);
+        return _mfn(gfn_l);
     }
 
     if ( locked )
@@ -444,8 +445,8 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
         ASSERT(p2m_is_hostp2m(p2m));
         /* Try to unshare. If we fail, communicate ENOMEM without
          * sleeping. */
-        if ( mem_sharing_unshare_page(p2m->domain, gfn, 0) < 0 )
-            (void)mem_sharing_notify_enomem(p2m->domain, gfn, 0);
+        if ( mem_sharing_unshare_page(p2m->domain, gfn_l, 0) < 0 )
+            (void)mem_sharing_notify_enomem(p2m->domain, gfn_l, 0);
         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
     }
 
@@ -556,7 +557,7 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         else
             order = 0;
 
-        set_rc = p2m->set_entry(p2m, gfn, mfn, order, p2mt, p2ma, -1);
+        set_rc = p2m->set_entry(p2m, _gfn(gfn), mfn, order, p2mt, p2ma, -1);
         if ( set_rc )
             rc = set_rc;
 
@@ -735,7 +736,8 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
     {
         for ( i = 0; i < (1UL << page_order); i++ )
         {
-            mfn_return = p2m->get_entry(p2m, gfn + i, &t, &a, 0, NULL, NULL);
+            mfn_return = p2m->get_entry(p2m, _gfn(gfn + i), &t, &a, 0,
+                                        NULL, NULL);
             if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) )
                 set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
             ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
@@ -762,7 +764,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                         unsigned int page_order, p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    unsigned long i, ogfn;
+    unsigned long i;
+    gfn_t ogfn;
     p2m_type_t ot;
     p2m_access_t a;
     mfn_t omfn;
@@ -803,7 +806,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     /* First, remove m->p mappings for existing p->m mappings */
     for ( i = 0; i < (1UL << page_order); i++ )
     {
-        omfn = p2m->get_entry(p2m, gfn_x(gfn_add(gfn, i)), &ot,
+        omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot,
                               &a, 0, NULL, NULL);
         if ( p2m_is_shared(ot) )
         {
@@ -831,7 +834,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                                                 0);
                 return rc;
             }
-            omfn = p2m->get_entry(p2m, gfn_x(gfn_add(gfn, i)),
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i),
                                   &ot, &a, 0, NULL, NULL);
             ASSERT(!p2m_is_shared(ot));
         }
@@ -873,21 +876,24 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
         }
         if ( page_get_owner(mfn_to_page(mfn_add(mfn, i))) != d )
             continue;
-        ogfn = mfn_to_gfn(d, mfn_add(mfn, i));
-        if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != gfn_x(gfn_add(gfn, i))) )
+        ogfn = _gfn(mfn_to_gfn(d, mfn_add(mfn, i)));
+        if ( !gfn_eq(ogfn, _gfn(INVALID_M2P_ENTRY)) &&
+             !gfn_eq(ogfn, gfn_add(gfn, i)) )
         {
             /* This machine frame is already mapped at another physical
              * address */
             P2M_DEBUG("aliased! mfn=%#lx, old gfn=%#lx, new gfn=%#lx\n",
-                      mfn_x(mfn_add(mfn, i)), ogfn, gfn_x(gfn_add(gfn, i)));
+                      mfn_x(mfn_add(mfn, i)), gfn_x(ogfn),
+                      gfn_x(gfn_add(gfn, i)));
             omfn = p2m->get_entry(p2m, ogfn, &ot, &a, 0, NULL, NULL);
             if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
             {
                 ASSERT(mfn_valid(omfn));
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
-                          ogfn , mfn_x(omfn));
+                          gfn_x(ogfn) , mfn_x(omfn));
                 if ( mfn_eq(omfn, mfn_add(mfn, i)) )
-                    p2m_remove_page(p2m, ogfn, mfn_x(mfn_add(mfn, i)), 0);
+                    p2m_remove_page(p2m, gfn_x(ogfn), mfn_x(mfn_add(mfn, i)),
+                                    0);
             }
         }
     }
@@ -948,7 +954,7 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, gfn, &pt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &pt, &a, 0, NULL, NULL);
     rc = likely(pt == ot)
          ? p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, nt,
                          p2m->default_access)
@@ -1065,14 +1071,15 @@ int p2m_finish_type_change(struct domain *d,
  *    1 + new order  for caller to retry with smaller order (guaranteed
  *                   to be smaller than order passed in)
  */
-static int set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
-                               unsigned int order, p2m_type_t gfn_p2mt,
-                               p2m_access_t access)
+static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
+                               mfn_t mfn, unsigned int order,
+                               p2m_type_t gfn_p2mt, p2m_access_t access)
 {
     int rc = 0;
     p2m_access_t a;
     p2m_type_t ot;
     mfn_t omfn;
+    gfn_t gfn = _gfn(gfn_l);
     unsigned int cur_order = 0;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -1103,11 +1110,11 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
         }
     }
 
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn_l, mfn, order, gfn_p2mt, access);
     if ( rc )
         gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn, order, rc, mfn_x(mfn));
+                 gfn_l, order, rc, mfn_x(mfn));
     else if ( p2m_is_pod(ot) )
     {
         pod_lock(p2m);
@@ -1157,7 +1164,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
 
     if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
         ret = p2m_set_entry(p2m, gfn, _mfn(gfn), PAGE_ORDER_4K,
@@ -1201,7 +1208,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -EIO;
 
     gfn_lock(p2m, gfn, order);
-    actual_mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, &cur_order, NULL);
+    actual_mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, &cur_order, NULL);
     if ( cur_order < order )
     {
         rc = cur_order + 1;
@@ -1245,7 +1252,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
     if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
     {
         ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
@@ -1278,7 +1285,7 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
         return -EIO;
 
     gfn_lock(p2m, gfn, 0);
-    omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL);
+    omfn = p2m->get_entry(p2m, _gfn(gfn), &ot, &a, 0, NULL, NULL);
     /* At the moment we only allow p2m change if gfn has already been made
      * sharable first */
     ASSERT(p2m_is_shared(ot));
@@ -1330,7 +1337,7 @@ int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn)
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
 
     /* Check if mfn is valid */
     if ( !mfn_valid(mfn) )
@@ -1392,7 +1399,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
     gfn_lock(p2m, gfn, 0);
 
     /* Get mfn */
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
     if ( unlikely(!mfn_valid(mfn)) )
         goto out;
 
@@ -1524,7 +1531,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 
     /* Fix p2m mapping */
     gfn_lock(p2m, gfn, 0);
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
     /* Allow only nominated or evicted pages to enter page-in path */
     if ( p2mt == p2m_ram_paging_out || p2mt == p2m_ram_paged )
     {
@@ -1586,7 +1593,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
 
     ret = -ENOENT;
     /* Allow missing pages */
@@ -1674,7 +1681,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
         unsigned long gfn = rsp->u.mem_access.gfn;
 
         gfn_lock(p2m, gfn, 0);
-        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+        mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
         /*
          * Allow only pages which were prepared properly, or pages which
          * were nominated but not evicted.
@@ -2263,7 +2270,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     p2m_lock(hp2m);
     p2m_lock(ap2m);
 
-    mfn = ap2m->get_entry(ap2m, gfn_x(old_gfn), &t, &a, 0, NULL, NULL);
+    mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
 
     if ( gfn_eq(new_gfn, INVALID_GFN) )
     {
@@ -2292,21 +2299,21 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
             gfn = _gfn(gfn_x(old_gfn) & mask);
             mfn = _mfn(mfn_x(mfn) & mask);
 
-            if ( ap2m->set_entry(ap2m, gfn_x(gfn), mfn, page_order, t, a, 1) )
+            if ( ap2m->set_entry(ap2m, gfn, mfn, page_order, t, a, 1) )
                 goto out;
         }
     }
 
-    mfn = ap2m->get_entry(ap2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);
+    mfn = ap2m->get_entry(ap2m, new_gfn, &t, &a, 0, NULL, NULL);
 
     if ( !mfn_valid(mfn) )
-        mfn = hp2m->get_entry(hp2m, gfn_x(new_gfn), &t, &a, 0, NULL, NULL);
+        mfn = hp2m->get_entry(hp2m, new_gfn, &t, &a, 0, NULL, NULL);
 
     /* Note: currently it is not safe to remap to a shared entry */
     if ( !mfn_valid(mfn) || (t != p2m_ram_rw) )
         goto out;
 
-    if ( !ap2m->set_entry(ap2m, gfn_x(old_gfn), mfn, PAGE_ORDER_4K, t, a,
+    if ( !ap2m->set_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K, t, a,
                           (current->domain != d)) )
     {
         rc = 0;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 8f3409b400..1c9a51e9ad 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -234,13 +234,13 @@ struct p2m_domain {
     struct page_list_head pages;
 
     int                (*set_entry)(struct p2m_domain *p2m,
-                                    unsigned long gfn,
+                                    gfn_t gfn,
                                     mfn_t mfn, unsigned int page_order,
                                     p2m_type_t p2mt,
                                     p2m_access_t p2ma,
                                     int sve);
     mfn_t              (*get_entry)(struct p2m_domain *p2m,
-                                    unsigned long gfn,
+                                    gfn_t gfn,
                                     p2m_type_t *p2mt,
                                     p2m_access_t *p2ma,
                                     p2m_query_t q,
-- 
2.11.0



* [PATCH v2 09/16] xen/x86: p2m: Use typesafe GFN in p2m_set_entry
From: Julien Grall @ 2017-09-21 12:40 UTC
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Julien Grall, Tamas K Lengyel, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>

    Changes in v2:
        - Add Andrew & Tamas' acked-by
        - Rename the variable gfn_t to gfn_ to avoid shadowing the type
          gfn_t
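
The shadowing problem behind that rename, in a hypothetical one-liner
(demo() and its body are made up for illustration): once a parameter is
named gfn_t, that identifier refers to the parameter for the rest of
the function, so the type can no longer be used there:

    static int demo(gfn_t gfn_t)    /* parameter hides the typedef */
    {
        gfn_t copy = gfn_t;         /* fails to compile: 'gfn_t' is now
                                       the parameter, not a type */
        return gfn_x(copy);
    }

Hence the series avoids the name entirely, using gfn_ for wrapped
parameters and gfn_l where a raw 'unsigned long gfn_l' copy is still
needed for arithmetic.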
---
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/mem_sharing.c    |   3 +-
 xen/arch/x86/mm/p2m-pod.c        |  36 +++++++------
 xen/arch/x86/mm/p2m.c            | 112 ++++++++++++++++++++++-----------------
 xen/include/asm-x86/p2m.h        |   2 +-
 5 files changed, 85 insertions(+), 70 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 162afed46b..346fcb53e5 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -121,7 +121,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
         gfn = (L2_gpa >> PAGE_SHIFT) & mask;
         mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
 
-        rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+        rc = p2m_set_entry(p2m, _gfn(gfn), mfn, page_order, p2mt, p2ma);
     }
 
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 62a3899089..b856028c02 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1052,7 +1052,8 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
         goto err_unlock;
     }
 
-    ret = p2m_set_entry(p2m, cgfn, smfn, PAGE_ORDER_4K, p2m_ram_shared, a);
+    ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
+                        p2m_ram_shared, a);
 
     /* Tempted to turn this into an assert */
     if ( ret )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index c8c8cff014..b8a51cf12a 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -565,7 +565,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
          * All PoD: Mark the whole region invalid and tell caller
          * we're done.
          */
-        p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, order, p2m_invalid,
+        p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
                       p2m->default_access);
         p2m->pod.entry_count -= 1UL << order;
         BUG_ON(p2m->pod.entry_count < 0);
@@ -609,7 +609,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
         n = 1UL << cur_order;
         if ( t == p2m_populate_on_demand )
         {
-            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
+            p2m_set_entry(p2m, gfn_add(gfn, i), INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m->pod.entry_count -= n;
             BUG_ON(p2m->pod.entry_count < 0);
@@ -631,7 +631,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
 
             page = mfn_to_page(mfn);
 
-            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
+            p2m_set_entry(p2m, gfn_add(gfn, i), INVALID_MFN, cur_order,
                           p2m_invalid, p2m->default_access);
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
@@ -680,9 +680,10 @@ void p2m_pod_dump_data(struct domain *d)
  * in the p2m.
  */
 static int
-p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn_l)
 {
     mfn_t mfn, mfn0 = INVALID_MFN;
+    gfn_t gfn = _gfn(gfn_l);
     p2m_type_t type, type0 = 0;
     unsigned long * map = NULL;
     int ret=0, reset = 0;
@@ -693,7 +694,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 
     ASSERT(pod_locked_by_me(p2m));
 
-    if ( !superpage_aligned(gfn) )
+    if ( !superpage_aligned(gfn_l) )
         goto out;
 
     /* Allow an extra refcount for one shadow pt mapping in shadowed domains */
@@ -717,7 +718,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
         unsigned long k;
         const struct page_info *page;
 
-        mfn = p2m->get_entry(p2m, _gfn(gfn +  i), &type, &a, 0,
+        mfn = p2m->get_entry(p2m, gfn_add(gfn, i), &type, &a, 0,
                              &cur_order, NULL);
 
         /*
@@ -815,7 +816,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
             int d:16,order:16;
         } t;
 
-        t.gfn = gfn;
+        t.gfn = gfn_l;
         t.mfn = mfn_x(mfn);
         t.d = d->domain_id;
         t.order = 9;
@@ -898,7 +899,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         }
 
         /* Try to remove the page, restoring old mapping if it fails. */
-        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
+        p2m_set_entry(p2m, _gfn(gfns[i]), INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
         /*
@@ -910,7 +911,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             unmap_domain_page(map[i]);
             map[i] = NULL;
 
-            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K,
                 types[i], p2m->default_access);
 
             continue;
@@ -937,7 +938,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
          */
         if ( j < (PAGE_SIZE / sizeof(*map[i])) )
         {
-            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K,
                           types[i], p2m->default_access);
         }
         else
@@ -1080,7 +1081,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 {
     struct domain *d = p2m->domain;
     struct page_info *p = NULL; /* Compiler warnings */
-    unsigned long gfn_aligned = (gfn >> order) << order;
+    gfn_t gfn_aligned = _gfn((gfn >> order) << order);
     mfn_t mfn;
     unsigned long i;
 
@@ -1152,14 +1153,14 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     for( i = 0; i < (1UL << order); i++ )
     {
-        set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_aligned + i);
+        set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i);
         paging_mark_dirty(d, mfn_add(mfn, i));
     }
 
     p2m->pod.entry_count -= (1UL << order);
     BUG_ON(p2m->pod.entry_count < 0);
 
-    pod_eager_record(p2m, gfn_aligned, order);
+    pod_eager_record(p2m, gfn_x(gfn_aligned), order);
 
     if ( tb_init_done )
     {
@@ -1199,7 +1200,7 @@ remap_and_retry:
      * need promoting the gfn lock from gfn->2M superpage.
      */
     for ( i = 0; i < (1UL << order); i++ )
-        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfn_add(gfn_aligned, i), INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
     {
@@ -1219,10 +1220,11 @@ remap_and_retry:
 
 
 int
-guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
+guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn_l,
                                       unsigned int order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    gfn_t gfn = _gfn(gfn_l);
     unsigned long i, n, pod_count = 0;
     int rc = 0;
 
@@ -1231,7 +1233,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
 
     gfn_lock(p2m, gfn, order);
 
-    P2M_DEBUG("mark pod gfn=%#lx\n", gfn);
+    P2M_DEBUG("mark pod gfn=%#lx\n", gfn_l);
 
     /* Make sure all gpfns are unused */
     for ( i = 0; i < (1UL << order); i += n )
@@ -1240,7 +1242,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
         p2m_access_t a;
         unsigned int cur_order;
 
-        p2m->get_entry(p2m, _gfn(gfn + i), &ot, &a, 0, &cur_order, NULL);
+        p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, &cur_order, NULL);
         n = 1UL << min(order, cur_order);
         if ( p2m_is_ram(ot) )
         {
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 35d4a15391..3fbc537da6 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -532,7 +532,7 @@ struct page_info *p2m_get_page_from_gfn(
 }
 
 /* Returns: 0 for success, -errno for failure */
-int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
+int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct domain *d = p2m->domain;
@@ -546,8 +546,9 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     {
         if ( hap_enabled(d) )
         {
-            unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ?
-                                     (gfn | mfn_x(mfn) | todo) : (gfn | todo);
+            unsigned long fn_mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
+
+            fn_mask |= gfn_x(gfn) | todo;
 
             order = (!(fn_mask & ((1ul << PAGE_ORDER_1G) - 1)) &&
                      hap_has_1gb) ? PAGE_ORDER_1G :
@@ -557,11 +558,11 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         else
             order = 0;
 
-        set_rc = p2m->set_entry(p2m, _gfn(gfn), mfn, order, p2mt, p2ma, -1);
+        set_rc = p2m->set_entry(p2m, gfn, mfn, order, p2mt, p2ma, -1);
         if ( set_rc )
             rc = set_rc;
 
-        gfn += 1ul << order;
+        gfn = gfn_add(gfn, 1ul << order);
         if ( !mfn_eq(mfn, INVALID_MFN) )
             mfn = mfn_add(mfn, 1ul << order);
         todo -= 1ul << order;
@@ -652,7 +653,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     /* Initialise physmap tables for slot zero. Other code assumes this. */
     p2m->defer_nested_flush = 1;
-    rc = p2m_set_entry(p2m, 0, INVALID_MFN, PAGE_ORDER_4K,
+    rc = p2m_set_entry(p2m, _gfn(0), INVALID_MFN, PAGE_ORDER_4K,
                        p2m_invalid, p2m->default_access);
     p2m->defer_nested_flush = 0;
     p2m_unlock(p2m);
@@ -703,10 +704,11 @@ void p2m_final_teardown(struct domain *d)
 
 
 static int
-p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
+p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
                 unsigned int page_order)
 {
     unsigned long i;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn_return;
     p2m_type_t t;
     p2m_access_t a;
@@ -730,13 +732,13 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
-    P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn, mfn);
+    P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_l, mfn);
 
     if ( mfn_valid(_mfn(mfn)) )
     {
         for ( i = 0; i < (1UL << page_order); i++ )
         {
-            mfn_return = p2m->get_entry(p2m, _gfn(gfn + i), &t, &a, 0,
+            mfn_return = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
                                         NULL, NULL);
             if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) )
                 set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
@@ -901,7 +903,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     /* Now, actually do the two-way mapping */
     if ( mfn_valid(mfn) )
     {
-        rc = p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, t,
+        rc = p2m_set_entry(p2m, gfn, mfn, page_order, t,
                            p2m->default_access);
         if ( rc )
             goto out; /* Failed to update p2m, bail without updating m2p. */
@@ -917,7 +919,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
                  gfn_x(gfn), mfn_x(mfn));
-        rc = p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, page_order,
+        rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order,
                            p2m_invalid, p2m->default_access);
         if ( rc == 0 )
         {
@@ -940,11 +942,12 @@ out:
  * Returns: 0 for success, -errno for failure.
  * Resets the access permissions.
  */
-int p2m_change_type_one(struct domain *d, unsigned long gfn,
+int p2m_change_type_one(struct domain *d, unsigned long gfn_l,
                        p2m_type_t ot, p2m_type_t nt)
 {
     p2m_access_t a;
     p2m_type_t pt;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -954,7 +957,7 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &pt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &pt, &a, 0, NULL, NULL);
     rc = likely(pt == ot)
          ? p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, nt,
                          p2m->default_access)
@@ -1111,7 +1114,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
     }
 
     P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn_l, mfn, order, gfn_p2mt, access);
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
     if ( rc )
         gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
                  gfn_l, order, rc, mfn_x(mfn));
@@ -1146,11 +1149,12 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     return set_typed_p2m_entry(d, gfn, mfn, order, p2m_mmio_direct, access);
 }
 
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
@@ -1159,17 +1163,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_page(d, gfn_l, gfn_l, IOMMUF_readable|IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
 
     if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
-        ret = p2m_set_entry(p2m, gfn, _mfn(gfn), PAGE_ORDER_4K,
+        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
                             p2m_mmio_direct, p2ma);
-    else if ( mfn_x(mfn) == gfn && p2mt == p2m_mmio_direct && a == p2ma )
+    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
         ret = 0;
     else
     {
@@ -1180,7 +1184,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
         printk(XENLOG_G_WARNING
                "Cannot setup identity map d%d:%lx,"
                " gfn already mapped to %lx.\n",
-               d->domain_id, gfn, mfn_x(mfn));
+               d->domain_id, gfn_l, mfn_x(mfn));
     }
 
     gfn_unlock(p2m, gfn, 0);
@@ -1194,10 +1198,11 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
  *    order+1  for caller to retry with order (guaranteed smaller than
  *             the order value passed in)
  */
-int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
+int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn,
                          unsigned int order)
 {
     int rc = -EINVAL;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t actual_mfn;
     p2m_access_t a;
     p2m_type_t t;
@@ -1208,7 +1213,7 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -EIO;
 
     gfn_lock(p2m, gfn, order);
-    actual_mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, &cur_order, NULL);
+    actual_mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, &cur_order, NULL);
     if ( cur_order < order )
     {
         rc = cur_order + 1;
@@ -1219,13 +1224,13 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     if ( mfn_eq(actual_mfn, INVALID_MFN) || (t != p2m_mmio_direct) )
     {
         gdprintk(XENLOG_ERR,
-                 "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
+                 "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn_l, t);
         goto out;
     }
     if ( mfn_x(mfn) != mfn_x(actual_mfn) )
         gdprintk(XENLOG_WARNING,
                  "no mapping between mfn %08lx and gfn %08lx\n",
-                 mfn_x(mfn), gfn);
+                 mfn_x(mfn), gfn_l);
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order, p2m_invalid,
                        p2m->default_access);
 
@@ -1235,10 +1240,11 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     return rc;
 }
 
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
+int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 {
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
@@ -1247,13 +1253,13 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn);
+        return iommu_unmap_page(d, gfn_l);
     }
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
-    if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+    if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn_l )
     {
         ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                             p2m_invalid, p2m->default_access);
@@ -1264,7 +1270,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
         gfn_unlock(p2m, gfn, 0);
         printk(XENLOG_G_WARNING
                "non-identity map d%d:%lx not cleared (mapped to %lx)\n",
-               d->domain_id, gfn, mfn_x(mfn));
+               d->domain_id, gfn_l, mfn_x(mfn));
         ret = 0;
     }
 
@@ -1272,10 +1278,11 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
 }
 
 /* Returns: 0 for success, -errno for failure */
-int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_shared_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc = 0;
+    gfn_t gfn = _gfn(gfn_l);
     p2m_access_t a;
     p2m_type_t ot;
     mfn_t omfn;
@@ -1285,7 +1292,7 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
         return -EIO;
 
     gfn_lock(p2m, gfn, 0);
-    omfn = p2m->get_entry(p2m, _gfn(gfn), &ot, &a, 0, NULL, NULL);
+    omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL);
     /* At the moment we only allow p2m change if gfn has already been made
      * sharable first */
     ASSERT(p2m_is_shared(ot));
@@ -1297,14 +1304,14 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
          || (pg_type & PGT_type_mask) != PGT_shared_page )
         set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
 
-    P2M_DEBUG("set shared %lx %lx\n", gfn, mfn_x(mfn));
+    P2M_DEBUG("set shared %lx %lx\n", gfn_l, mfn_x(mfn));
     rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared,
                        p2m->default_access);
     gfn_unlock(p2m, gfn, 0);
     if ( rc )
         gdprintk(XENLOG_ERR,
                  "p2m_set_entry failed! mfn=%08lx rc:%d\n",
-                 mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)), rc);
+                 mfn_x(get_gfn_query_unlocked(p2m->domain, gfn_l, &ot)), rc);
     return rc;
 }
 
@@ -1326,18 +1333,19 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
  * Once the p2mt is changed the page is readonly for the guest.  On success the
  * pager can write the page contents to disk and later evict the page.
  */
-int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn)
+int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn_l)
 {
     struct page_info *page;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     int ret = -EBUSY;
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
 
     /* Check if mfn is valid */
     if ( !mfn_valid(mfn) )
@@ -1387,11 +1395,12 @@ int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn)
  * could evict it, eviction can not be done either. In this case the gfn is
  * still backed by a mfn.
  */
-int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
+int p2m_mem_paging_evict(struct domain *d, unsigned long gfn_l)
 {
     struct page_info *page;
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret = -EBUSY;
@@ -1399,7 +1408,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
     gfn_lock(p2m, gfn, 0);
 
     /* Get mfn */
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
     if ( unlikely(!mfn_valid(mfn)) )
         goto out;
 
@@ -1502,15 +1511,16 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
  * already sent to the pager. In this case the caller has to try again until the
  * gfn is fully paged in again.
  */
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
+void p2m_mem_paging_populate(struct domain *d, unsigned long gfn_l)
 {
     struct vcpu *v = current;
     vm_event_request_t req = {
         .reason = VM_EVENT_REASON_MEM_PAGING,
-        .u.mem_paging.gfn = gfn
+        .u.mem_paging.gfn = gfn_l
     };
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -1519,7 +1529,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     if ( rc == -ENOSYS )
     {
         gdprintk(XENLOG_ERR, "Domain %hu paging gfn %lx yet no ring "
-                             "in place\n", d->domain_id, gfn);
+                             "in place\n", d->domain_id, gfn_l);
         /* Prevent the vcpu from faulting repeatedly on the same gfn */
         if ( v->domain == d )
             vcpu_pause_nosync(v);
@@ -1531,7 +1541,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 
     /* Fix p2m mapping */
     gfn_lock(p2m, gfn, 0);
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
     /* Allow only nominated or evicted pages to enter page-in path */
     if ( p2mt == p2m_ram_paging_out || p2mt == p2m_ram_paged )
     {
@@ -1575,11 +1585,12 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
  * mfn if populate was called for  gfn which was nominated but not evicted. In
  * this case only the p2mt needs to be forwarded.
  */
-int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
+int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
 {
     struct page_info *page;
     p2m_type_t p2mt;
     p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret, page_extant = 1;
@@ -1593,7 +1604,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
 
     gfn_lock(p2m, gfn, 0);
 
-    mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
 
     ret = -ENOENT;
     /* Allow missing pages */
@@ -1629,7 +1640,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
         if ( rc )
         {
             gdprintk(XENLOG_ERR, "Failed to load paging-in gfn %lx domain %u "
-                                 "bytes left %d\n", gfn, d->domain_id, rc);
+                                 "bytes left %d\n", gfn_l, d->domain_id, rc);
             ret = -EFAULT;
             put_page(page); /* Don't leak pages */
             goto out;            
@@ -1642,7 +1653,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
     ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                         paging_mode_log_dirty(d) ? p2m_ram_logdirty
                                                  : p2m_ram_rw, a);
-    set_gpfn_from_mfn(mfn_x(mfn), gfn);
+    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
 
     if ( !page_extant )
         atomic_dec(&d->paged_pages);
@@ -1678,10 +1689,10 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
     /* Fix p2m entry if the page was not dropped */
     if ( !(rsp->u.mem_paging.flags & MEM_PAGING_DROP_PAGE) )
     {
-        unsigned long gfn = rsp->u.mem_access.gfn;
+        gfn_t gfn = _gfn(rsp->u.mem_access.gfn);
 
         gfn_lock(p2m, gfn, 0);
-        mfn = p2m->get_entry(p2m, _gfn(gfn), &p2mt, &a, 0, NULL, NULL);
+        mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
         /*
          * Allow only pages which were prepared properly, or pages which
          * were nominated but not evicted.
@@ -1691,7 +1702,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
             p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                           paging_mode_log_dirty(d) ? p2m_ram_logdirty :
                           p2m_ram_rw, a);
-            set_gpfn_from_mfn(mfn_x(mfn), gfn);
+            set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn));
         }
         gfn_unlock(p2m, gfn, 0);
     }
@@ -2109,8 +2120,9 @@ bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
      */
     mask = ~((1UL << page_order) - 1);
     mfn = _mfn(mfn_x(mfn) & mask);
+    gfn = _gfn(gfn_x(gfn) & mask);
 
-    rv = p2m_set_entry(*ap2m, gfn_x(gfn) & mask, mfn, page_order, p2mt, p2ma);
+    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
     p2m_unlock(*ap2m);
 
     if ( rv )
@@ -2396,7 +2408,7 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
             }
         }
         else if ( !mfn_eq(m, INVALID_MFN) )
-            p2m_set_entry(p2m, gfn_x(gfn), mfn, page_order, p2mt, p2ma);
+            p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
 
         __put_gfn(p2m, gfn_x(gfn));
     }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1c9a51e9ad..07ca02a173 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -682,7 +682,7 @@ void p2m_free_ptp(struct p2m_domain *p2m, struct page_info *pg);
 
 /* Directly set a p2m entry: only for use by p2m code. Does not need
  * a call to put_gfn afterwards/ */
-int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
+int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma);
 
 /* Set up function pointers for PT implementation: only for use by p2m code */
-- 
2.11.0



* [PATCH v2 10/16] xen/x86: p2m-pod: Use typesafe GFN in pod_eager_record
From: Julien Grall @ 2017-09-21 12:40 UTC
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
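
For reference, the typesafe comparison relies on two helpers from
xen/include/xen/mm.h; a sketch of their (believed) definitions:

    #define INVALID_GFN _gfn(~0UL)

    static inline bool_t gfn_eq(gfn_t x, gfn_t y)
    {
        return gfn_x(x) == gfn_x(y);
    }

So 'ASSERT(gfn != gfn_x(INVALID_GFN))' becomes
'ASSERT(!gfn_eq(gfn, INVALID_GFN))', keeping the comparison inside the
typesafe world; the raw value only escapes via gfn_x() at the point it
is packed into the MRP list.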
---
 xen/arch/x86/mm/p2m-pod.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index b8a51cf12a..176d06cb42 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1062,15 +1062,15 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
     } while ( (p2m->pod.count == 0) && (i < ARRAY_SIZE(mrp->list)) );
 }
 
-static void pod_eager_record(struct p2m_domain *p2m,
-                             unsigned long gfn, unsigned int order)
+static void pod_eager_record(struct p2m_domain *p2m, gfn_t gfn,
+                             unsigned int order)
 {
     struct pod_mrp_list *mrp = &p2m->pod.mrp;
 
-    ASSERT(gfn != gfn_x(INVALID_GFN));
+    ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
     mrp->list[mrp->idx++] =
-        gfn | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
+        gfn_x(gfn) | (order == PAGE_ORDER_2M ? POD_LAST_SUPERPAGE : 0);
     mrp->idx %= ARRAY_SIZE(mrp->list);
 }
 
@@ -1160,7 +1160,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     p2m->pod.entry_count -= (1UL << order);
     BUG_ON(p2m->pod.entry_count < 0);
 
-    pod_eager_record(p2m, gfn_x(gfn_aligned), order);
+    pod_eager_record(p2m, gfn_aligned, order);
 
     if ( tb_init_done )
     {
-- 
2.11.0



* [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check
From: Julien Grall @ 2017-09-21 12:40 UTC
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
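
The hunk below only hoists a repeated lookup: mfn_to_page() is pure
frame-table arithmetic, so the new local is about readability rather
than cost. Annotated sketch of the resulting test (flag meanings per
asm-x86/mm.h):

    struct page_info *pg = mfn_to_page(mfns[i]);

    /*
     * PGC_allocated  - page is allocated to a domain
     * PGC_page_table - page is in use as a pagetable
     * PGC_xen_heap   - page belongs to the Xen heap
     * PGC_count_mask - reference count; <= max_ref approximates
     *                  "probably not mapped elsewhere"
     */
    if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
         ((pg->count_info & (PGC_page_table | PGC_xen_heap)) == 0) &&
         ((pg->count_info & PGC_count_mask) <= max_ref) )
        map[i] = map_domain_page(mfns[i]);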
---
 xen/arch/x86/mm/p2m-pod.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 176d06cb42..611a087855 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -861,17 +861,19 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
     for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
+        struct page_info *pg;
 
         mfns[i] = p2m->get_entry(p2m, _gfn(gfns[i]), types + i, &a,
                                  0, NULL, NULL);
+        pg = mfn_to_page(mfns[i]);
+
         /*
          * If this is ram, and not a pagetable or from the xen heap, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( p2m_is_ram(types[i])
-             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
-             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
-             && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
+        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
+             ((pg->count_info & (PGC_page_table | PGC_xen_heap)) == 0) &&
+             ((pg->count_info & (PGC_count_mask)) <= max_ref) )
             map[i] = map_domain_page(mfns[i]);
         else
             map[i] = NULL;
-- 
2.11.0



* [PATCH v2 12/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (10 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 15:36   ` Wei Liu
  2017-09-21 12:40 ` [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate Julien Grall
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

At the same time, make the array gfns const, as it is not modified within
the function.
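
For context, the typesafe gfn helpers this patch converts to reduce to
roughly the following sketch (the in-tree definitions are generated by
TYPE_SAFE() and related macros, so the details may differ):

    /* A gfn wrapped in a struct so it cannot be confused with an mfn. */
    typedef struct { unsigned long gfn; } gfn_t;

    #define _gfn(n) ((gfn_t){ n })
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

    /* Arithmetic/comparison helpers used in the hunks below. */
    #define gfn_add(g, i) _gfn(gfn_x(g) + (i))
    #define gfn_eq(x, y)  (gfn_x(x) == gfn_x(y))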

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-pod.c | 37 ++++++++++++++++++-------------------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 611a087855..0dd0f0a083 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -498,7 +498,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
 }
 
 static int
-p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn);
 
 
 /*
@@ -582,7 +582,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
      * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
      */
     if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
-         p2m_pod_zero_check_superpage(p2m, gfn_x(gfn) & ~(SUPERPAGE_PAGES - 1)) )
+         p2m_pod_zero_check_superpage(p2m, _gfn(gfn_x(gfn) & ~(SUPERPAGE_PAGES - 1))) )
     {
         pod = 1UL << order;
         ram = nonpod = 0;
@@ -680,10 +680,9 @@ void p2m_pod_dump_data(struct domain *d)
  * in the p2m.
  */
 static int
-p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn_l)
+p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
 {
     mfn_t mfn, mfn0 = INVALID_MFN;
-    gfn_t gfn = _gfn(gfn_l);
     p2m_type_t type, type0 = 0;
     unsigned long * map = NULL;
     int ret=0, reset = 0;
@@ -694,7 +693,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn_l)
 
     ASSERT(pod_locked_by_me(p2m));
 
-    if ( !superpage_aligned(gfn_l) )
+    if ( !superpage_aligned(gfn_x(gfn)) )
         goto out;
 
     /* Allow an extra refcount for one shadow pt mapping in shadowed domains */
@@ -816,7 +815,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn_l)
             int d:16,order:16;
         } t;
 
-        t.gfn = gfn_l;
+        t.gfn = gfn_x(gfn);
         t.mfn = mfn_x(mfn);
         t.d = d->domain_id;
         t.order = 9;
@@ -843,7 +842,7 @@ out:
 }
 
 static void
-p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
+p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, int count)
 {
     mfn_t mfns[count];
     p2m_type_t types[count];
@@ -863,7 +862,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         p2m_access_t a;
         struct page_info *pg;
 
-        mfns[i] = p2m->get_entry(p2m, _gfn(gfns[i]), types + i, &a,
+        mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
                                  0, NULL, NULL);
         pg = mfn_to_page(mfns[i]);
 
@@ -901,7 +900,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         }
 
         /* Try to remove the page, restoring old mapping if it fails. */
-        p2m_set_entry(p2m, _gfn(gfns[i]), INVALID_MFN, PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
         /*
@@ -913,7 +912,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             unmap_domain_page(map[i]);
             map[i] = NULL;
 
-            p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
                 types[i], p2m->default_access);
 
             continue;
@@ -940,7 +939,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
          */
         if ( j < (PAGE_SIZE / sizeof(*map[i])) )
         {
-            p2m_set_entry(p2m, _gfn(gfns[i]), mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
                           types[i], p2m->default_access);
         }
         else
@@ -952,7 +951,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
                     int d:16,order:16;
                 } t;
 
-                t.gfn = gfns[i];
+                t.gfn = gfn_x(gfns[i]);
                 t.mfn = mfn_x(mfns[i]);
                 t.d = d->domain_id;
                 t.order = 0;
@@ -973,7 +972,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
 static void
 p2m_pod_emergency_sweep(struct p2m_domain *p2m)
 {
-    unsigned long gfns[POD_SWEEP_STRIDE];
+    gfn_t gfns[POD_SWEEP_STRIDE];
     unsigned long i, j = 0, start, limit;
     p2m_type_t t;
 
@@ -997,7 +996,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
         (void)p2m->get_entry(p2m, _gfn(i), &t, &a, 0, NULL, NULL);
         if ( p2m_is_ram(t) )
         {
-            gfns[j] = i;
+            gfns[j] = _gfn(i);
             j++;
             BUG_ON(j > POD_SWEEP_STRIDE);
             if ( j == POD_SWEEP_STRIDE )
@@ -1039,19 +1038,19 @@ static void pod_eager_reclaim(struct p2m_domain *p2m)
     do
     {
         unsigned int idx = (mrp->idx + i++) % ARRAY_SIZE(mrp->list);
-        unsigned long gfn = mrp->list[idx];
+        gfn_t gfn = _gfn(mrp->list[idx]);
 
-        if ( gfn != gfn_x(INVALID_GFN) )
+        if ( !gfn_eq(gfn, INVALID_GFN) )
         {
-            if ( gfn & POD_LAST_SUPERPAGE )
+            if ( gfn_x(gfn) & POD_LAST_SUPERPAGE )
             {
-                gfn &= ~POD_LAST_SUPERPAGE;
+                gfn = _gfn(gfn_x(gfn) & ~POD_LAST_SUPERPAGE);
 
                 if ( p2m_pod_zero_check_superpage(p2m, gfn) == 0 )
                 {
                     unsigned int x;
 
-                    for ( x = 0; x < SUPERPAGE_PAGES; ++x, ++gfn )
+                    for ( x = 0; x < SUPERPAGE_PAGES; ++x, gfn = gfn_add(gfn, 1) )
                         p2m_pod_zero_check(p2m, &gfn, 1);
                 }
             }
-- 
2.11.0



* [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (11 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 12/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 15:36   ` Wei Liu
  2017-09-21 12:40 ` [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest Julien Grall
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
        - The gfn_t variable was renamed to gfn_ in a previous patch, so
        use the new name
---
 xen/arch/x86/mm/p2m-ept.c |  5 ++---
 xen/arch/x86/mm/p2m-pod.c | 12 ++++++------
 xen/arch/x86/mm/p2m-pt.c  |  6 +++---
 xen/include/asm-x86/p2m.h |  2 +-
 4 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f14d1686b7..bc25582c5a 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -965,7 +965,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
             index = gfn_remainder >> ( i * EPT_TABLE_ORDER);
             ept_entry = table + index;
 
-            if ( !p2m_pod_demand_populate(p2m, gfn, i * EPT_TABLE_ORDER, q) )
+            if ( !p2m_pod_demand_populate(p2m, gfn_, i * EPT_TABLE_ORDER, q) )
                 goto retry;
             else
                 goto out;
@@ -987,8 +987,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
         ASSERT(i == 0);
         
-        if ( p2m_pod_demand_populate(p2m, gfn, 
-                                        PAGE_ORDER_4K, q) )
+        if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K, q) )
             goto out;
     }
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 0dd0f0a083..5c79444d7b 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1076,13 +1076,13 @@ static void pod_eager_record(struct p2m_domain *p2m, gfn_t gfn,
 }
 
 int
-p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
+p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
                         unsigned int order,
                         p2m_query_t q)
 {
     struct domain *d = p2m->domain;
     struct page_info *p = NULL; /* Compiler warnings */
-    gfn_t gfn_aligned = _gfn((gfn >> order) << order);
+    gfn_t gfn_aligned = _gfn((gfn_x(gfn) >> order) << order);
     mfn_t mfn;
     unsigned long i;
 
@@ -1135,8 +1135,8 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
         goto out_of_memory;
 
     /* Keep track of the highest gfn demand-populated by a guest fault */
-    if ( gfn > p2m->pod.max_guest )
-        p2m->pod.max_guest = gfn;
+    if ( gfn_x(gfn) > p2m->pod.max_guest )
+        p2m->pod.max_guest = gfn_x(gfn);
 
     /*
      * Get a page f/ the cache.  A NULL return value indicates that the
@@ -1170,7 +1170,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
             int d:16,order:16;
         } t;
 
-        t.gfn = gfn;
+        t.gfn = gfn_x(gfn);
         t.mfn = mfn_x(mfn);
         t.d = d->domain_id;
         t.order = order;
@@ -1210,7 +1210,7 @@ remap_and_retry:
             int d:16;
         } t;
 
-        t.gfn = gfn;
+        t.gfn = gfn_x(gfn);
         t.d = d->domain_id;
 
         __trace_var(TRC_MEM_POD_SUPERPAGE_SPLINTER, 0, sizeof(t), &t);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 4bfec4f5f0..a639a00e9c 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -802,7 +802,7 @@ pod_retry_l3:
             {
                 if ( q & P2M_ALLOC )
                 {
-                    if ( !p2m_pod_demand_populate(p2m, gfn, PAGE_ORDER_1G, q) )
+                    if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_1G, q) )
                         goto pod_retry_l3;
                     gdprintk(XENLOG_ERR, "%s: Allocate 1GB failed!\n", __func__);
                 }
@@ -844,7 +844,7 @@ pod_retry_l2:
         if ( p2m_flags_to_type(flags) == p2m_populate_on_demand )
         {
             if ( q & P2M_ALLOC ) {
-                if ( !p2m_pod_demand_populate(p2m, gfn, PAGE_ORDER_2M, q) )
+                if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_2M, q) )
                     goto pod_retry_l2;
             } else
                 *t = p2m_populate_on_demand;
@@ -883,7 +883,7 @@ pod_retry_l1:
         if ( l1t == p2m_populate_on_demand )
         {
             if ( q & P2M_ALLOC ) {
-                if ( !p2m_pod_demand_populate(p2m, gfn, PAGE_ORDER_4K, q) )
+                if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K, q) )
                     goto pod_retry_l1;
             } else
                 *t = p2m_populate_on_demand;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 07ca02a173..1ae9216404 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -720,7 +720,7 @@ extern void audit_p2m(struct domain *d,
 
 /* Called by p2m code when demand-populating a PoD page */
 int
-p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
+p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
                         unsigned int order,
                         p2m_query_t q);
 
-- 
2.11.0



* [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (12 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 15:36   ` Wei Liu
  2017-09-21 12:40 ` [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate Julien Grall
  2017-09-21 12:40 ` [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line Julien Grall
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
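
For reference, the gfn_max() helper used below is presumably the
typesafe counterpart of max(), along the lines of:

    #define gfn_max(x, y) _gfn(max(gfn_x(x), gfn_x(y)))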
---
 xen/arch/x86/mm/p2m-pod.c | 11 +++++------
 xen/include/asm-x86/p2m.h |  4 ++--
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 5c79444d7b..311762b1ce 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -977,10 +977,10 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
     p2m_type_t t;
 
 
-    if ( p2m->pod.reclaim_single == 0 )
+    if ( gfn_eq(p2m->pod.reclaim_single, _gfn(0)) )
         p2m->pod.reclaim_single = p2m->pod.max_guest;
 
-    start = p2m->pod.reclaim_single;
+    start = gfn_x(p2m->pod.reclaim_single);
     limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
 
     /* FIXME: Figure out how to avoid superpages */
@@ -990,7 +990,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
      * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
      */
     p2m_lock(p2m);
-    for ( i = p2m->pod.reclaim_single; i > 0 ; i-- )
+    for ( i = gfn_x(p2m->pod.reclaim_single); i > 0 ; i-- )
     {
         p2m_access_t a;
         (void)p2m->get_entry(p2m, _gfn(i), &t, &a, 0, NULL, NULL);
@@ -1020,7 +1020,7 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
         p2m_pod_zero_check(p2m, gfns, j);
 
     p2m_unlock(p2m);
-    p2m->pod.reclaim_single = i ? i - 1 : i;
+    p2m->pod.reclaim_single = _gfn(i ? i - 1 : i);
 
 }
 
@@ -1135,8 +1135,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
         goto out_of_memory;
 
     /* Keep track of the highest gfn demand-populated by a guest fault */
-    if ( gfn_x(gfn) > p2m->pod.max_guest )
-        p2m->pod.max_guest = gfn_x(gfn);
+    p2m->pod.max_guest = gfn_max(gfn, p2m->pod.max_guest);
 
     /*
      * Get a page f/ the cache.  A NULL return value indicates that the
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1ae9216404..e8a9dca480 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -316,8 +316,8 @@ struct p2m_domain {
                          single;       /* Non-super lists                   */
         long             count,        /* # of pages in cache lists         */
                          entry_count;  /* # of pages in p2m marked pod      */
-        unsigned long    reclaim_single; /* Last gpfn of a scan */
-        unsigned long    max_guest;    /* gpfn of max guest demand-populate */
+        gfn_t            reclaim_single; /* Last gfn of a scan */
+        gfn_t            max_guest;    /* gfn of max guest demand-populate */
 
         /*
          * Tracking of the most recently populated PoD pages, for eager
-- 
2.11.0



* [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (13 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 15:37   ` Wei Liu
  2017-09-21 12:40 ` [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line Julien Grall
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Julien Grall, Jan Beulich

    - Switch the return type to bool
    - Remove the parameter p2m_query_t q as it is not used
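
Note this also inverts the sense of the return value: the int version
returned 0 on success and -1 on failure, while the bool version returns
true on success. This is why the callers in the hunks below flip their
conditions, e.g. (a sketch of one call site):

    /* Before: 0 meant success. */
    if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_2M, q) )
        goto pod_retry_l2;

    /* After: true means success. */
    if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_2M) )
        goto pod_retry_l2;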

Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Add Andrew's acked-by
---
 xen/arch/x86/mm/p2m-ept.c |  4 ++--
 xen/arch/x86/mm/p2m-pod.c | 15 +++++++--------
 xen/arch/x86/mm/p2m-pt.c  |  6 +++---
 xen/include/asm-x86/p2m.h |  6 ++----
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index bc25582c5a..054827aa88 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -965,7 +965,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
             index = gfn_remainder >> ( i * EPT_TABLE_ORDER);
             ept_entry = table + index;
 
-            if ( !p2m_pod_demand_populate(p2m, gfn_, i * EPT_TABLE_ORDER, q) )
+            if ( p2m_pod_demand_populate(p2m, gfn_, i * EPT_TABLE_ORDER) )
                 goto retry;
             else
                 goto out;
@@ -987,7 +987,7 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
         ASSERT(i == 0);
         
-        if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K, q) )
+        if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K) )
             goto out;
     }
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 311762b1ce..89b7dc949d 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1075,10 +1075,9 @@ static void pod_eager_record(struct p2m_domain *p2m, gfn_t gfn,
     mrp->idx %= ARRAY_SIZE(mrp->list);
 }
 
-int
+bool
 p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
-                        unsigned int order,
-                        p2m_query_t q)
+                        unsigned int order)
 {
     struct domain *d = p2m->domain;
     struct page_info *p = NULL; /* Compiler warnings */
@@ -1116,7 +1115,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
          */
         p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
                       p2m_populate_on_demand, p2m->default_access);
-        return 0;
+        return true;
     }
 
     /* Only reclaim if we're in actual need of more cache. */
@@ -1178,7 +1177,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
     }
 
     pod_unlock(p2m);
-    return 0;
+    return true;
 out_of_memory:
     pod_unlock(p2m);
 
@@ -1186,10 +1185,10 @@ out_of_memory:
            __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
            current->domain->domain_id);
     domain_crash(d);
-    return -1;
+    return false;
 out_fail:
     pod_unlock(p2m);
-    return -1;
+    return false;
 remap_and_retry:
     BUG_ON(order != PAGE_ORDER_2M);
     pod_unlock(p2m);
@@ -1215,7 +1214,7 @@ remap_and_retry:
         __trace_var(TRC_MEM_POD_SUPERPAGE_SPLINTER, 0, sizeof(t), &t);
     }
 
-    return 0;
+    return true;
 }
 
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index a639a00e9c..50637083f4 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -802,7 +802,7 @@ pod_retry_l3:
             {
                 if ( q & P2M_ALLOC )
                 {
-                    if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_1G, q) )
+                    if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_1G) )
                         goto pod_retry_l3;
                     gdprintk(XENLOG_ERR, "%s: Allocate 1GB failed!\n", __func__);
                 }
@@ -844,7 +844,7 @@ pod_retry_l2:
         if ( p2m_flags_to_type(flags) == p2m_populate_on_demand )
         {
             if ( q & P2M_ALLOC ) {
-                if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_2M, q) )
+                if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_2M) )
                     goto pod_retry_l2;
             } else
                 *t = p2m_populate_on_demand;
@@ -883,7 +883,7 @@ pod_retry_l1:
         if ( l1t == p2m_populate_on_demand )
         {
             if ( q & P2M_ALLOC ) {
-                if ( !p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K, q) )
+                if ( p2m_pod_demand_populate(p2m, gfn_, PAGE_ORDER_4K) )
                     goto pod_retry_l1;
             } else
                 *t = p2m_populate_on_demand;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index e8a9dca480..70f00c332f 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -719,10 +719,8 @@ extern void audit_p2m(struct domain *d,
 #endif
 
 /* Called by p2m code when demand-populating a PoD page */
-int
-p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn,
-                        unsigned int order,
-                        p2m_query_t q);
+bool
+p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn, unsigned int order);
 
 /*
  * Functions specific to the p2m-pt implementation
-- 
2.11.0



* [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line
  2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
                   ` (14 preceding siblings ...)
  2017-09-21 12:40 ` [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate Julien Grall
@ 2017-09-21 12:40 ` Julien Grall
  2017-09-21 12:52   ` Razvan Cojocaru
  2017-09-21 12:55   ` Jan Beulich
  15 siblings, 2 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 12:40 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Razvan Cojocaru, George Dunlap, Andrew Cooper,
	Julien Grall, Jan Beulich

Signed-off-by: Julien Grall <julien.grall@arm.com>

---

Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

    Changes in v2:
        - Patch added
---
 xen/arch/x86/mm/mem_access.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 948e377e69..5c1008890e 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -274,7 +274,7 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     }
 
     return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a,
-                         (current->domain != d));
+                           (current->domain != d));
 }
 
 static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
-- 
2.11.0



* Re: [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line
  2017-09-21 12:40 ` [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line Julien Grall
@ 2017-09-21 12:52   ` Razvan Cojocaru
  2017-09-21 12:55   ` Jan Beulich
  1 sibling, 0 replies; 49+ messages in thread
From: Razvan Cojocaru @ 2017-09-21 12:52 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: George Dunlap, Andrew Cooper, Tamas K Lengyel, Jan Beulich

On 09/21/2017 03:40 PM, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> ---
> 
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line
  2017-09-21 12:40 ` [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line Julien Grall
  2017-09-21 12:52   ` Razvan Cojocaru
@ 2017-09-21 12:55   ` Jan Beulich
  1 sibling, 0 replies; 49+ messages in thread
From: Jan Beulich @ 2017-09-21 12:55 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru,
	xen-devel

>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
> --- a/xen/arch/x86/mm/mem_access.c
> +++ b/xen/arch/x86/mm/mem_access.c
> @@ -274,7 +274,7 @@ int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
>      }
>  
>      return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a,
> -                         (current->domain != d));
> +                           (current->domain != d));

But once touching it, the stray parentheses could go away as well.
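
That is, something like (a sketch):

    return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a,
                           current->domain != d);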

Jan



* Re: [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes
  2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
@ 2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:23   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:20PM +0100, Julien Grall wrote:
> A lot of the headers are not necessary. At the same time, order them in
> alphabetical order.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces
  2017-09-21 12:40 ` [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces Julien Grall
@ 2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:25   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:21PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
  2017-09-21 12:40 ` [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments Julien Grall
@ 2017-09-21 14:47   ` Wei Liu
  2017-09-21 16:28   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:22PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
@ 2017-09-21 14:48   ` Wei Liu
  2017-09-21 16:33   ` George Dunlap
  2017-09-22  9:15   ` Jan Beulich
  2 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:48 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:23PM +0100, Julien Grall wrote:
> Also take the opportunity to:
>     - move from 1 << * to 1UL << *.
>     - use unsigned when possible
>     - move from unsigned int -> unsigned long for some induction
>     variables
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate
  2017-09-21 12:40 ` [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate Julien Grall
@ 2017-09-21 14:53   ` Wei Liu
  2017-09-21 16:35   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:53 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:24PM +0100, Julien Grall wrote:
> gfn_aligned is assigned 3 times with the exact same formula. None of the
> variables used is modified, so consolidate it into a single assignment
> at the beginning of the function.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN
  2017-09-21 12:40 ` [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN Julien Grall
@ 2017-09-21 14:54   ` Wei Liu
  2017-09-21 16:36   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 14:54 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:25PM +0100, Julien Grall wrote:
> Some unboxing/boxing can be avoided by using mfn_add(...) instead.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation
  2017-09-21 12:40 ` [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation Julien Grall
@ 2017-09-21 15:19   ` Wei Liu
  2017-09-21 16:39   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:19 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:26PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I wonder how gfn_lock works with the new type without any change, but it
appears gfn_lock ignores gfn completely. :-)
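
A minimal sketch of why that works, assuming the usual mm-locks.h
wrapper style (the in-tree macros may differ in detail):

    /*
     * The gfn and order arguments are accepted for documentation purposes
     * only and discarded, so switching call sites from unsigned long to
     * gfn_t compiles without touching the lock itself.
     */
    #define gfn_lock(p2m, g, o)   p2m_lock(p2m)
    #define gfn_unlock(p2m, g, o) p2m_unlock(p2m)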

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 08/16] xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and set_entry
  2017-09-21 12:40 ` [PATCH v2 08/16] xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and set_entry Julien Grall
@ 2017-09-21 15:23   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:23 UTC (permalink / raw)
  To: Julien Grall
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jan Beulich,
	Razvan Cojocaru, George Dunlap, Andrew Cooper, xen-devel,
	Jun Nakajima

On Thu, Sep 21, 2017 at 01:40:27PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 09/16] xen/x86: p2m: Use typesafe GFN in p2m_set_entry
  2017-09-21 12:40 ` [PATCH v2 09/16] xen/x86: p2m: Use typesafe GFN in p2m_set_entry Julien Grall
@ 2017-09-21 15:34   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:34 UTC (permalink / raw)
  To: Julien Grall
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Jan Beulich

On Thu, Sep 21, 2017 at 01:40:28PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
> 

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 10/16] xen/x86: p2m-pod: Use typesafe GFN in pod_eager_record
  2017-09-21 12:40 ` [PATCH v2 10/16] xen/x86: p2m-pod: Use typesafe GFN in pod_eager_record Julien Grall
@ 2017-09-21 15:35   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:35 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:29PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check
  2017-09-21 12:40 ` [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check Julien Grall
@ 2017-09-21 15:35   ` Wei Liu
  2017-09-22  9:26   ` Jan Beulich
  1 sibling, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:35 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:30PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 12/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check
  2017-09-21 12:40 ` [PATCH v2 12/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check Julien Grall
@ 2017-09-21 15:36   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:36 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:31PM +0100, Julien Grall wrote:
> At the same time, make the array gfns const, as it is not modified within
> the function.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate
  2017-09-21 12:40 ` [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate Julien Grall
@ 2017-09-21 15:36   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:36 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:32PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest
  2017-09-21 12:40 ` [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest Julien Grall
@ 2017-09-21 15:36   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:36 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:33PM +0100, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate
  2017-09-21 12:40 ` [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate Julien Grall
@ 2017-09-21 15:37   ` Wei Liu
  0 siblings, 0 replies; 49+ messages in thread
From: Wei Liu @ 2017-09-21 15:37 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, xen-devel

On Thu, Sep 21, 2017 at 01:40:34PM +0100, Julien Grall wrote:
>     - Switch the return type to bool
>     - Remove the parameter p2m_query_t q as it is not used
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes
  2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
  2017-09-21 14:47   ` Wei Liu
@ 2017-09-21 16:23   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:23 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> A lot of the headers are not necessary. At the same time, order them in
> alphabetical order.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/x86/mm/p2m-pod.c | 11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 4085b7f752..fec87e5224 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -19,18 +19,13 @@
>   * along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/iommu.h>
> -#include <xen/vm_event.h>
>  #include <xen/event.h>
> -#include <public/vm_event.h>
> -#include <asm/domain.h>
> +#include <xen/mm.h>
> +#include <xen/sched.h>
> +#include <xen/trace.h>
>  #include <asm/page.h>
>  #include <asm/paging.h>
>  #include <asm/p2m.h>
> -#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
> -#include <asm/mem_sharing.h>
> -#include <asm/hvm/nestedhvm.h>
> -#include <asm/hvm/svm/amd-iommu-proto.h>
>  
>  #include "mm-locks.h"
>  
> 



* Re: [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces
  2017-09-21 12:40 ` [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces Julien Grall
  2017-09-21 14:47   ` Wei Liu
@ 2017-09-21 16:25   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:25 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/x86/mm/p2m-pod.c | 46 +++++++++++++++++++++++-----------------------
>  1 file changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index fec87e5224..1f07441259 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1,7 +1,7 @@
>  /******************************************************************************
>   * arch/x86/mm/p2m-pod.c
>   *
> - * Populate-on-demand p2m entries. 
> + * Populate-on-demand p2m entries.
>   *
>   * Copyright (c) 2009-2011 Citrix Systems, Inc.
>   *
> @@ -76,7 +76,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>                 __func__, mfn_x(mfn), order, ((1UL << order) - 1));
>          return -1;
>      }
> -    
> +
>      for(i=0; i < 1 << order ; i++) {
>          struct domain * od;
>  
> @@ -223,8 +223,8 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
>                  /* If we can't allocate a superpage, try singleton pages */
>                  order = PAGE_ORDER_4K;
>                  goto retry;
> -            }   
> -            
> +            }
> +
>              printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
>                     __func__, pod_target, p2m->pod.count);
>              ret = -ENOMEM;
> @@ -272,7 +272,7 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
>  
>              if ( test_and_clear_bit(_PGT_pinned, &(page+i)->u.inuse.type_info) )
>                  put_page_and_type(page+i);
> -            
> +
>              if ( test_and_clear_bit(_PGC_allocated, &(page+i)->count_info) )
>                  put_page(page+i);
>  
> @@ -296,7 +296,7 @@ out:
>   * definitions:
>   * + M: static_max
>   * + B: number of pages the balloon driver has ballooned down to.
> - * + P: Number of populated pages. 
> + * + P: Number of populated pages.
>   * + T: Old target
>   * + T': New target
>   *
> @@ -311,10 +311,10 @@ out:
>   *   the remainder of the ram to the guest OS.
>   *  T <T'<B : Increase PoD cache size.
>   *  T'<T<=B : Here we have a choice.  We can decrease the size of the cache,
> - *   get the memory right away.  However, that means every time we 
> - *   reduce the memory target we risk the guest attempting to populate the 
> + *   get the memory right away.  However, that means every time we
> + *   reduce the memory target we risk the guest attempting to populate the
>   *   memory before the balloon driver has reached its new target.  Safer to
> - *   never reduce the cache size here, but only when the balloon driver frees 
> + *   never reduce the cache size here, but only when the balloon driver frees
>   *   PoD ranges.
>   *
>   * If there are many zero pages, we could reach the target also by doing
> @@ -511,7 +511,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>      long pod, nonpod, ram;
>  
>      gfn_lock(p2m, gpfn, order);
> -    pod_lock(p2m);    
> +    pod_lock(p2m);
>  
>      /* If we don't have any outstanding PoD entries, let things take their
>       * course */
> @@ -629,7 +629,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>              nonpod -= n;
>              ram -= n;
>          }
> -    }    
> +    }
>  
>      /* If there are no more non-PoD entries, tell decrease_reservation() that
>       * there's nothing left to do. */
> @@ -682,7 +682,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>      if ( paging_mode_shadow(d) )
>          max_ref++;
>  
> -    /* NOTE: this is why we don't enforce deadlock constraints between p2m 
> +    /* NOTE: this is why we don't enforce deadlock constraints between p2m
>       * and pod locks */
>      gfn_lock(p2m, gfn, SUPERPAGE_ORDER);
>  
> @@ -690,7 +690,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>       * and aligned, and mapping them. */
>      for ( i = 0; i < SUPERPAGE_PAGES; i += n )
>      {
> -        p2m_access_t a; 
> +        p2m_access_t a;
>          unsigned int cur_order;
>          unsigned long k;
>          const struct page_info *page;
> @@ -807,7 +807,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>  out_reset:
>      if ( reset )
>          p2m_set_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
> -    
> +
>  out:
>      gfn_unlock(p2m, gfn, SUPERPAGE_ORDER);
>      return ret;
> @@ -836,8 +836,8 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>          /* If this is ram, and not a pagetable or from the xen heap, and probably not mapped
>             elsewhere, map it; otherwise, skip. */
>          if ( p2m_is_ram(types[i])
> -             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 ) 
> -             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 ) 
> +             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
> +             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
>               && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
>              map[i] = map_domain_page(mfns[i]);
>          else
> @@ -915,7 +915,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>                  t.mfn = mfn_x(mfns[i]);
>                  t.d = d->domain_id;
>                  t.order = 0;
> -        
> +
>                  __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
>              }
>  
> @@ -924,7 +924,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>              p2m->pod.entry_count++;
>          }
>      }
> -    
> +
>  }
>  
>  #define POD_SWEEP_LIMIT 1024
> @@ -1046,12 +1046,12 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>      pod_lock(p2m);
>  
>      /* This check is done with the pod lock held.  This will make sure that
> -     * even if d->is_dying changes under our feet, p2m_pod_empty_cache() 
> +     * even if d->is_dying changes under our feet, p2m_pod_empty_cache()
>       * won't start until we're done. */
>      if ( unlikely(d->is_dying) )
>          goto out_fail;
>  
> -    
> +
>      /* Because PoD does not have cache list for 1GB pages, it has to remap
>       * 1GB region to 2MB chunks for a retry. */
>      if ( order == PAGE_ORDER_1G )
> @@ -1107,7 +1107,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>          set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_aligned + i);
>          paging_mark_dirty(d, mfn_add(mfn, i));
>      }
> -    
> +
>      p2m->pod.entry_count -= (1 << order);
>      BUG_ON(p2m->pod.entry_count < 0);
>  
> @@ -1124,7 +1124,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>          t.mfn = mfn_x(mfn);
>          t.d = d->domain_id;
>          t.order = order;
> -        
> +
>          __trace_var(TRC_MEM_POD_POPULATE, 0, sizeof(t), &t);
>      }
>  
> @@ -1161,7 +1161,7 @@ remap_and_retry:
>  
>          t.gfn = gfn;
>          t.d = d->domain_id;
> -        
> +
>          __trace_var(TRC_MEM_POD_SUPERPAGE_SPLINTER, 0, sizeof(t), &t);
>      }
>  
> 



* Re: [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
  2017-09-21 12:40 ` [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments Julien Grall
  2017-09-21 14:47   ` Wei Liu
@ 2017-09-21 16:28   ` George Dunlap
  2017-09-21 16:33     ` George Dunlap
  2017-09-21 16:34     ` Julien Grall
  1 sibling, 2 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:28 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Should probably add something like:

"While we're here, specify 1UL when shifting in a couple of cases."

This could be done on check-in.

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/x86/mm/p2m-pod.c | 154 ++++++++++++++++++++++++++++++----------------
>  1 file changed, 102 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 1f07441259..6f045081b5 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -155,8 +155,10 @@ static struct page_info * p2m_pod_cache_get(struct p2m_domain *p2m,
>  
>          BUG_ON( page_list_empty(&p2m->pod.super) );
>  
> -        /* Break up a superpage to make single pages. NB count doesn't
> -         * need to be adjusted. */
> +        /*
> +         * Break up a superpage to make single pages. NB count doesn't
> +         * need to be adjusted.
> +         */
>          p = page_list_remove_head(&p2m->pod.super);
>          mfn = mfn_x(page_to_mfn(p));
>  
> @@ -242,8 +244,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
>      }
>  
>      /* Decreasing the target */
> -    /* We hold the pod lock here, so we don't need to worry about
> -     * cache disappearing under our feet. */
> +    /*
> +     * We hold the pod lock here, so we don't need to worry about
> +     * cache disappearing under our feet.
> +     */
>      while ( pod_target < p2m->pod.count )
>      {
>          struct page_info * page;
> @@ -345,15 +349,19 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
>      if ( d->is_dying )
>          goto out;
>  
> -    /* T' < B: Don't reduce the cache size; let the balloon driver
> -     * take care of it. */
> +    /*
> +     * T' < B: Don't reduce the cache size; let the balloon driver
> +     * take care of it.
> +     */
>      if ( target < d->tot_pages )
>          goto out;
>  
>      pod_target = target - populated;
>  
> -    /* B < T': Set the cache size equal to # of outstanding entries,
> -     * let the balloon driver fill in the rest. */
> +    /*
> +     * B < T': Set the cache size equal to # of outstanding entries,
> +     * let the balloon driver fill in the rest.
> +     */
>      if ( populated > 0 && pod_target > p2m->pod.entry_count )
>          pod_target = p2m->pod.entry_count;
>  
> @@ -491,7 +499,8 @@ static int
>  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
>  
>  
> -/* This function is needed for two reasons:
> +/*
> + * This function is needed for two reasons:
>   * + To properly handle clearing of PoD entries
>   * + To "steal back" memory being freed for the PoD cache, rather than
>   *   releasing it.
> @@ -513,8 +522,10 @@ p2m_pod_decrease_reservation(struct domain *d,
>      gfn_lock(p2m, gpfn, order);
>      pod_lock(p2m);
>  
> -    /* If we don't have any outstanding PoD entries, let things take their
> -     * course */
> +    /*
> +     * If we don't have any outstanding PoD entries, let things take their
> +     * course.
> +     */
>      if ( p2m->pod.entry_count == 0 )
>          goto out_unlock;
>  
> @@ -550,8 +561,10 @@ p2m_pod_decrease_reservation(struct domain *d,
>  
>      if ( !nonpod )
>      {
> -        /* All PoD: Mark the whole region invalid and tell caller
> -         * we're done. */
> +        /*
> +         * All PoD: Mark the whole region invalid and tell caller
> +         * we're done.
> +         */
>          p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
>                        p2m->default_access);
>          p2m->pod.entry_count-=(1<<order);
> @@ -568,15 +581,16 @@ p2m_pod_decrease_reservation(struct domain *d,
>       * - order >= SUPERPAGE_ORDER (the loop below will take care of this)
>       * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
>       */
> -    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1 << order) &&
> +    if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
>           p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
>      {
> -        pod = 1 << order;
> +        pod = 1UL << order;
>          ram = nonpod = 0;
>          ASSERT(steal_for_cache == (p2m->pod.entry_count > p2m->pod.count));
>      }
>  
> -    /* Process as long as:
> +    /*
> +     * Process as long as:
>       * + There are PoD entries to handle, or
>       * + There is ram left, and we want to steal it
>       */
> @@ -631,8 +645,10 @@ p2m_pod_decrease_reservation(struct domain *d,
>          }
>      }
>  
> -    /* If there are no more non-PoD entries, tell decrease_reservation() that
> -     * there's nothing left to do. */
> +    /*
> +     * If there are no more non-PoD entries, tell decrease_reservation() that
> +     * there's nothing left to do.
> +     */
>      if ( nonpod == 0 )
>          ret = 1;
>  
> @@ -658,9 +674,11 @@ void p2m_pod_dump_data(struct domain *d)
>  }
>  
>  
> -/* Search for all-zero superpages to be reclaimed as superpages for the
> +/*
> + * Search for all-zero superpages to be reclaimed as superpages for the
>   * PoD cache. Must be called w/ pod lock held, must lock the superpage
> - * in the p2m */
> + * in the p2m.
> + */
>  static int
>  p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>  {
> @@ -682,12 +700,16 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>      if ( paging_mode_shadow(d) )
>          max_ref++;
>  
> -    /* NOTE: this is why we don't enforce deadlock constraints between p2m
> -     * and pod locks */
> +    /*
> +     * NOTE: this is why we don't enforce deadlock constraints between p2m
> +     * and pod locks.
> +     */
>      gfn_lock(p2m, gfn, SUPERPAGE_ORDER);
>  
> -    /* Look up the mfns, checking to make sure they're the same mfn
> -     * and aligned, and mapping them. */
> +    /*
> +     * Look up the mfns, checking to make sure they're the same mfn
> +     * and aligned, and mapping them.
> +     */
>      for ( i = 0; i < SUPERPAGE_PAGES; i += n )
>      {
>          p2m_access_t a;
> @@ -697,7 +719,8 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>  
>          mfn = p2m->get_entry(p2m, gfn + i, &type, &a, 0, &cur_order, NULL);
>  
> -        /* Conditions that must be met for superpage-superpage:
> +        /*
> +         * Conditions that must be met for superpage-superpage:
>           * + All gfns are ram types
>           * + All gfns have the same type
>           * + All of the mfns are allocated to a domain
> @@ -751,9 +774,11 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>                    p2m_populate_on_demand, p2m->default_access);
>      p2m_tlb_flush_sync(p2m);
>  
> -    /* Make none of the MFNs are used elsewhere... for example, mapped
> +    /*
> +     * Make none of the MFNs are used elsewhere... for example, mapped
>       * via the grant table interface, or by qemu.  Allow one refcount for
> -     * being allocated to the domain. */
> +     * being allocated to the domain.
> +     */
>      for ( i=0; i < SUPERPAGE_PAGES; i++ )
>      {
>          mfn = _mfn(mfn_x(mfn0) + i);
> @@ -797,8 +822,10 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>          __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
>      }
>  
> -    /* Finally!  We've passed all the checks, and can add the mfn superpage
> -     * back on the PoD cache, and account for the new p2m PoD entries */
> +    /*
> +     * Finally!  We've passed all the checks, and can add the mfn superpage
> +     * back on the PoD cache, and account for the new p2m PoD entries.
> +     */
>      p2m_pod_cache_add(p2m, mfn_to_page(mfn0), PAGE_ORDER_2M);
>      p2m->pod.entry_count += SUPERPAGE_PAGES;
>  
> @@ -833,8 +860,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>      {
>          p2m_access_t a;
>          mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a, 0, NULL, NULL);
> -        /* If this is ram, and not a pagetable or from the xen heap, and probably not mapped
> -           elsewhere, map it; otherwise, skip. */
> +        /*
> +         * If this is ram, and not a pagetable or from the xen heap, and
> +         * probably not mapped elsewhere, map it; otherwise, skip.
> +         */
>          if ( p2m_is_ram(types[i])
>               && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
>               && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
> @@ -844,8 +873,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>              map[i] = NULL;
>      }
>  
> -    /* Then, go through and check for zeroed pages, removing write permission
> -     * for those with zeroes. */
> +    /*
> +     * Then, go through and check for zeroed pages, removing write permission
> +     * for those with zeroes.
> +     */
>      for ( i=0; i<count; i++ )
>      {
>          if(!map[i])
> @@ -867,8 +898,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>          p2m_set_entry(p2m, gfns[i], INVALID_MFN, PAGE_ORDER_4K,
>                        p2m_populate_on_demand, p2m->default_access);
>  
> -        /* See if the page was successfully unmapped.  (Allow one refcount
> -         * for being allocated to a domain.) */
> +        /*
> +         * See if the page was successfully unmapped.  (Allow one refcount
> +         * for being allocated to a domain.)
> +         */
>          if ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) > 1 )
>          {
>              unmap_domain_page(map[i]);
> @@ -895,8 +928,10 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>  
>          unmap_domain_page(map[i]);
>  
> -        /* See comment in p2m_pod_zero_check_superpage() re gnttab
> -         * check timing.  */
> +        /*
> +         * See comment in p2m_pod_zero_check_superpage() re gnttab
> +         * check timing.
> +         */
>          if ( j < PAGE_SIZE/sizeof(*map[i]) )
>          {
>              p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
> @@ -944,9 +979,11 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
>      limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
>  
>      /* FIXME: Figure out how to avoid superpages */
> -    /* NOTE: Promote to globally locking the p2m. This will get complicated
> +    /*
> +     * NOTE: Promote to globally locking the p2m. This will get complicated
>       * in a fine-grained scenario. If we lock each gfn individually we must be
> -     * careful about spinlock recursion limits and POD_SWEEP_STRIDE. */
> +     * careful about spinlock recursion limits and POD_SWEEP_STRIDE.
> +     */
>      p2m_lock(p2m);
>      for ( i=p2m->pod.reclaim_single; i > 0 ; i-- )
>      {
> @@ -963,11 +1000,13 @@ p2m_pod_emergency_sweep(struct p2m_domain *p2m)
>                  j = 0;
>              }
>          }
> -        /* Stop if we're past our limit and we have found *something*.
> +        /*
> +         * Stop if we're past our limit and we have found *something*.
>           *
>           * NB that this is a zero-sum game; we're increasing our cache size
>           * by re-increasing our 'debt'.  Since we hold the pod lock,
> -         * (entry_count - count) must remain the same. */
> +         * (entry_count - count) must remain the same.
> +         */
>          if ( i < limit && (p2m->pod.count > 0 || hypercall_preempt_check()) )
>              break;
>      }
> @@ -1045,20 +1084,25 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>      ASSERT(gfn_locked_by_me(p2m, gfn));
>      pod_lock(p2m);
>  
> -    /* This check is done with the pod lock held.  This will make sure that
> +    /*
> +     * This check is done with the pod lock held.  This will make sure that
>       * even if d->is_dying changes under our feet, p2m_pod_empty_cache()
> -     * won't start until we're done. */
> +     * won't start until we're done.
> +     */
>      if ( unlikely(d->is_dying) )
>          goto out_fail;
>  
>  
> -    /* Because PoD does not have cache list for 1GB pages, it has to remap
> -     * 1GB region to 2MB chunks for a retry. */
> +    /*
> +     * Because PoD does not have cache list for 1GB pages, it has to remap
> +     * 1GB region to 2MB chunks for a retry.
> +     */
>      if ( order == PAGE_ORDER_1G )
>      {
>          pod_unlock(p2m);
>          gfn_aligned = (gfn >> order) << order;
> -        /* Note that we are supposed to call p2m_set_entry() 512 times to
> +        /*
> +         * Note that we are supposed to call p2m_set_entry() 512 times to
>           * split 1GB into 512 2MB pages here. But We only do once here because
>           * p2m_set_entry() should automatically shatter the 1GB page into
>           * 512 2MB pages. The rest of 511 calls are unnecessary.
> @@ -1075,8 +1119,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>      if ( p2m->pod.entry_count > p2m->pod.count )
>          pod_eager_reclaim(p2m);
>  
> -    /* Only sweep if we're actually out of memory.  Doing anything else
> -     * causes unnecessary time and fragmentation of superpages in the p2m. */
> +    /*
> +     * Only sweep if we're actually out of memory.  Doing anything else
> +     * causes unnecessary time and fragmentation of superpages in the p2m.
> +     */
>      if ( p2m->pod.count == 0 )
>          p2m_pod_emergency_sweep(p2m);
>  
> @@ -1088,8 +1134,10 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>      if ( gfn > p2m->pod.max_guest )
>          p2m->pod.max_guest = gfn;
>  
> -    /* Get a page f/ the cache.  A NULL return value indicates that the
> -     * 2-meg range should be marked singleton PoD, and retried */
> +    /*
> +     * Get a page f/ the cache.  A NULL return value indicates that the
> +     * 2-meg range should be marked singleton PoD, and retried.
> +     */
>      if ( (p = p2m_pod_cache_get(p2m, order)) == NULL )
>          goto remap_and_retry;
>  
> @@ -1146,8 +1194,10 @@ remap_and_retry:
>      pod_unlock(p2m);
>  
>      /* Remap this 2-meg region in singleton chunks */
> -    /* NOTE: In a p2m fine-grained lock scenario this might
> -     * need promoting the gfn lock from gfn->2M superpage */
> +    /*
> +     * NOTE: In a p2m fine-grained lock scenario this might
> +     * need promoting the gfn lock from gfn->2M superpage.
> +     */
>      gfn_aligned = (gfn>>order)<<order;
>      for(i=0; i<(1<<order); i++)
>          p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
> 
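
For reference, the conversion in the hunks above follows the multi-line
comment style from Xen's CODING_STYLE; a minimal before/after sketch (not
taken from the patch itself):

    /* Old style: text starts on the opening line, and the closing
     * marker shares the final line of text. */

    /*
     * New style: the opening and closing markers sit on their own
     * lines, and each continuation line starts with " * ".
     */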



* Re: [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
  2017-09-21 16:28   ` George Dunlap
@ 2017-09-21 16:33     ` George Dunlap
  2017-09-21 16:34     ` Julien Grall
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:33 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 05:28 PM, George Dunlap wrote:
> On 09/21/2017 01:40 PM, Julien Grall wrote:
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Should probably add something like:
> 
> "While we're here, specify 1UL when shifting in a couple of cases."

Oh, I see -- this is a sneak peek at the next patch. :-)

If everything else looks good I can move it when I check it in.

 -George


* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
  2017-09-21 14:48   ` Wei Liu
@ 2017-09-21 16:33   ` George Dunlap
  2017-09-22  9:15   ` Jan Beulich
  2 siblings, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:33 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> Also take the opportunity to:
>     - move from 1 << * to 1UL << *.
>     - use unsigned when possible
>     - move from unsigned int -> unsigned long for some induction
>     variables
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments
  2017-09-21 16:28   ` George Dunlap
  2017-09-21 16:33     ` George Dunlap
@ 2017-09-21 16:34     ` Julien Grall
  1 sibling, 0 replies; 49+ messages in thread
From: Julien Grall @ 2017-09-21 16:34 UTC (permalink / raw)
  To: George Dunlap, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

Hi,

On 21/09/17 17:28, George Dunlap wrote:
> On 09/21/2017 01:40 PM, Julien Grall wrote:
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Should probably add something like:
> 
> "While we're here, specify 1UL when shifting in a couple of cases."

This was meant to be done in patch #4, which contains the other 1UL << 
changes. I probably messed up during the split. I can move them to #4 if 
I need to resend the series. Otherwise adding this sentence to the commit 
message would be fine.

Cheers,

> 
> This could be done on check-in.
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
>>
>> ---
>>
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>>      Changes in v2:
>>          - Add Andrew's acked-by
>> ---
>>   xen/arch/x86/mm/p2m-pod.c | 154 ++++++++++++++++++++++++++++++----------------
>>   1 file changed, 102 insertions(+), 52 deletions(-)
>>
>> [...]
> 

-- 
Julien Grall


* Re: [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate
  2017-09-21 12:40 ` [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate Julien Grall
  2017-09-21 14:53   ` Wei Liu
@ 2017-09-21 16:35   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> gfn_aligned is assigned three times with the exact same formula. None of
> the variables involved is modified in between, so consolidate into a single
> assignment at the beginning of the function.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/x86/mm/p2m-pod.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index f04d6e03e2..bcc87aee03 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1079,7 +1079,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>  {
>      struct domain *d = p2m->domain;
>      struct page_info *p = NULL; /* Compiler warnings */
> -    unsigned long gfn_aligned;
> +    unsigned long gfn_aligned = (gfn >> order) << order;
>      mfn_t mfn;
>      unsigned long i;
>  
> @@ -1102,7 +1102,6 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>      if ( order == PAGE_ORDER_1G )
>      {
>          pod_unlock(p2m);
> -        gfn_aligned = (gfn >> order) << order;
>          /*
>           * Note that we are supposed to call p2m_set_entry() 512 times to
>           * split 1GB into 512 2MB pages here. But We only do once here because
> @@ -1147,8 +1146,6 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
>  
>      BUG_ON((mfn_x(mfn) & ((1UL << order) - 1)) != 0);
>  
> -    gfn_aligned = (gfn >> order) << order;
> -
>      p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
>                    p2m->default_access);
>  
> @@ -1200,7 +1197,6 @@ remap_and_retry:
>       * NOTE: In a p2m fine-grained lock scenario this might
>       * need promoting the gfn lock from gfn->2M superpage.
>       */
> -    gfn_aligned = (gfn >> order) << order;
>      for ( i = 0; i < (1UL << order); i++ )
>          p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
>                        p2m_populate_on_demand, p2m->default_access);
> 
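
As an aside, the formula being consolidated rounds a gfn down to the start
of its order-aligned block; a small illustrative sketch (values
hypothetical, PAGE_ORDER_2M being the 2MB order on x86):

    /*
     * (gfn >> order) << order clears the low 'order' bits, i.e. it is
     * equivalent to gfn & ~((1UL << order) - 1).
     */
    unsigned long gfn = 0x12345;
    unsigned int order = PAGE_ORDER_2M;                   /* 9 */
    unsigned long gfn_aligned = (gfn >> order) << order;  /* 0x12200 */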



* Re: [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN
  2017-09-21 12:40 ` [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN Julien Grall
  2017-09-21 14:54   ` Wei Liu
@ 2017-09-21 16:36   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:36 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> Some unboxing/boxing can be avoided by using mfn_add(...) instead.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/x86/mm/p2m-pod.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index bcc87aee03..34f5239b6d 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -101,7 +101,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m,
>       * promise to provide zero pages. So we scrub pages before using.
>       */
>      for ( i = 0; i < (1UL << order); i++ )
> -        clear_domain_page(_mfn(mfn_x(page_to_mfn(page)) + i));
> +        clear_domain_page(mfn_add(page_to_mfn(page), i));
>  
>      /* First, take all pages off the domain list */
>      lock_page_alloc(p2m);
> @@ -743,7 +743,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>              mfn0 = mfn;
>              type0 = type;
>          }
> -        else if ( type != type0 || mfn_x(mfn) != (mfn_x(mfn0) + i) )
> +        else if ( type != type0 || !mfn_eq(mfn, mfn_add(mfn0, i)) )
>              goto out;
>  
>          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
> @@ -758,7 +758,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>      for ( i = 0; i < SUPERPAGE_PAGES; i++ )
>      {
>          /* Quick zero-check */
> -        map = map_domain_page(_mfn(mfn_x(mfn0) + i));
> +        map = map_domain_page(mfn_add(mfn0, i));
>  
>          for ( j = 0; j < 16; j++ )
>              if ( *(map + j) != 0 )
> @@ -783,7 +783,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>       */
>      for ( i = 0; i < SUPERPAGE_PAGES; i++ )
>      {
> -        mfn = _mfn(mfn_x(mfn0) + i);
> +        mfn = mfn_add(mfn0, i);
>          if ( (mfn_to_page(mfn)->count_info & PGC_count_mask) > 1 )
>          {
>              reset = 1;
> @@ -794,7 +794,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
>      /* Finally, do a full zero-check */
>      for ( i = 0; i < SUPERPAGE_PAGES; i++ )
>      {
> -        map = map_domain_page(_mfn(mfn_x(mfn0) + i));
> +        map = map_domain_page(mfn_add(mfn0, i));
>  
>          for ( j = 0; j < (PAGE_SIZE / sizeof(*map)); j++ )
>              if ( *(map+j) != 0 )
> 
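
The mfn_add() helper used above keeps the frame number boxed in the mfn_t
wrapper for the whole expression; a simplified model of the idea (Xen
builds the real wrappers via its TYPE_SAFE() machinery, so this is a
sketch, not the actual definitions):

    typedef struct { unsigned long mfn; } mfn_t;

    static inline mfn_t _mfn(unsigned long m)  { return (mfn_t){ m }; }
    static inline unsigned long mfn_x(mfn_t m) { return m.mfn; }

    /* Express "mfn + offset" without unboxing at every call site. */
    static inline mfn_t mfn_add(mfn_t m, unsigned long i)
    {
        return _mfn(mfn_x(m) + i);
    }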



* Re: [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation
  2017-09-21 12:40 ` [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation Julien Grall
  2017-09-21 15:19   ` Wei Liu
@ 2017-09-21 16:39   ` George Dunlap
  1 sibling, 0 replies; 49+ messages in thread
From: George Dunlap @ 2017-09-21 16:39 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich

On 09/21/2017 01:40 PM, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

> 
> ---
> 
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> 
>     Changes in v2:
>         - Add Andrew's acked-by
> ---
>  xen/arch/arm/p2m.c           |  3 +--
>  xen/arch/x86/mm/p2m-pod.c    | 20 +++++++++-----------
>  xen/common/memory.c          |  3 ++-
>  xen/include/asm-arm/p2m.h    | 13 -------------
>  xen/include/asm-x86/p2m.h    |  7 -------
>  xen/include/xen/p2m-common.h | 13 +++++++++++++
>  6 files changed, 25 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 192a1c329d..0410b1e86b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -393,8 +393,7 @@ int guest_physmap_mark_populate_on_demand(struct domain *d,
>      return -ENOSYS;
>  }
>  
> -int p2m_pod_decrease_reservation(struct domain *d,
> -                                 xen_pfn_t gpfn,
> +int p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
>                                   unsigned int order)
>  {
>      return -ENOSYS;
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 34f5239b6d..eb74e5c01f 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -511,9 +511,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn);
>   * allow decrease_reservation() to handle everything else.
>   */
>  int
> -p2m_pod_decrease_reservation(struct domain *d,
> -                             xen_pfn_t gpfn,
> -                             unsigned int order)
> +p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
>  {
>      int ret = 0;
>      unsigned long i, n;
> @@ -521,7 +519,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>      bool_t steal_for_cache;
>      long pod, nonpod, ram;
>  
> -    gfn_lock(p2m, gpfn, order);
> +    gfn_lock(p2m, gfn, order);
>      pod_lock(p2m);
>  
>      /*
> @@ -545,7 +543,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>          p2m_type_t t;
>          unsigned int cur_order;
>  
> -        p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
>          n = 1UL << min(order, cur_order);
>          if ( t == p2m_populate_on_demand )
>              pod += n;
> @@ -567,7 +565,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>           * All PoD: Mark the whole region invalid and tell caller
>           * we're done.
>           */
> -        p2m_set_entry(p2m, gpfn, INVALID_MFN, order, p2m_invalid,
> +        p2m_set_entry(p2m, gfn_x(gfn), INVALID_MFN, order, p2m_invalid,
>                        p2m->default_access);
>          p2m->pod.entry_count -= 1UL << order;
>          BUG_ON(p2m->pod.entry_count < 0);
> @@ -584,7 +582,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>       * - not all of the pages were RAM (now knowing order < SUPERPAGE_ORDER)
>       */
>      if ( steal_for_cache && order < SUPERPAGE_ORDER && ram == (1UL << order) &&
> -         p2m_pod_zero_check_superpage(p2m, gpfn & ~(SUPERPAGE_PAGES - 1)) )
> +         p2m_pod_zero_check_superpage(p2m, gfn_x(gfn) & ~(SUPERPAGE_PAGES - 1)) )
>      {
>          pod = 1UL << order;
>          ram = nonpod = 0;
> @@ -605,13 +603,13 @@ p2m_pod_decrease_reservation(struct domain *d,
>          p2m_access_t a;
>          unsigned int cur_order;
>  
> -        mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, &cur_order, NULL);
> +        mfn = p2m->get_entry(p2m, gfn_x(gfn) + i, &t, &a, 0, &cur_order, NULL);
>          if ( order < cur_order )
>              cur_order = order;
>          n = 1UL << cur_order;
>          if ( t == p2m_populate_on_demand )
>          {
> -            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
> +            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
>                            p2m_invalid, p2m->default_access);
>              p2m->pod.entry_count -= n;
>              BUG_ON(p2m->pod.entry_count < 0);
> @@ -633,7 +631,7 @@ p2m_pod_decrease_reservation(struct domain *d,
>  
>              page = mfn_to_page(mfn);
>  
> -            p2m_set_entry(p2m, gpfn + i, INVALID_MFN, cur_order,
> +            p2m_set_entry(p2m, gfn_x(gfn) + i, INVALID_MFN, cur_order,
>                            p2m_invalid, p2m->default_access);
>              p2m_tlb_flush_sync(p2m);
>              for ( j = 0; j < n; ++j )
> @@ -663,7 +661,7 @@ out_entry_check:
>  
>  out_unlock:
>      pod_unlock(p2m);
> -    gfn_unlock(p2m, gpfn, order);
> +    gfn_unlock(p2m, gfn, order);
>      return ret;
>  }
>  
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index a2abf554e3..ad987e0f29 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -417,7 +417,8 @@ static void decrease_reservation(struct memop_args *a)
>  
>          /* See if populate-on-demand wants to handle this */
>          if ( is_hvm_domain(a->domain)
> -             && p2m_pod_decrease_reservation(a->domain, gmfn, a->extent_order) )
> +             && p2m_pod_decrease_reservation(a->domain, _gfn(gmfn),
> +                                             a->extent_order) )
>              continue;
>  
>          for ( j = 0; j < (1 << a->extent_order); j++ )
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index bc5bbf0db7..faadcfe8fe 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -266,19 +266,6 @@ static inline int guest_physmap_add_page(struct domain *d,
>  
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>  
> -/*
> - * Populate-on-demand
> - */
> -
> -/*
> - * Call when decreasing memory reservation to handle PoD entries properly.
> - * Will return '1' if all entries were handled and nothing more need be done.
> - */
> -int
> -p2m_pod_decrease_reservation(struct domain *d,
> -                             xen_pfn_t gpfn,
> -                             unsigned int order);
> -
>  /* Look up a GFN and take a reference count on the backing page. */
>  typedef unsigned int p2m_query_t;
>  #define P2M_ALLOC    (1u<<0)   /* Populate PoD and paged-out entries */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 10cdfc09a9..8f3409b400 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -643,13 +643,6 @@ int p2m_pod_empty_cache(struct domain *d);
>   * domain matches target */
>  int p2m_pod_set_mem_target(struct domain *d, unsigned long target);
>  
> -/* Call when decreasing memory reservation to handle PoD entries properly.
> - * Will return '1' if all entries were handled and nothing more need be done.*/
> -int
> -p2m_pod_decrease_reservation(struct domain *d,
> -                             xen_pfn_t gpfn,
> -                             unsigned int order);
> -
>  /* Scan pod cache when offline/broken page triggered */
>  int
>  p2m_pod_offline_or_broken_hit(struct page_info *p);
> diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
> index 2b5696cf33..27f89208f5 100644
> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -20,4 +20,17 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         mfn_t mfn);
>  
> +/*
> + * Populate-on-Demand
> + */
> +
> +/*
> + * Call when decreasing memory reservation to handle PoD entries properly.
> + * Will return '1' if all entries were handled and nothing more need be done.
> + */
> +int
> +p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn,
> +                             unsigned int order);
> +
> +
>  #endif /* _XEN_P2M_COMMON_H */
> 
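
The payoff of the gfn_t conversion here and in the following patches is
that guest and machine frame numbers stop being interchangeable integers;
a simplified sketch of the failure mode it catches (touch_gfn() is a
hypothetical callee, and the struct definitions only model what Xen's
TYPE_SAFE() machinery generates):

    typedef struct { unsigned long gfn; } gfn_t;
    typedef struct { unsigned long mfn; } mfn_t;

    static inline gfn_t _gfn(unsigned long g)  { return (gfn_t){ g }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }

    static void touch_gfn(gfn_t gfn) { (void)gfn; }

    static void demo(gfn_t gfn, mfn_t mfn)
    {
        touch_gfn(gfn);                    /* fine */
        /* touch_gfn(mfn); */              /* would not compile */
        touch_gfn(_gfn(gfn_x(gfn) + 1));   /* arithmetic stays explicit */
    }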



* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
  2017-09-21 14:48   ` Wei Liu
  2017-09-21 16:33   ` George Dunlap
@ 2017-09-22  9:15   ` Jan Beulich
  2017-09-28 19:29     ` Julien Grall
  2 siblings, 1 reply; 49+ messages in thread
From: Jan Beulich @ 2017-09-22  9:15 UTC (permalink / raw)
  To: Julien Grall; +Cc: George Dunlap, Andrew Cooper, xen-devel

>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
> Also take the opportunity to:
>     - move from 1 << * to 1UL << *.
>     - use unsigned when possible
>     - move from unsigned int -> unsigned long for some induction
>     variables

I don't understand this last point, btw - the largest order page the
code needs to deal with right now is 1Gb, so there's no risk of
overflow (yet). But you've got George's and Andrew's ack, so no
need to revise this...

Jan
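
For context on the overflow remark: the literal 1 has type int, so the
result of the shift is an int no matter what it is assigned to; a minimal
sketch of where that eventually bites (plain C semantics, nothing
Xen-specific):

    unsigned long count;

    count = 1 << 18;    /* PAGE_ORDER_1G today: fits in int, fine */
    count = 1 << 31;    /* undefined behaviour: shifts into the sign bit */
    count = 1UL << 31;  /* well-defined, full unsigned long width */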



* Re: [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check
  2017-09-21 12:40 ` [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check Julien Grall
  2017-09-21 15:35   ` Wei Liu
@ 2017-09-22  9:26   ` Jan Beulich
  2017-10-02 12:43     ` Julien Grall
  1 sibling, 1 reply; 49+ messages in thread
From: Jan Beulich @ 2017-09-22  9:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: George Dunlap, Andrew Cooper, xen-devel

>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -861,17 +861,19 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>      for ( i = 0; i < count; i++ )
>      {
>          p2m_access_t a;
> +        struct page_info *pg;
>  
>          mfns[i] = p2m->get_entry(p2m, _gfn(gfns[i]), types + i, &a,
>                                   0, NULL, NULL);
> +        pg = mfn_to_page(mfns[i]);
> +
>          /*
>           * If this is ram, and not a pagetable or from the xen heap, and
>           * probably not mapped elsewhere, map it; otherwise, skip.
>           */
> -        if ( p2m_is_ram(types[i])
> -             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
> -             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
> -             && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
> +        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&

If you omit the != 0 here (which I appreciate) ...

> +             ((pg->count_info & (PGC_page_table | PGC_xen_heap)) == 0) &&

... you should also use ! instead of == 0 here.

> +             ((pg->count_info & (PGC_count_mask)) <= max_ref) )

Stray innermost parentheses left?

Jan
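
Taking the three remarks together, the condition would end up shaped like
this (a sketch of the requested style, not the committed code):

    if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
         !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
         (pg->count_info & PGC_count_mask) <= max_ref )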



* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-22  9:15   ` Jan Beulich
@ 2017-09-28 19:29     ` Julien Grall
  2017-10-04  5:38       ` Jan Beulich
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-09-28 19:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, xen-devel

Hi Jan,

On 09/22/2017 10:15 AM, Jan Beulich wrote:
>>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
>> Also take the opportunity to:
>>      - move from 1 << * to 1UL << *.
>>      - use unsigned when possible
>>      - move from unsigned int -> unsigned long for some induction
>>      variables
> 
> I don't understand this last point, btw - the largest order page the
> code needs to deal with right now is 1Gb, so there's no risk of
> overflow (yet). But you've got George's and Andrew's ack, so no
> need to revise this...

The last one results from the existing 1UL << in the code. We have places 
where the induction variable is unsigned int but the shift is unsigned long.

Similarly, the code is using a mix of 1 << and 1UL <<. I moved to UL 
because even though the code only supports up to 1GB superpages at the 
moment, it would be a pain to find all the places the day we decide to 
use bigger ones.

Cheers,

-- 
Julien Grall
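
The mixed-width pattern referred to looks like the sketch below (loop body
elided); the comparison promotes i to unsigned long, so it behaves
correctly, but the induction variable and its bound have different widths:

    unsigned int i;

    for ( i = 0; i < (1UL << order); i++ )
        /* i is widened to unsigned long for each comparison */;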


* Re: [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check
  2017-09-22  9:26   ` Jan Beulich
@ 2017-10-02 12:43     ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2017-10-02 12:43 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, xen-devel

Hi Jan,

On 22/09/17 10:26, Jan Beulich wrote:
>>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -861,17 +861,19 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
>>       for ( i = 0; i < count; i++ )
>>       {
>>           p2m_access_t a;
>> +        struct page_info *pg;
>>   
>>           mfns[i] = p2m->get_entry(p2m, _gfn(gfns[i]), types + i, &a,
>>                                    0, NULL, NULL);
>> +        pg = mfn_to_page(mfns[i]);
>> +
>>           /*
>>            * If this is ram, and not a pagetable or from the xen heap, and
>>            * probably not mapped elsewhere, map it; otherwise, skip.
>>            */
>> -        if ( p2m_is_ram(types[i])
>> -             && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
>> -             && ( (mfn_to_page(mfns[i])->count_info & (PGC_page_table|PGC_xen_heap)) == 0 )
>> -             && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
>> +        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> 
> If you omit the != 0 here (which I appreciate) ...
> 
>> +             ((pg->count_info & (PGC_page_table | PGC_xen_heap)) == 0) &&
> 
> ... you should also use ! instead of == 0 here.

Good point. I will do that.

> 
>> +             ((pg->count_info & (PGC_count_mask)) <= max_ref) )
> 
> Stray innermost parentheses left?

I will drop the parentheses around PGC_count_mask.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-09-28 19:29     ` Julien Grall
@ 2017-10-04  5:38       ` Jan Beulich
  2017-10-04 13:12         ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Jan Beulich @ 2017-10-04  5:38 UTC (permalink / raw)
  To: julien.grall; +Cc: george.dunlap, andrew.cooper3, xen-devel

>>> Julien Grall <julien.grall@arm.com> 09/28/17 9:30 PM >>>
>On 09/22/2017 10:15 AM, Jan Beulich wrote:
>>>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
>>> Also take the opportunity to:
>>>      - move from 1 << * to 1UL << *.
>>>      - use unsigned when possible
>>>      - move from unsigned int -> unsigned long for some induction
>>>      variables
>> 
>> I don't understand this last point, btw - the largest order page the
>> code needs to deal with right now is 1Gb, so there's no risk of
>> overflow (yet). But you've got George's and Andrew's ack, so no
>> need to revise this...
>
>The last one results from the existing 1UL << in the code. We have places
>where the induction variable is unsigned int but the shift is unsigned long.

In which case I would have suggested changing the shift to use 1U, since ...

>Similarly, the code is using a mix of 1 << and 1UL <<. I moved to UL
>because even though the code only supports up to 1GB superpages at the
>moment, it would be a pain to find all the places the day we decide to
>use bigger ones.

... I doubt this would be too hard. But in the end it's for George to judge anyway.

Jan
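
The 1U suggestion relies on unsigned widening being value-preserving on
assignment; a small sketch of why 1U would suffice for the orders handled
today (plain C semantics):

    unsigned int order = 18;   /* the largest order currently in use */
    unsigned long n;

    n = 1U << order;    /* unsigned int result, widened without change */
    n = 1UL << order;   /* same value; also stays safe for order >= 32 */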



* Re: [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style
  2017-10-04  5:38       ` Jan Beulich
@ 2017-10-04 13:12         ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2017-10-04 13:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: george.dunlap, andrew.cooper3, xen-devel



On 04/10/17 06:38, Jan Beulich wrote:
>>>> Julien Grall <julien.grall@arm.com> 09/28/17 9:30 PM >>>
>> On 09/22/2017 10:15 AM, Jan Beulich wrote:
>>>>>> On 21.09.17 at 14:40, <julien.grall@arm.com> wrote:
>>>> Also take the opportunity to:
>>>>       - move from 1 << * to 1UL << *.
>>>>       - use unsigned when possible
>>>>       - move from unsigned int -> unsigned long for some induction
>>>>       variables
>>>
>>> I don't understand this last point, btw - the largest order page the
>>> code needs to deal with right now is 1Gb, so there's no risk of
>>> overflow (yet). But you've got George's and Andrew's ack, so no
>>> need to revise this...
>>
>> The last one results from the existing 1UL << in the code. We have places
>> where the induction variable is unsigned int but the shift is unsigned long.
> 
> In which case I would have suggested changing the shift to use 1U, since ...
Looking at "<< order" within Xen, we seem to use a mix of 1UL <<, 1U <<, 
and 1 <<. The variant 1UL << seems to be predominant.

In the end we really want to be consistent with the rest of the code base.

>> Similarly, the code is using a mix of 1 << and 1UL <<. I moved to UL
>> because even though the code only supports up to 1GB superpages at the
>> moment, it would be a pain to find all the places the day we decide to
>> use bigger ones.
> 
> ... I doubt this would be too hard. But in the end it's for George to judge anyway.

order is the number of MFNs, which are unsigned long, and therefore it 
would make sense to use 1UL <<. So I am not sure why you are pushing for 
1U << here...

Cheers,

-- 
Julien Grall


Thread overview: 49+ messages
2017-09-21 12:40 [PATCH v2 00/16] xen/x86: Clean-up the PoD code Julien Grall
2017-09-21 12:40 ` [PATCH v2 01/16] xen/x86: p2m-pod: Clean-up includes Julien Grall
2017-09-21 14:47   ` Wei Liu
2017-09-21 16:23   ` George Dunlap
2017-09-21 12:40 ` [PATCH v2 02/16] xen/x86: p2m-pod: Remove trailing whitespaces Julien Grall
2017-09-21 14:47   ` Wei Liu
2017-09-21 16:25   ` George Dunlap
2017-09-21 12:40 ` [PATCH v2 03/16] xen/x86: p2m-pod: Fix coding style for comments Julien Grall
2017-09-21 14:47   ` Wei Liu
2017-09-21 16:28   ` George Dunlap
2017-09-21 16:33     ` George Dunlap
2017-09-21 16:34     ` Julien Grall
2017-09-21 12:40 ` [PATCH v2 04/16] xen/x86: p2m-pod: Fix coding style Julien Grall
2017-09-21 14:48   ` Wei Liu
2017-09-21 16:33   ` George Dunlap
2017-09-22  9:15   ` Jan Beulich
2017-09-28 19:29     ` Julien Grall
2017-10-04  5:38       ` Jan Beulich
2017-10-04 13:12         ` Julien Grall
2017-09-21 12:40 ` [PATCH v2 05/16] xen/x86: p2m-pod: Avoid redundant assignments in p2m_pod_demand_populate Julien Grall
2017-09-21 14:53   ` Wei Liu
2017-09-21 16:35   ` George Dunlap
2017-09-21 12:40 ` [PATCH v2 06/16] xen/x86: p2m-pod: Clean-up use of typesafe MFN Julien Grall
2017-09-21 14:54   ` Wei Liu
2017-09-21 16:36   ` George Dunlap
2017-09-21 12:40 ` [PATCH v2 07/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_decrease_reservation Julien Grall
2017-09-21 15:19   ` Wei Liu
2017-09-21 16:39   ` George Dunlap
2017-09-21 12:40 ` [PATCH v2 08/16] xen/x86: p2m: Use typesafe gfn for the P2M callbacks get_entry and set_entry Julien Grall
2017-09-21 15:23   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 09/16] xen/x86: p2m: Use typesafe GFN in p2m_set_entry Julien Grall
2017-09-21 15:34   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 10/16] xen/x86: p2m-pod: Use typesafe GFN in pod_eager_record Julien Grall
2017-09-21 15:35   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 11/16] xen/x86: p2m-pod: Clean-up p2m_pod_zero_check Julien Grall
2017-09-21 15:35   ` Wei Liu
2017-09-22  9:26   ` Jan Beulich
2017-10-02 12:43     ` Julien Grall
2017-09-21 12:40 ` [PATCH v2 12/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_zero_check Julien Grall
2017-09-21 15:36   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 13/16] xen/x86: p2m-pod: Use typesafe gfn in p2m_pod_demand_populate Julien Grall
2017-09-21 15:36   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 14/16] xen/x86: p2m-pod: Use typesafe gfn for the fields reclaim_single and max_guest Julien Grall
2017-09-21 15:36   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 15/16] xen/x86: p2m-pod: Rework prototype of p2m_pod_demand_populate Julien Grall
2017-09-21 15:37   ` Wei Liu
2017-09-21 12:40 ` [PATCH v2 16/16] xen/x86: mem_access: Fix mis-indented line Julien Grall
2017-09-21 12:52   ` Razvan Cojocaru
2017-09-21 12:55   ` Jan Beulich
