xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly
@ 2020-03-18 17:32 Paul Durrant
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list Paul Durrant
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Paul Durrant @ 2020-03-18 17:32 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Konrad Rzeszutek Wilk, Ian Jackson, George Dunlap,
	Tim Deegan, Jan Beulich, Roger Pau Monné

This series was formerly called "remove one more shared xenheap page:
shared_info" but I have dropped the patches actually changing shared_info
and just left the PGC_extra clean-up that was previously intertwined.

Paul Durrant (3):
  mm: keep PGC_extra pages on a separate list
  x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
  mm: add 'is_special_page' inline function...

 xen/arch/x86/domain.c           |  9 +++++++++
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/hvm/ioreq.c        |  2 +-
 xen/arch/x86/mm.c               | 13 ++++++-------
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/p2m-pod.c       | 12 +++++++-----
 xen/arch/x86/mm/p2m.c           |  4 ++--
 xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
 xen/arch/x86/mm/shadow/multi.c  |  3 ++-
 xen/arch/x86/tboot.c            |  4 ++--
 xen/common/domain.c             |  1 +
 xen/common/page_alloc.c         |  2 +-
 xen/include/asm-x86/mm.h        |  6 ++----
 xen/include/xen/mm.h            | 10 +++++++---
 xen/include/xen/sched.h         | 13 +++++++++++++
 16 files changed, 64 insertions(+), 35 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list
  2020-03-18 17:32 [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Paul Durrant
@ 2020-03-18 17:32 ` Paul Durrant
  2020-03-24 14:34   ` Julien Grall
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 2/3] x86 / ioreq: use a MEMF_no_refcount allocation for server pages Paul Durrant
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Paul Durrant @ 2020-03-18 17:32 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich,
	Roger Pau Monné

This patch adds a new page_list_head into struct domain to hold PGC_extra
pages. This avoids them being confused with 'normal' domheap pages when
the domain's page_list is walked.

A new dump loop is also added to dump_pageframe_info() to unconditionally
dump the 'extra page list'.
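
The list-selection logic this patch introduces can be sketched in
isolation. The flag values and structure layouts below are simplified
stand-ins for illustration, not Xen's real definitions:

```c
#include <assert.h>

/* Stand-in flag bits; Xen's actual PGC_* values differ. */
#define PGC_xen_heap (1UL << 0)
#define PGC_extra    (1UL << 1)

struct page_info { unsigned long count_info; };

struct page_list_head { struct page_info *head; };

struct domain {
    struct page_list_head page_list;       /* 'normal' domheap pages */
    struct page_list_head extra_page_list; /* PGC_extra pages (new) */
    struct page_list_head xenpage_list;    /* xenheap pages */
};

/* Pick the list a page belongs on, mirroring the patch's helper:
 * xenheap pages first, then PGC_extra pages, then normal pages. */
static struct page_list_head *page_to_list(struct domain *d,
                                           const struct page_info *pg)
{
    if ( pg->count_info & PGC_xen_heap )
        return &d->xenpage_list;

    if ( pg->count_info & PGC_extra )
        return &d->extra_page_list;

    return &d->page_list;
}
```

With a single helper like this, both assign_pages() and
arch_free_heap_page() can select the correct list in one place rather
than open-coding the distinction at each site.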

Signed-off-by: Paul Durrant <paul@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v7:
 - Cosmetic changes

v6:
 - New in v6
---
 xen/arch/x86/domain.c    |  9 +++++++++
 xen/common/domain.c      |  1 +
 xen/common/page_alloc.c  |  2 +-
 xen/include/asm-x86/mm.h |  6 ++----
 xen/include/xen/mm.h     |  5 ++---
 xen/include/xen/sched.h  | 13 +++++++++++++
 6 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index caf2ecad7e..683bc619aa 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -251,12 +251,21 @@ void dump_pageframe_info(struct domain *d)
         p2m_pod_dump_data(d);
 
     spin_lock(&d->page_alloc_lock);
+
     page_list_for_each ( page, &d->xenpage_list )
     {
         printk("    XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
                _p(mfn_x(page_to_mfn(page))),
                page->count_info, page->u.inuse.type_info);
     }
+
+    page_list_for_each ( page, &d->extra_page_list )
+    {
+        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
+               _p(mfn_x(page_to_mfn(page))),
+               page->count_info, page->u.inuse.type_info);
+    }
+
     spin_unlock(&d->page_alloc_lock);
 }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b4eb476a9c..3dcd73f67c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -403,6 +403,7 @@ struct domain *domain_create(domid_t domid,
     spin_lock_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
     INIT_PAGE_LIST_HEAD(&d->page_list);
+    INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
 
     spin_lock_init(&d->node_affinity_lock);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 76d37226df..10b7aeca48 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2314,7 +2314,7 @@ int assign_pages(
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
             (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
-        page_list_add_tail(&pg[i], &d->page_list);
+        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
  out:
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index a06b2fb81f..1fa334b306 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -629,10 +629,8 @@ typedef struct mm_rwlock {
     const char        *locker_function; /* func that took it */
 } mm_rwlock_t;
 
-#define arch_free_heap_page(d, pg)                                      \
-    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
-                       &(d)->xenpage_list : &(d)->page_list,            \
-                   &(d)->arch.relmem_list)
+#define arch_free_heap_page(d, pg) \
+    page_list_del2(pg, page_to_list(d, pg), &(d)->arch.relmem_list)
 
 extern const char zero_page[];
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d0d095d9c7..a163c201e2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -583,9 +583,8 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
 void scrub_one_page(struct page_info *);
 
 #ifndef arch_free_heap_page
-#define arch_free_heap_page(d, pg)                      \
-    page_list_del(pg, is_xen_heap_page(pg) ?            \
-                      &(d)->xenpage_list : &(d)->page_list)
+#define arch_free_heap_page(d, pg) \
+    page_list_del(pg, page_to_list(d, pg))
 #endif
 
 int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e6813288ab..4b78291d51 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -329,6 +329,7 @@ struct domain
 
     spinlock_t       page_alloc_lock; /* protects all the following fields  */
     struct page_list_head page_list;  /* linked list */
+    struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
 
     /*
@@ -512,6 +513,18 @@ struct domain
 #endif
 };
 
+static inline struct page_list_head *page_to_list(
+    struct domain *d, const struct page_info *pg)
+{
+    if ( is_xen_heap_page(pg) )
+        return &d->xenpage_list;
+
+    if ( pg->count_info & PGC_extra )
+        return &d->extra_page_list;
+
+    return &d->page_list;
+}
+
 /* Return number of pages currently posessed by the domain */
 static inline unsigned int domain_tot_pages(const struct domain *d)
 {
-- 
2.20.1




* [Xen-devel] [PATCH v4 2/3] x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
  2020-03-18 17:32 [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Paul Durrant
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list Paul Durrant
@ 2020-03-18 17:32 ` Paul Durrant
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function Paul Durrant
  2020-03-24  9:39 ` [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Jan Beulich
  3 siblings, 0 replies; 9+ messages in thread
From: Paul Durrant @ 2020-03-18 17:32 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Paul Durrant, Andrew Cooper, Paul Durrant, Jan Beulich,
	Roger Pau Monné

From: Paul Durrant <pdurrant@amazon.com>

... now that it is safe to assign them.

This avoids relying on libxl (or whatever toolstack is in use) setting
max_pages up with sufficient 'slop' to allow all necessary ioreq server
pages to be allocated.
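
The accounting difference can be sketched with a toy model. The names
below mirror Xen's, but the logic is a deliberately simplified
assumption for illustration, not the real allocator:

```c
#include <assert.h>

/* Stand-in flag bit; Xen's actual MEMF_* values differ. */
#define MEMF_no_refcount (1u << 0)

struct domain {
    unsigned int tot_pages;   /* pages counted against the cap */
    unsigned int extra_pages; /* PGC_extra pages, tracked separately */
    unsigned int max_pages;   /* toolstack-set allocation cap */
};

/* Returns 0 on success, -1 if the allocation would exceed max_pages.
 * A MEMF_no_refcount allocation bypasses the cap, so no 'slop' in
 * max_pages is needed for ioreq server pages. */
static int assign_page(struct domain *d, unsigned int memflags)
{
    if ( memflags & MEMF_no_refcount )
    {
        d->extra_pages++;     /* not limited by max_pages */
        return 0;
    }

    if ( d->tot_pages >= d->max_pages )
        return -1;            /* cap reached: would need toolstack slop */

    d->tot_pages++;
    return 0;
}
```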

Signed-off-by: Paul Durrant <paul@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2
---
 xen/arch/x86/hvm/ioreq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 36fbbcf0ea..70e61788d7 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -376,7 +376,7 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
         return 0;
     }
 
-    page = alloc_domheap_page(s->target, 0);
+    page = alloc_domheap_page(s->target, MEMF_no_refcount);
 
     if ( !page )
         return -ENOMEM;
-- 
2.20.1




* [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function...
  2020-03-18 17:32 [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Paul Durrant
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list Paul Durrant
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 2/3] x86 / ioreq: use a MEMF_no_refcount allocation for server pages Paul Durrant
@ 2020-03-18 17:32 ` Paul Durrant
  2020-03-20 15:40   ` Jan Beulich
  2020-03-24 14:34   ` Julien Grall
  2020-03-24  9:39 ` [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Jan Beulich
  3 siblings, 2 replies; 9+ messages in thread
From: Paul Durrant @ 2020-03-18 17:32 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Paul Durrant, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Tim Deegan, Stefano Stabellini, Jan Beulich,
	Roger Pau Monné

From: Paul Durrant <pdurrant@amazon.com>

... to cover xenheap and PGC_extra pages.

PGC_extra pages are intended to hold data structures that are associated
with a domain and may be mapped by that domain. They should not be treated
as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
where code currently tests is_xen_heap_page() it should also check for
the PGC_extra bit in 'count_info'.

This patch therefore defines is_special_page() to cover both cases and
converts tests of is_xen_heap_page() (or open coded tests of PGC_xen_heap)
to is_special_page() where the page is assigned to a domain.
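
A minimal sketch of the predicate, using stand-in flag values and a
simplified struct rather than Xen's real definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag bits; Xen's actual PGC_* values differ. */
#define PGC_xen_heap (1UL << 0)
#define PGC_extra    (1UL << 1)

struct page_info { unsigned long count_info; };

static inline bool is_xen_heap_page(const struct page_info *pg)
{
    return pg->count_info & PGC_xen_heap;
}

/* True for both xenheap and PGC_extra pages, so callers no longer
 * need to test the two conditions separately. */
static inline bool is_special_page(const struct page_info *pg)
{
    return is_xen_heap_page(pg) || (pg->count_info & PGC_extra);
}
```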

Signed-off-by: Paul Durrant <paul@xen.org>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>

v7:
 - Fixed some uses of is_xen_heap_mfn() that I'd missed
 - Updated commit comment to point out that only tests on assigned xenheap
   pages are candidates for conversion

v6:
 - Convert open-coded checks of PGC_xen_heap to use is_special_page()
   where appropriate

v4:
 - Use inline function instead of macro
 - Add missing conversions from is_xen_heap_page()

v3:
 - Delete obsolete comment.

v2:
 - New in v2
---
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/mm.c               | 13 ++++++-------
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/p2m-pod.c       | 12 +++++++-----
 xen/arch/x86/mm/p2m.c           |  4 ++--
 xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
 xen/arch/x86/mm/shadow/multi.c  |  3 ++-
 xen/arch/x86/tboot.c            |  4 ++--
 xen/include/xen/mm.h            |  5 +++++
 10 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..add70126b9 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -394,7 +394,7 @@ long arch_do_domctl(
             page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
 
             if ( unlikely(!page) ||
-                 unlikely(is_xen_heap_page(page)) )
+                 unlikely(is_special_page(page)) )
             {
                 if ( unlikely(p2m_is_broken(t)) )
                     type = XEN_DOMCTL_PFINFO_BROKEN;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 62507ca651..2fac67ad57 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1014,7 +1014,7 @@ get_page_from_l1e(
         unsigned long cacheattr = pte_flags_to_cacheattr(l1f);
         int err;
 
-        if ( is_xen_heap_page(page) )
+        if ( is_special_page(page) )
         {
             if ( write )
                 put_page_type(page);
@@ -2447,7 +2447,7 @@ static int cleanup_page_mappings(struct page_info *page)
     {
         page->count_info &= ~PGC_cacheattr_mask;
 
-        BUG_ON(is_xen_heap_page(page));
+        BUG_ON(is_special_page(page));
 
         rc = update_xen_mappings(mfn, 0);
     }
@@ -2477,7 +2477,7 @@ static int cleanup_page_mappings(struct page_info *page)
                 rc = rc2;
         }
 
-        if ( likely(!is_xen_heap_page(page)) )
+        if ( likely(!is_special_page(page)) )
         {
             ASSERT((page->u.inuse.type_info &
                     (PGT_type_mask | PGT_count_mask)) == PGT_writable_page);
@@ -4216,8 +4216,7 @@ int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;
 
-    if ( owner != d || is_xen_heap_page(page) ||
-         (page->count_info & PGC_extra) )
+    if ( owner != d || is_special_page(page) )
         goto fail_put;
 
     /*
@@ -4580,8 +4579,8 @@ int xenmem_add_to_physmap_one(
     prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
     if ( mfn_valid(prev_mfn) )
     {
-        if ( is_xen_heap_mfn(prev_mfn) )
-            /* Xen heap frames are simply unhooked from this phys slot. */
+        if ( is_special_page(mfn_to_page(prev_mfn)) )
+            /* Special pages are simply unhooked from this phys slot. */
             rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 50768f2547..c091b03ea3 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -77,7 +77,7 @@ int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn)
      * pageable() predicate for this, due to it having the same properties
      * that we want.
      */
-    if ( !p2m_is_pageable(p2mt) || is_xen_heap_page(pg) )
+    if ( !p2m_is_pageable(p2mt) || is_special_page(pg) )
     {
         rc = -EINVAL;
         goto err;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..f49f27a3ef 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -840,9 +840,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
-    /* Skip xen heap pages */
     page = mfn_to_page(mfn);
-    if ( !page || is_xen_heap_page(page) )
+    if ( !page || is_special_page(page) )
         goto out;
 
     /* Check if there are mem_access/remapped altp2m entries for this page */
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 2a7b8c117b..36bc471e7f 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
 
         n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
         for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
-            if ( !(page->count_info & PGC_allocated) ||
-                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
+            if ( is_special_page(page) ||
+                 !(page->count_info & PGC_allocated) ||
+                 (page->count_info & PGC_page_table) ||
                  (page->count_info & PGC_count_mask) > max_ref )
                 goto out;
     }
@@ -883,11 +884,12 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
         pg = mfn_to_page(mfns[i]);
 
         /*
-         * If this is ram, and not a pagetable or from the xen heap, and
+         * If this is ram, and not a pagetable or a special page, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
+        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
+             (pg->count_info & PGC_allocated) &&
+             !(pg->count_info & PGC_page_table) &&
              ((pg->count_info & PGC_count_mask) <= max_ref) )
             map[i] = map_domain_page(mfns[i]);
         else
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f51370327..d93c418bcf 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2983,8 +2983,8 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
     prev_mfn = get_gfn(tdom, gpfn, &p2mt_prev);
     if ( mfn_valid(prev_mfn) )
     {
-        if ( is_xen_heap_mfn(prev_mfn) )
-            /* Xen heap frames are simply unhooked from this phys slot */
+        if ( is_special_page(mfn_to_page(prev_mfn)) )
+            /* Special pages are simply unhooked from this phys slot */
             rc = guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 121ddf1255..75dd414a6e 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2087,19 +2087,22 @@ static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
          * The qemu helper process has an untyped mapping of this dom's RAM
          * and the HVM restore program takes another.
          * Also allow one typed refcount for
-         * - Xen heap pages, to match share_xen_page_with_guest(),
-         * - ioreq server pages, to match prepare_ring_for_helper().
+         * - special pages, which are explicitly referenced and mapped by
+         *   Xen.
+         * - ioreq server pages, which may be special pages or normal
+         *   guest pages with an extra reference taken by
+         *   prepare_ring_for_helper().
          */
         if ( !(shadow_mode_external(d)
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
-                   == (is_xen_heap_page(page) ||
+                   == (is_special_page(page) ||
                        (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
             printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
-                   " (gfn %"PRI_gfn"): c=%lx t=%lx x=%d i=%d\n",
+                   " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
                    mfn_x(gmfn), gfn_x(gfn),
                    page->count_info, page->u.inuse.type_info,
-                   !!is_xen_heap_page(page),
+                   is_special_page(page),
                    (is_hvm_domain(d) && is_ioreq_server_page(d, page)));
     }
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index b6afc0fba4..f6b1628742 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -559,7 +559,8 @@ _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         !is_xen_heap_mfn(target_mfn) )
+         (!mfn_valid(target_mfn) ||
+          !is_special_page(mfn_to_page(target_mfn))) )
     {
         int type;
 
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..3224d1684b 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -189,7 +189,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
-        if ( is_page_in_use(page) && !is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && !is_special_page(page) )
         {
             if ( page->count_info & PGC_page_table )
             {
@@ -289,7 +289,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
                               + 3 * PAGE_SIZE)) )
             continue; /* skip tboot and its page tables */
 
-        if ( is_page_in_use(page) && is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && is_special_page(page) )
         {
             void *pg;
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index a163c201e2..9b62087be1 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -285,6 +285,11 @@ extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+static inline bool is_special_page(const struct page_info *page)
+{
+    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
+}
+
 #ifndef page_list_entry
 struct page_list_head
 {
-- 
2.20.1




* Re: [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function...
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function Paul Durrant
@ 2020-03-20 15:40   ` Jan Beulich
  2020-03-24 14:34   ` Julien Grall
  1 sibling, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2020-03-20 15:40 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Tamas K Lengyel, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap,
	Tim Deegan, Stefano Stabellini, xen-devel, Roger Pau Monné

On 18.03.2020 18:32, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to cover xenheap and PGC_extra pages.
> 
> PGC_extra pages are intended to hold data structures that are associated
> with a domain and may be mapped by that domain. They should not be treated
> as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
> where code currently tests is_xen_heap_page() it should also check for
> the PGC_extra bit in 'count_info'.
> 
> This patch therefore defines is_special_page() to cover both cases and
> converts tests of is_xen_heap_page() (or open coded tests of PGC_xen_heap)
> to is_special_page() where the page is assigned to a domain.
> 
> Signed-off-by: Paul Durrant <paul@xen.org>
> Acked-by: Tamas K Lengyel <tamas@tklengyel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

The implied ack here is on the assumption that ultimately the
per-domain ->xenpage_list will go away altogether, leaving us
without the currently resulting mix.

Jan



* Re: [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly
  2020-03-18 17:32 [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Paul Durrant
                   ` (2 preceding siblings ...)
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function Paul Durrant
@ 2020-03-24  9:39 ` Jan Beulich
  2020-03-24 10:26   ` Paul Durrant
  3 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2020-03-24  9:39 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan, xen-devel,
	Roger Pau Monné

On 18.03.2020 18:32, Paul Durrant wrote:
> This series was formerly called "remove one more shared xenheap page:
> shared_info" but I have dropped the patches actually changing shared_info
> and just left the PGC_extra clean-up that was previously intertwined.
> 
> Paul Durrant (3):
>   mm: keep PGC_extra pages on a separate list
>   x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
>   mm: add 'is_special_page' inline function...

So I'm confused - I had just replied twice to v6 patch 5/5. This
series calls itself v4 and consists of the middle three patches
of what v6 was. What's the deal? Is this really v7, and the two
patches have been dropped / postponed?

Jan



* Re: [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly
  2020-03-24  9:39 ` [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Jan Beulich
@ 2020-03-24 10:26   ` Paul Durrant
  0 siblings, 0 replies; 9+ messages in thread
From: Paul Durrant @ 2020-03-24 10:26 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: 'Stefano Stabellini', 'Julien Grall',
	'Wei Liu', 'Konrad Rzeszutek Wilk',
	'Andrew Cooper', 'Ian Jackson',
	'George Dunlap', 'Tim Deegan',
	xen-devel, 'Roger Pau Monné'

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 24 March 2020 09:39
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Roger Pau Monné <roger.pau@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly
> 
> On 18.03.2020 18:32, Paul Durrant wrote:
> > This series was formerly called "remove one more shared xenheap page:
> > shared_info" but I have dropped the patches actually changing shared_info
> > and just left the PGC_extra clean-up that was previously intertwined.
> >
> > Paul Durrant (3):
> >   mm: keep PGC_extra pages on a separate list
> >   x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
> >   mm: add 'is_special_page' inline function...
> 
> So I'm confused - I had just replied twice to v6 patch 5/5. This
> series calls itself v4 and consists of the middle three patches
> of what v6 was. What's the deal? Is this really v7, and the two
> patches have been dropped / postponed?

Sorry, I clearly got confused with numbering against one of my other series. Yes, this should be v7.

I wanted to send the patches that clean up the use of PGC_extra separately from the change to shared_info, since I'm pressed for time to complete all the other conversions from xenheap pages so that I can send them as a single series.

  Paul

> 
> Jan




* Re: [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function...
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function Paul Durrant
  2020-03-20 15:40   ` Jan Beulich
@ 2020-03-24 14:34   ` Julien Grall
  1 sibling, 0 replies; 9+ messages in thread
From: Julien Grall @ 2020-03-24 14:34 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk, Andrew Cooper,
	Paul Durrant, Ian Jackson, George Dunlap, Tim Deegan,
	Stefano Stabellini, Jan Beulich, Roger Pau Monné



On 18/03/2020 17:32, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to cover xenheap and PGC_extra pages.
> 
> PGC_extra pages are intended to hold data structures that are associated
> with a domain and may be mapped by that domain. They should not be treated
> as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
> where code currently tests is_xen_heap_page() it should also check for
> the PGC_extra bit in 'count_info'.
> 
> This patch therefore defines is_special_page() to cover both cases and
> converts tests of is_xen_heap_page() (or open coded tests of PGC_xen_heap)
> to is_special_page() where the page is assigned to a domain.
> 
> Signed-off-by: Paul Durrant <paul@xen.org>
> Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
Acked-by: Julien Grall <julien@xen.org>

> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> 
> v7:
>   - Fixed some uses of is_xen_heap_mfn() that I'd missed
>   - Updated commit comment to point out that only tests on assigned xenheap
>     pages are candidates for conversion
> 
> v6:
>   - Convert open-coded checks of PGC_xen_heap to use is_special_page()
>     where appropriate
> 
> v4:
>   - Use inline function instead of macro
>   - Add missing conversions from is_xen_heap_page()
> 
> v3:
>   - Delete obsolete comment.
> 
> v2:
>   - New in v2
> ---
>   xen/arch/x86/domctl.c           |  2 +-
>   xen/arch/x86/mm.c               | 13 ++++++-------
>   xen/arch/x86/mm/altp2m.c        |  2 +-
>   xen/arch/x86/mm/mem_sharing.c   |  3 +--
>   xen/arch/x86/mm/p2m-pod.c       | 12 +++++++-----
>   xen/arch/x86/mm/p2m.c           |  4 ++--
>   xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
>   xen/arch/x86/mm/shadow/multi.c  |  3 ++-
>   xen/arch/x86/tboot.c            |  4 ++--
>   xen/include/xen/mm.h            |  5 +++++
>   10 files changed, 35 insertions(+), 26 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index ed86762fa6..add70126b9 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -394,7 +394,7 @@ long arch_do_domctl(
>               page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
>   
>               if ( unlikely(!page) ||
> -                 unlikely(is_xen_heap_page(page)) )
> +                 unlikely(is_special_page(page)) )
>               {
>                   if ( unlikely(p2m_is_broken(t)) )
>                       type = XEN_DOMCTL_PFINFO_BROKEN;
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 62507ca651..2fac67ad57 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1014,7 +1014,7 @@ get_page_from_l1e(
>           unsigned long cacheattr = pte_flags_to_cacheattr(l1f);
>           int err;
>   
> -        if ( is_xen_heap_page(page) )
> +        if ( is_special_page(page) )
>           {
>               if ( write )
>                   put_page_type(page);
> @@ -2447,7 +2447,7 @@ static int cleanup_page_mappings(struct page_info *page)
>       {
>           page->count_info &= ~PGC_cacheattr_mask;
>   
> -        BUG_ON(is_xen_heap_page(page));
> +        BUG_ON(is_special_page(page));
>   
>           rc = update_xen_mappings(mfn, 0);
>       }
> @@ -2477,7 +2477,7 @@ static int cleanup_page_mappings(struct page_info *page)
>                   rc = rc2;
>           }
>   
> -        if ( likely(!is_xen_heap_page(page)) )
> +        if ( likely(!is_special_page(page)) )
>           {
>               ASSERT((page->u.inuse.type_info &
>                       (PGT_type_mask | PGT_count_mask)) == PGT_writable_page);
> @@ -4216,8 +4216,7 @@ int steal_page(
>       if ( !(owner = page_get_owner_and_reference(page)) )
>           goto fail;
>   
> -    if ( owner != d || is_xen_heap_page(page) ||
> -         (page->count_info & PGC_extra) )
> +    if ( owner != d || is_special_page(page) )
>           goto fail_put;
>   
>       /*
> @@ -4580,8 +4579,8 @@ int xenmem_add_to_physmap_one(
>       prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
>       if ( mfn_valid(prev_mfn) )
>       {
> -        if ( is_xen_heap_mfn(prev_mfn) )
> -            /* Xen heap frames are simply unhooked from this phys slot. */
> +        if ( is_special_page(mfn_to_page(prev_mfn)) )
> +            /* Special pages are simply unhooked from this phys slot. */
>               rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
>           else
>               /* Normal domain memory is freed, to avoid leaking memory. */
> diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
> index 50768f2547..c091b03ea3 100644
> --- a/xen/arch/x86/mm/altp2m.c
> +++ b/xen/arch/x86/mm/altp2m.c
> @@ -77,7 +77,7 @@ int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn)
>        * pageable() predicate for this, due to it having the same properties
>        * that we want.
>        */
> -    if ( !p2m_is_pageable(p2mt) || is_xen_heap_page(pg) )
> +    if ( !p2m_is_pageable(p2mt) || is_special_page(pg) )
>       {
>           rc = -EINVAL;
>           goto err;
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 3835bc928f..f49f27a3ef 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -840,9 +840,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
>       if ( !p2m_is_sharable(p2mt) )
>           goto out;
>   
> -    /* Skip xen heap pages */
>       page = mfn_to_page(mfn);
> -    if ( !page || is_xen_heap_page(page) )
> +    if ( !page || is_special_page(page) )
>           goto out;
>   
>       /* Check if there are mem_access/remapped altp2m entries for this page */
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 2a7b8c117b..36bc471e7f 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>   
>           n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>           for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
> -            if ( !(page->count_info & PGC_allocated) ||
> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
> +            if ( is_special_page(page) ||
> +                 !(page->count_info & PGC_allocated) ||
> +                 (page->count_info & PGC_page_table) ||
>                    (page->count_info & PGC_count_mask) > max_ref )
>                   goto out;
>       }
> @@ -883,11 +884,12 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>           pg = mfn_to_page(mfns[i]);
>   
>           /*
> -         * If this is ram, and not a pagetable or from the xen heap, and
> +         * If this is ram, and not a pagetable or a special page, and
>            * probably not mapped elsewhere, map it; otherwise, skip.
>            */
> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
> +        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
> +             (pg->count_info & PGC_allocated) &&
> +             !(pg->count_info & PGC_page_table) &&
>                ((pg->count_info & PGC_count_mask) <= max_ref) )
>               map[i] = map_domain_page(mfns[i]);
>           else
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 9f51370327..d93c418bcf 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2983,8 +2983,8 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
>       prev_mfn = get_gfn(tdom, gpfn, &p2mt_prev);
>       if ( mfn_valid(prev_mfn) )
>       {
> -        if ( is_xen_heap_mfn(prev_mfn) )
> -            /* Xen heap frames are simply unhooked from this phys slot */
> +        if ( is_special_page(mfn_to_page(prev_mfn)) )
> +            /* Special pages are simply unhooked from this phys slot */
>               rc = guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
>           else
>               /* Normal domain memory is freed, to avoid leaking memory. */
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 121ddf1255..75dd414a6e 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2087,19 +2087,22 @@ static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
>            * The qemu helper process has an untyped mapping of this dom's RAM
>            * and the HVM restore program takes another.
>            * Also allow one typed refcount for
> -         * - Xen heap pages, to match share_xen_page_with_guest(),
> -         * - ioreq server pages, to match prepare_ring_for_helper().
> +         * - special pages, which are explicitly referenced and mapped by
> +         *   Xen.
> +         * - ioreq server pages, which may be special pages or normal
> +         *   guest pages with an extra reference taken by
> +         *   prepare_ring_for_helper().
>            */
>           if ( !(shadow_mode_external(d)
>                  && (page->count_info & PGC_count_mask) <= 3
>                  && ((page->u.inuse.type_info & PGT_count_mask)
> -                   == (is_xen_heap_page(page) ||
> +                   == (is_special_page(page) ||
>                          (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
>               printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
> -                   " (gfn %"PRI_gfn"): c=%lx t=%lx x=%d i=%d\n",
> +                   " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
>                      mfn_x(gmfn), gfn_x(gfn),
>                      page->count_info, page->u.inuse.type_info,
> -                   !!is_xen_heap_page(page),
> +                   is_special_page(page),
>                      (is_hvm_domain(d) && is_ioreq_server_page(d, page)));
>       }
>   
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index b6afc0fba4..f6b1628742 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -559,7 +559,8 @@ _sh_propagate(struct vcpu *v,
>        * caching attributes in the shadows to match what was asked for.
>        */
>       if ( (level == 1) && is_hvm_domain(d) &&
> -         !is_xen_heap_mfn(target_mfn) )
> +         (!mfn_valid(target_mfn) ||
> +          !is_special_page(mfn_to_page(target_mfn))) )
>       {
>           int type;
>   
> diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
> index 8c232270b4..3224d1684b 100644
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -189,7 +189,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
>   
>           if ( !mfn_valid(_mfn(mfn)) )
>               continue;
> -        if ( is_page_in_use(page) && !is_xen_heap_page(page) )
> +        if ( is_page_in_use(page) && !is_special_page(page) )
>           {
>               if ( page->count_info & PGC_page_table )
>               {
> @@ -289,7 +289,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
>                                 + 3 * PAGE_SIZE)) )
>               continue; /* skip tboot and its page tables */
>   
> -        if ( is_page_in_use(page) && is_xen_heap_page(page) )
> +        if ( is_page_in_use(page) && is_special_page(page) )
>           {
>               void *pg;
>   
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index a163c201e2..9b62087be1 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
>   
>   #include <asm/mm.h>
>   
> +static inline bool is_special_page(const struct page_info *page)
> +{
> +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
> +}
> +
>   #ifndef page_list_entry
>   struct page_list_head
>   {
> 

-- 
Julien Grall



* Re: [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list
  2020-03-18 17:32 ` [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list Paul Durrant
@ 2020-03-24 14:34   ` Julien Grall
  0 siblings, 0 replies; 9+ messages in thread
From: Julien Grall @ 2020-03-24 14:34 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson,
	George Dunlap, Jan Beulich, Roger Pau Monné



On 18/03/2020 17:32, Paul Durrant wrote:
> This patch adds a new page_list_head into struct domain to hold PGC_extra
> pages. This avoids them being confused with 'normal' domheap pages when
> the domain's page_list is walked.
> 
> A new dump loop is also added to dump_pageframe_info() to unconditionally
> dump the 'extra page list'.
> 
> Signed-off-by: Paul Durrant <paul@xen.org>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <julien@xen.org>

> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v7:
>   - Cosmetic changes
> 
> v6:
>   - New in v6
> ---
>   xen/arch/x86/domain.c    |  9 +++++++++
>   xen/common/domain.c      |  1 +
>   xen/common/page_alloc.c  |  2 +-
>   xen/include/asm-x86/mm.h |  6 ++----
>   xen/include/xen/mm.h     |  5 ++---
>   xen/include/xen/sched.h  | 13 +++++++++++++
>   6 files changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index caf2ecad7e..683bc619aa 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -251,12 +251,21 @@ void dump_pageframe_info(struct domain *d)
>           p2m_pod_dump_data(d);
>   
>       spin_lock(&d->page_alloc_lock);
> +
>       page_list_for_each ( page, &d->xenpage_list )
>       {
>           printk("    XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
>                  _p(mfn_x(page_to_mfn(page))),
>                  page->count_info, page->u.inuse.type_info);
>       }
> +
> +    page_list_for_each ( page, &d->extra_page_list )
> +    {
> +        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> +               _p(mfn_x(page_to_mfn(page))),
> +               page->count_info, page->u.inuse.type_info);
> +    }
> +
>       spin_unlock(&d->page_alloc_lock);
>   }
>   
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index b4eb476a9c..3dcd73f67c 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -403,6 +403,7 @@ struct domain *domain_create(domid_t domid,
>       spin_lock_init_prof(d, page_alloc_lock);
>       spin_lock_init(&d->hypercall_deadlock_mutex);
>       INIT_PAGE_LIST_HEAD(&d->page_list);
> +    INIT_PAGE_LIST_HEAD(&d->extra_page_list);
>       INIT_PAGE_LIST_HEAD(&d->xenpage_list);
>   
>       spin_lock_init(&d->node_affinity_lock);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 76d37226df..10b7aeca48 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2314,7 +2314,7 @@ int assign_pages(
>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
>               (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> -        page_list_add_tail(&pg[i], &d->page_list);
> +        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>       }
>   
>    out:
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index a06b2fb81f..1fa334b306 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -629,10 +629,8 @@ typedef struct mm_rwlock {
>       const char        *locker_function; /* func that took it */
>   } mm_rwlock_t;
>   
> -#define arch_free_heap_page(d, pg)                                      \
> -    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
> -                       &(d)->xenpage_list : &(d)->page_list,            \
> -                   &(d)->arch.relmem_list)
> +#define arch_free_heap_page(d, pg) \
> +    page_list_del2(pg, page_to_list(d, pg), &(d)->arch.relmem_list)
>   
>   extern const char zero_page[];
>   
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index d0d095d9c7..a163c201e2 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -583,9 +583,8 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
>   void scrub_one_page(struct page_info *);
>   
>   #ifndef arch_free_heap_page
> -#define arch_free_heap_page(d, pg)                      \
> -    page_list_del(pg, is_xen_heap_page(pg) ?            \
> -                      &(d)->xenpage_list : &(d)->page_list)
> +#define arch_free_heap_page(d, pg) \
> +    page_list_del(pg, page_to_list(d, pg))
>   #endif
>   
>   int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index e6813288ab..4b78291d51 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -329,6 +329,7 @@ struct domain
>   
>       spinlock_t       page_alloc_lock; /* protects all the following fields  */
>       struct page_list_head page_list;  /* linked list */
> +    struct page_list_head extra_page_list; /* linked list (size extra_pages) */
>       struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
>   
>       /*
> @@ -512,6 +513,18 @@ struct domain
>   #endif
>   };
>   
> +static inline struct page_list_head *page_to_list(
> +    struct domain *d, const struct page_info *pg)
> +{
> +    if ( is_xen_heap_page(pg) )
> +        return &d->xenpage_list;
> +
> +    if ( pg->count_info & PGC_extra )
> +        return &d->extra_page_list;
> +
> +    return &d->page_list;
> +}
> +
>   /* Return number of pages currently posessed by the domain */
>   static inline unsigned int domain_tot_pages(const struct domain *d)
>   {
> 

-- 
Julien Grall



end of thread, other threads:[~2020-03-24 14:35 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-18 17:32 [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Paul Durrant
2020-03-18 17:32 ` [Xen-devel] [PATCH v4 1/3] mm: keep PGC_extra pages on a separate list Paul Durrant
2020-03-24 14:34   ` Julien Grall
2020-03-18 17:32 ` [Xen-devel] [PATCH v4 2/3] x86 / ioreq: use a MEMF_no_refcount allocation for server pages Paul Durrant
2020-03-18 17:32 ` [Xen-devel] [PATCH v4 3/3] mm: add 'is_special_page' inline function Paul Durrant
2020-03-20 15:40   ` Jan Beulich
2020-03-24 14:34   ` Julien Grall
2020-03-24  9:39 ` [Xen-devel] [PATCH v4 0/3] make sure PGC_extra pages are dealt with properly Jan Beulich
2020-03-24 10:26   ` Paul Durrant
