* [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info
@ 2020-03-10 17:49 paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 1/5] domain: introduce alloc/free_shared_info() helpers paul
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Paul Durrant, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Tim Deegan, Tamas K Lengyel, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

From: Paul Durrant <paul@xen.org>

Paul Durrant (5):
  domain: introduce alloc/free_shared_info() helpers...
  mm: keep PGC_extra pages on a separate list
  x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
  mm: add 'is_special_page' inline function...
  domain: use PGC_extra domheap page for shared_info

 xen/arch/arm/domain.c           | 10 +++----
 xen/arch/arm/mm.c               |  2 +-
 xen/arch/x86/domain.c           | 22 ++++++++++-----
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/hvm/ioreq.c        |  2 +-
 xen/arch/x86/mm.c               | 11 ++++----
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/p2m-pod.c       | 10 ++++---
 xen/arch/x86/mm/shadow/common.c | 13 +++++----
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/arch/x86/pv/dom0_build.c    |  2 +-
 xen/arch/x86/pv/shim.c          |  2 +-
 xen/arch/x86/tboot.c            |  4 +--
 xen/common/domain.c             | 47 +++++++++++++++++++++++++++++++++
 xen/common/domctl.c             |  2 +-
 xen/common/event_channel.c      |  3 +++
 xen/common/page_alloc.c         |  2 +-
 xen/common/time.c               | 19 +++++++++++--
 xen/include/asm-x86/mm.h        |  6 ++---
 xen/include/asm-x86/shared.h    | 15 ++++++-----
 xen/include/xen/domain.h        |  3 +++
 xen/include/xen/mm.h            | 10 ++++---
 xen/include/xen/sched.h         | 17 +++++++++++-
 xen/include/xen/shared.h        |  2 +-
 25 files changed, 154 insertions(+), 59 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Paul Durrant <pdurrant@amazon.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



* [Xen-devel] [PATCH v6 1/5] domain: introduce alloc/free_shared_info() helpers...
  2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
@ 2020-03-10 17:49 ` paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list paul
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Paul Durrant, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Hongyan Xia, Jan Beulich, Volodymyr Babchuk,
	Roger Pau Monné

From: Paul Durrant <pdurrant@amazon.com>

... and save the MFN.

This patch modifies the 'shared_info' field of struct domain to be
a structure comprising an MFN and a virtual address. Allocations are
still done from the xenheap, so the MFN could still be derived by
calling virt_to_mfn() on the virtual address, but a subsequent patch
will change this. Hence the need to save the MFN.
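
For illustration, a condensed sketch of how call sites change (the full
hunks are in the diff below):

    /* Before: d->shared_info is a shared_info_t pointer. */
    mfn = virt_to_mfn(d->shared_info);
    d->shared_info->native.wc_sec_hi = sec >> 32;

    /* After: d->shared_info caches both the MFN and the mapping. */
    mfn = d->shared_info.mfn;
    d->shared_info.virt->native.wc_sec_hi = sec >> 32;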

NOTE: Whilst defining the new helpers, virt_to_mfn() in common/domain.c
      is made type safe.
      The definition of nmi_reason() in asm-x86/shared.h is also re-
      flowed to avoid overly long lines.

Signed-off-by: Paul Durrant <paul@xen.org>
Reviewed-by: Julien Grall <julien@xen.org>
Reviewed-by: Hongyan Xia <hongyxia@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/arm/domain.c        |  8 ++------
 xen/arch/arm/mm.c            |  2 +-
 xen/arch/x86/domain.c        | 11 ++++-------
 xen/arch/x86/mm.c            |  2 +-
 xen/arch/x86/pv/dom0_build.c |  2 +-
 xen/arch/x86/pv/shim.c       |  2 +-
 xen/common/domain.c          | 26 ++++++++++++++++++++++++++
 xen/common/domctl.c          |  2 +-
 xen/common/time.c            |  4 ++--
 xen/include/asm-x86/shared.h | 15 ++++++++-------
 xen/include/xen/domain.h     |  3 +++
 xen/include/xen/sched.h      |  5 ++++-
 xen/include/xen/shared.h     |  2 +-
 13 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6627be2922..5298d80bd2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -689,13 +689,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = p2m_init(d)) != 0 )
         goto fail;
 
-    rc = -ENOMEM;
-    if ( (d->shared_info = alloc_xenheap_pages(0, 0)) == NULL )
+    if ( (rc = alloc_shared_info(d, 0)) != 0 )
         goto fail;
 
-    clear_page(d->shared_info);
-    share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
-
     switch ( config->arch.gic_version )
     {
     case XEN_DOMCTL_CONFIG_GIC_V2:
@@ -766,7 +762,7 @@ void arch_domain_destroy(struct domain *d)
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
-    free_xenheap_page(d->shared_info);
+    free_shared_info(d);
 #ifdef CONFIG_ACPI
     free_xenheap_pages(d->arch.efi_acpi_table,
                        get_order_from_bytes(d->arch.efi_acpi_len));
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 727107eefa..2bb592101d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1424,7 +1424,7 @@ int xenmem_add_to_physmap_one(
         if ( idx != 0 )
             return -EINVAL;
 
-        mfn = virt_to_mfn(d->shared_info);
+        mfn = d->shared_info.mfn;
         t = p2m_ram_rw;
 
         break;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index caf2ecad7e..bdcc0d972a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -611,12 +611,9 @@ int arch_domain_create(struct domain *d,
      * The shared_info machine address must fit in a 32-bit field within a
      * 32-bit guest's start_info structure. Hence we specify MEMF_bits(32).
      */
-    if ( (d->shared_info = alloc_xenheap_pages(0, MEMF_bits(32))) == NULL )
+    if ( (rc = alloc_shared_info(d, MEMF_bits(32))) != 0 )
         goto fail;
 
-    clear_page(d->shared_info);
-    share_xen_page_with_guest(virt_to_page(d->shared_info), d, SHARE_rw);
-
     if ( (rc = init_domain_irq_mapping(d)) != 0 )
         goto fail;
 
@@ -664,7 +661,7 @@ int arch_domain_create(struct domain *d,
     psr_domain_free(d);
     iommu_domain_destroy(d);
     cleanup_domain_irq_mapping(d);
-    free_xenheap_page(d->shared_info);
+    free_shared_info(d);
     xfree(d->arch.cpuid);
     xfree(d->arch.msr);
     if ( paging_initialised )
@@ -693,7 +690,7 @@ void arch_domain_destroy(struct domain *d)
         pv_domain_destroy(d);
     free_perdomain_mappings(d);
 
-    free_xenheap_page(d->shared_info);
+    free_shared_info(d);
     cleanup_domain_irq_mapping(d);
 
     psr_domain_free(d);
@@ -719,7 +716,7 @@ void arch_domain_unpause(struct domain *d)
 
 int arch_domain_soft_reset(struct domain *d)
 {
-    struct page_info *page = virt_to_page(d->shared_info), *new_page;
+    struct page_info *page = mfn_to_page(d->shared_info.mfn), *new_page;
     int ret = 0;
     struct domain *owner;
     mfn_t mfn;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 62507ca651..ba7563ed3c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4540,7 +4540,7 @@ int xenmem_add_to_physmap_one(
     {
         case XENMAPSPACE_shared_info:
             if ( idx == 0 )
-                mfn = virt_to_mfn(d->shared_info);
+                mfn = d->shared_info.mfn;
             break;
         case XENMAPSPACE_grant_table:
             rc = gnttab_map_frame(d, idx, gpfn, &mfn);
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 5678da782d..dc16ef2e79 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -743,7 +743,7 @@ int __init dom0_construct_pv(struct domain *d,
     clear_page(si);
     si->nr_pages = nr_pages;
 
-    si->shared_info = virt_to_maddr(d->shared_info);
+    si->shared_info = mfn_to_maddr(d->shared_info.mfn);
 
     if ( !pv_shim )
         si->flags    = SIF_PRIVILEGED | SIF_INITDOMAIN;
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index d86e2de118..f512809dad 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -282,7 +282,7 @@ static void write_start_info(struct domain *d)
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%s",
              is_pv_32bit_domain(d) ? "32p" : "64");
     si->nr_pages = domain_tot_pages(d);
-    si->shared_info = virt_to_maddr(d->shared_info);
+    si->shared_info = mfn_to_maddr(d->shared_info.mfn);
     si->flags = 0;
     BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN, &si->store_mfn));
     BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_EVTCHN, &param));
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6ad458fa6b..ba7a905258 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -47,6 +47,10 @@
 #include <asm/guest.h>
 #endif
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
+
 /* Linux config option: propageted to domain0 */
 /* xen_processor_pmbits: xen control Cx, Px, ... */
 unsigned int xen_processor_pmbits = XEN_PROCESSOR_PM_PX;
@@ -1644,6 +1648,28 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+int alloc_shared_info(struct domain *d, unsigned int memflags)
+{
+    if ( (d->shared_info.virt = alloc_xenheap_pages(0, memflags)) == NULL )
+        return -ENOMEM;
+
+    d->shared_info.mfn = virt_to_mfn(d->shared_info.virt);
+
+    clear_page(d->shared_info.virt);
+    share_xen_page_with_guest(mfn_to_page(d->shared_info.mfn), d, SHARE_rw);
+
+    return 0;
+}
+
+void free_shared_info(struct domain *d)
+{
+    if ( !d->shared_info.virt )
+        return;
+
+    free_xenheap_page(d->shared_info.virt);
+    d->shared_info.virt = NULL;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..81f18e63a7 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -196,7 +196,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages         = atomic_read(&d->shr_pages);
     info->paged_pages       = atomic_read(&d->paged_pages);
-    info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
+    info->shared_info_frame = mfn_to_gmfn(d, mfn_x(d->shared_info.mfn));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
 
     info->cpupool = cpupool_get_id(d);
diff --git a/xen/common/time.c b/xen/common/time.c
index 82336e2d5a..58fa9abc40 100644
--- a/xen/common/time.c
+++ b/xen/common/time.c
@@ -110,9 +110,9 @@ void update_domain_wallclock_time(struct domain *d)
     shared_info(d, wc_nsec)   = wc_nsec;
 #ifdef CONFIG_X86
     if ( likely(!has_32bit_shinfo(d)) )
-        d->shared_info->native.wc_sec_hi = sec >> 32;
+        d->shared_info.virt->native.wc_sec_hi = sec >> 32;
     else
-        d->shared_info->compat.arch.wc_sec_hi = sec >> 32;
+        d->shared_info.virt->compat.arch.wc_sec_hi = sec >> 32;
 #else
     shared_info(d, wc_sec_hi) = sec >> 32;
 #endif
diff --git a/xen/include/asm-x86/shared.h b/xen/include/asm-x86/shared.h
index af5d959d04..d4588e08a6 100644
--- a/xen/include/asm-x86/shared.h
+++ b/xen/include/asm-x86/shared.h
@@ -1,24 +1,25 @@
 #ifndef __XEN_X86_SHARED_H__
 #define __XEN_X86_SHARED_H__
 
-#define nmi_reason(d) (!has_32bit_shinfo(d) ?                             \
-                       (u32 *)&(d)->shared_info->native.arch.nmi_reason : \
-                       (u32 *)&(d)->shared_info->compat.arch.nmi_reason)
+#define nmi_reason(d)                                           \
+    (!has_32bit_shinfo(d) ?                                     \
+     (u32 *)&(d)->shared_info.virt->native.arch.nmi_reason :    \
+     (u32 *)&(d)->shared_info.virt->compat.arch.nmi_reason)
 
 #define GET_SET_SHARED(type, field)                             \
 static inline type arch_get_##field(const struct domain *d)     \
 {                                                               \
     return !has_32bit_shinfo(d) ?                               \
-           d->shared_info->native.arch.field :                  \
-           d->shared_info->compat.arch.field;                   \
+           d->shared_info.virt->native.arch.field :             \
+           d->shared_info.virt->compat.arch.field;              \
 }                                                               \
 static inline void arch_set_##field(struct domain *d,           \
                                     type val)                   \
 {                                                               \
     if ( !has_32bit_shinfo(d) )                                 \
-        d->shared_info->native.arch.field = val;                \
+        d->shared_info.virt->native.arch.field = val;           \
     else                                                        \
-        d->shared_info->compat.arch.field = val;                \
+        d->shared_info.virt->compat.arch.field = val;           \
 }
 
 #define GET_SET_VCPU(type, field)                               \
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 7e51d361de..740e2032ad 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -130,4 +130,7 @@ struct vnuma_info {
 
 void vnuma_destroy(struct vnuma_info *vnuma);
 
+int alloc_shared_info(struct domain *d, unsigned int memflags);
+void free_shared_info(struct domain *d);
+
 #endif /* __XEN_DOMAIN_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3a4f43098c..f41d0ad2a0 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -346,7 +346,10 @@ struct domain
     unsigned int     max_vcpus;
     struct vcpu    **vcpu;
 
-    shared_info_t   *shared_info;     /* shared data area */
+    struct {
+        mfn_t mfn;
+        shared_info_t *virt;
+    } shared_info; /* shared data area */
 
     spinlock_t       domain_lock;
 
diff --git a/xen/include/xen/shared.h b/xen/include/xen/shared.h
index a411a8a3e3..57b2ff1e34 100644
--- a/xen/include/xen/shared.h
+++ b/xen/include/xen/shared.h
@@ -43,7 +43,7 @@ typedef struct vcpu_info vcpu_info_t;
 
 extern vcpu_info_t dummy_vcpu_info;
 
-#define shared_info(d, field)      __shared_info(d, (d)->shared_info, field)
+#define shared_info(d, field)      __shared_info(d, (d)->shared_info.virt, field)
 #define vcpu_info(v, field)        __vcpu_info(v, (v)->vcpu_info, field)
 
 #endif /* __XEN_SHARED_H__ */
-- 
2.20.1



* [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
  2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 1/5] domain: introduce alloc/free_shared_info() helpers paul
@ 2020-03-10 17:49 ` paul
  2020-03-16 16:53   ` Jan Beulich
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 3/5] x86 / ioreq: use a MEMF_no_refcount allocation for server pages paul
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich,
	Roger Pau Monné

From: Paul Durrant <paul@xen.org>

This patch adds a new page_list_head to struct domain to hold PGC_extra
pages. This avoids them being confused with 'normal' domheap pages in
code that walks the domain's page_list.

A new dump loop is also added to dump_pageframe_info() to unconditionally
dump the 'extra page list'.
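
For illustration, the list selection this patch introduces boils down to
the following (a condensed restatement of the new helper in
xen/include/xen/sched.h; see the hunk below):

    static inline struct page_list_head *page_to_list(
        struct domain *d, const struct page_info *pg)
    {
        if ( is_xen_heap_page(pg) )
            return &d->xenpage_list;
        if ( pg->count_info & PGC_extra )
            return &d->extra_page_list;

        return &d->page_list;
    }

Both assign_pages() and arch_free_heap_page() then route through this
helper instead of choosing between page_list and xenpage_list directly.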

Signed-off-by: Paul Durrant <paul@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v6:
 - New in v6
---
 xen/arch/x86/domain.c    |  7 +++++++
 xen/common/domain.c      |  1 +
 xen/common/page_alloc.c  |  2 +-
 xen/include/asm-x86/mm.h |  6 ++----
 xen/include/xen/mm.h     |  5 ++---
 xen/include/xen/sched.h  | 12 ++++++++++++
 6 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bdcc0d972a..2dda2dbca1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -257,6 +257,13 @@ void dump_pageframe_info(struct domain *d)
                _p(mfn_x(page_to_mfn(page))),
                page->count_info, page->u.inuse.type_info);
     }
+
+    page_list_for_each ( page, &d->extra_page_list )
+    {
+        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
+               _p(mfn_x(page_to_mfn(page))),
+               page->count_info, page->u.inuse.type_info);
+    }
     spin_unlock(&d->page_alloc_lock);
 }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index ba7a905258..4ef0d3b21e 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -406,6 +406,7 @@ struct domain *domain_create(domid_t domid,
     spin_lock_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
     INIT_PAGE_LIST_HEAD(&d->page_list);
+    INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
 
     spin_lock_init(&d->node_affinity_lock);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 76d37226df..10b7aeca48 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2314,7 +2314,7 @@ int assign_pages(
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
             (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
-        page_list_add_tail(&pg[i], &d->page_list);
+        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
  out:
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index a06b2fb81f..81beb359e1 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -629,10 +629,8 @@ typedef struct mm_rwlock {
     const char        *locker_function; /* func that took it */
 } mm_rwlock_t;
 
-#define arch_free_heap_page(d, pg)                                      \
-    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
-                       &(d)->xenpage_list : &(d)->page_list,            \
-                   &(d)->arch.relmem_list)
+#define arch_free_heap_page(d, pg) \
+    page_list_del2(pg, page_to_list((d), (pg)), &(d)->arch.relmem_list)
 
 extern const char zero_page[];
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d0d095d9c7..0769e376d2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -583,9 +583,8 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
 void scrub_one_page(struct page_info *);
 
 #ifndef arch_free_heap_page
-#define arch_free_heap_page(d, pg)                      \
-    page_list_del(pg, is_xen_heap_page(pg) ?            \
-                      &(d)->xenpage_list : &(d)->page_list)
+#define arch_free_heap_page(d, pg) \
+    page_list_del(pg, page_to_list((d), (pg)))
 #endif
 
 int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f41d0ad2a0..85433e0bb1 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -355,6 +355,7 @@ struct domain
 
     spinlock_t       page_alloc_lock; /* protects all the following fields  */
     struct page_list_head page_list;  /* linked list */
+    struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
 
     /*
@@ -538,6 +539,17 @@ struct domain
 #endif
 };
 
+static inline struct page_list_head *page_to_list(
+    struct domain *d, const struct page_info *pg)
+{
+    if ( is_xen_heap_page(pg) )
+        return &d->xenpage_list;
+    else if ( pg->count_info & PGC_extra )
+        return &d->extra_page_list;
+
+    return &d->page_list;
+}
+
 /* Return number of pages currently posessed by the domain */
 static inline unsigned int domain_tot_pages(const struct domain *d)
 {
-- 
2.20.1



* [Xen-devel] [PATCH v6 3/5] x86 / ioreq: use a MEMF_no_refcount allocation for server pages...
  2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 1/5] domain: introduce alloc/free_shared_info() helpers paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list paul
@ 2020-03-10 17:49 ` paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function paul
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
  4 siblings, 0 replies; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Paul Durrant, Andrew Cooper, Paul Durrant, Jan Beulich,
	Roger Pau Monné

From: Paul Durrant <pdurrant@amazon.com>

... now that it is safe to assign them.

This avoids relying on libxl (or whatever toolstack is in use) setting
max_pages up with sufficient 'slop' to allow all necessary ioreq server
pages to be allocated.

Signed-off-by: Paul Durrant <paul@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2
---
 xen/arch/x86/hvm/ioreq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f8a5c81546..648ef9137f 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -375,7 +375,7 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
         return 0;
     }
 
-    page = alloc_domheap_page(s->target, 0);
+    page = alloc_domheap_page(s->target, MEMF_no_refcount);
 
     if ( !page )
         return -ENOMEM;
-- 
2.20.1



* [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
  2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
                   ` (2 preceding siblings ...)
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 3/5] x86 / ioreq: use a MEMF_no_refcount allocation for server pages paul
@ 2020-03-10 17:49 ` paul
  2020-03-17 13:06   ` Jan Beulich
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
  4 siblings, 1 reply; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Paul Durrant, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Tim Deegan, Stefano Stabellini, Jan Beulich,
	Roger Pau Monné

From: Paul Durrant <pdurrant@amazon.com>

... to cover xenheap and PGC_extra pages.

PGC_extra pages are intended to hold data structures that are associated
with a domain and may be mapped by that domain. They should not be treated
as 'normal' guest pages (i.e. RAM or page tables). Hence, in many cases
where code currently tests is_xen_heap_page() it should also check for
the PGC_extra bit in 'count_info'.

This patch therefore defines is_special_page() to cover both cases and
converts tests of is_xen_heap_page() to is_special_page() where
appropriate.
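
For illustration, the predicate itself is trivial (taken from the
xen/include/xen/mm.h hunk below); conversions replace open-coded tests
such as "is_xen_heap_page(page) || (page->count_info & PGC_extra)" with
a single call:

    static inline bool is_special_page(const struct page_info *page)
    {
        return is_xen_heap_page(page) || (page->count_info & PGC_extra);
    }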

Signed-off-by: Paul Durrant <paul@xen.org>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>

In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
needs to check for PGC_extra pages too.

v6:
 - Convert open-coded checks of PGC_xen_heap to use is_special_page()
   where appropriate

v4:
 - Use inline function instead of macro
 - Add missing conversions from is_xen_heap_page()

v3:
 - Delete obsolete comment.

v2:
 - New in v2
---
 xen/arch/x86/domctl.c           |  2 +-
 xen/arch/x86/mm.c               |  9 ++++-----
 xen/arch/x86/mm/altp2m.c        |  2 +-
 xen/arch/x86/mm/mem_sharing.c   |  3 +--
 xen/arch/x86/mm/p2m-pod.c       | 10 ++++++----
 xen/arch/x86/mm/shadow/common.c | 13 ++++++++-----
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/arch/x86/tboot.c            |  4 ++--
 xen/include/xen/mm.h            |  5 +++++
 9 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ed86762fa6..add70126b9 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -394,7 +394,7 @@ long arch_do_domctl(
             page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
 
             if ( unlikely(!page) ||
-                 unlikely(is_xen_heap_page(page)) )
+                 unlikely(is_special_page(page)) )
             {
                 if ( unlikely(p2m_is_broken(t)) )
                     type = XEN_DOMCTL_PFINFO_BROKEN;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ba7563ed3c..353bde5c2c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1014,7 +1014,7 @@ get_page_from_l1e(
         unsigned long cacheattr = pte_flags_to_cacheattr(l1f);
         int err;
 
-        if ( is_xen_heap_page(page) )
+        if ( is_special_page(page) )
         {
             if ( write )
                 put_page_type(page);
@@ -2447,7 +2447,7 @@ static int cleanup_page_mappings(struct page_info *page)
     {
         page->count_info &= ~PGC_cacheattr_mask;
 
-        BUG_ON(is_xen_heap_page(page));
+        BUG_ON(is_special_page(page));
 
         rc = update_xen_mappings(mfn, 0);
     }
@@ -2477,7 +2477,7 @@ static int cleanup_page_mappings(struct page_info *page)
                 rc = rc2;
         }
 
-        if ( likely(!is_xen_heap_page(page)) )
+        if ( likely(!is_special_page(page)) )
         {
             ASSERT((page->u.inuse.type_info &
                     (PGT_type_mask | PGT_count_mask)) == PGT_writable_page);
@@ -4216,8 +4216,7 @@ int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;
 
-    if ( owner != d || is_xen_heap_page(page) ||
-         (page->count_info & PGC_extra) )
+    if ( owner != d || is_special_page(page) )
         goto fail_put;
 
     /*
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 50768f2547..c091b03ea3 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -77,7 +77,7 @@ int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn)
      * pageable() predicate for this, due to it having the same properties
      * that we want.
      */
-    if ( !p2m_is_pageable(p2mt) || is_xen_heap_page(pg) )
+    if ( !p2m_is_pageable(p2mt) || is_special_page(pg) )
     {
         rc = -EINVAL;
         goto err;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3835bc928f..f49f27a3ef 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -840,9 +840,8 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
-    /* Skip xen heap pages */
     page = mfn_to_page(mfn);
-    if ( !page || is_xen_heap_page(page) )
+    if ( !page || is_special_page(page) )
         goto out;
 
     /* Check if there are mem_access/remapped altp2m entries for this page */
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 2a7b8c117b..834375ac33 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
 
         n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
         for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
-            if ( !(page->count_info & PGC_allocated) ||
-                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
+            if ( is_special_page(page) ||
+                 !(page->count_info & PGC_allocated) ||
+                 (page->count_info & PGC_page_table) ||
                  (page->count_info & PGC_count_mask) > max_ref )
                 goto out;
     }
@@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
          * If this is ram, and not a pagetable or from the xen heap, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
+        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
+             (pg->count_info & PGC_allocated) &&
+             !(pg->count_info & PGC_page_table) &&
              ((pg->count_info & PGC_count_mask) <= max_ref) )
             map[i] = map_domain_page(mfns[i]);
         else
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index cba3ab1eba..5431142704 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2087,19 +2087,22 @@ static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
          * The qemu helper process has an untyped mapping of this dom's RAM
          * and the HVM restore program takes another.
          * Also allow one typed refcount for
-         * - Xen heap pages, to match share_xen_page_with_guest(),
-         * - ioreq server pages, to match prepare_ring_for_helper().
+         * - special pages, which are explicitly referenced and mapped by
+         *   Xen.
+         * - ioreq server pages, which may be special pages or normal
+         *   guest pages with an extra reference taken by
+         *   prepare_ring_for_helper().
          */
         if ( !(shadow_mode_external(d)
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
-                   == (is_xen_heap_page(page) ||
+                   == (is_special_page(page) ||
                        (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
             printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
-                   " (gfn %"PRI_gfn"): c=%lx t=%lx x=%d i=%d\n",
+                   " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
                    mfn_x(gmfn), gfn_x(gfn),
                    page->count_info, page->u.inuse.type_info,
-                   !!is_xen_heap_page(page),
+                   is_special_page(page),
                    (is_hvm_domain(d) && is_ioreq_server_page(d, page)));
     }
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..ac19d203d7 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
      * caching attributes in the shadows to match what was asked for.
      */
     if ( (level == 1) && is_hvm_domain(d) &&
-         !is_xen_heap_mfn(target_mfn) )
+         !is_special_page(mfn_to_page(target_mfn)) )
     {
         int type;
 
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..3224d1684b 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -189,7 +189,7 @@ static void update_pagetable_mac(vmac_ctx_t *ctx)
 
         if ( !mfn_valid(_mfn(mfn)) )
             continue;
-        if ( is_page_in_use(page) && !is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && !is_special_page(page) )
         {
             if ( page->count_info & PGC_page_table )
             {
@@ -289,7 +289,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE],
                               + 3 * PAGE_SIZE)) )
             continue; /* skip tboot and its page tables */
 
-        if ( is_page_in_use(page) && is_xen_heap_page(page) )
+        if ( is_page_in_use(page) && is_special_page(page) )
         {
             void *pg;
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 0769e376d2..bbe7edeb34 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -285,6 +285,11 @@ extern struct domain *dom_cow;
 
 #include <asm/mm.h>
 
+static inline bool is_special_page(const struct page_info *page)
+{
+    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
+}
+
 #ifndef page_list_entry
 struct page_list_head
 {
-- 
2.20.1



* [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
                   ` (3 preceding siblings ...)
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function paul
@ 2020-03-10 17:49 ` paul
  2020-03-17 13:14   ` Jan Beulich
                     ` (2 more replies)
  4 siblings, 3 replies; 20+ messages in thread
From: paul @ 2020-03-10 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Paul Durrant,
	Andrew Cooper, Paul Durrant, Konrad Rzeszutek Wilk, Ian Jackson,
	George Dunlap, Jan Beulich, Volodymyr Babchuk

From: Paul Durrant <pdurrant@amazon.com>

Currently shared_info is a shared xenheap page, but shared xenheap pages
complicate future plans for live-update of Xen, so it is desirable to
avoid using them where possible [1]. This patch therefore converts
shared_info into a PGC_extra domheap page. This does entail freeing
shared_info during domain_relinquish_resources() rather than
domain_destroy(), so care is needed to avoid de-referencing a NULL
shared_info pointer; hence some extra checks of 'is_dying' are needed.
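
For illustration, a condensed sketch of the new allocation shape (error
handling omitted; the full code is in the xen/common/domain.c hunk
below):

    /* shared_info becomes an 'extra' domheap page: allocated with
     * MEMF_no_refcount, typed writable, and mapped globally so that
     * Xen keeps a virtual address for it. */
    pg = alloc_domheap_page(d, MEMF_no_refcount | memflags);
    get_page_and_type(pg, d, PGT_writable_page);
    d->shared_info.mfn = page_to_mfn(pg);
    d->shared_info.virt = __map_domain_page_global(pg);
    clear_page(d->shared_info.virt);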

NOTE: For Arm, the call to free_shared_info() in arch_domain_destroy() is
      left in place since it is idempotent and called in the error path for
      arch_domain_create().

[1] See https://lists.xenproject.org/archives/html/xen-devel/2020-02/msg02018.html

Signed-off-by: Paul Durrant <paul@xen.org>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Wei Liu <wl@xen.org>

v6:
 - Drop dump_shared_info() but tag the shared info in the 'ExtraPage'
   dump

v5:
 - Incorporate new dump_shared_info() function

v2:
 - Addressed comments from Julien
 - Expanded the commit comment to explain why this patch is wanted
---
 xen/arch/arm/domain.c      |  2 ++
 xen/arch/x86/domain.c      | 10 +++++++---
 xen/common/domain.c        | 28 ++++++++++++++++++++++++----
 xen/common/event_channel.c |  3 +++
 xen/common/time.c          | 15 +++++++++++++++
 5 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5298d80bd2..741f6dd444 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1005,6 +1005,8 @@ int domain_relinquish_resources(struct domain *d)
         BUG();
     }
 
+    free_shared_info(d);
+
     return 0;
 }
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2dda2dbca1..539b5d9fe0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -260,9 +260,12 @@ void dump_pageframe_info(struct domain *d)
 
     page_list_for_each ( page, &d->extra_page_list )
     {
-        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
+        const char *tag = mfn_eq(page_to_mfn(page), d->shared_info.mfn) ?
+            "[SHARED INFO]" : "";
+
+        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info " %s\n",
                _p(mfn_x(page_to_mfn(page))),
-               page->count_info, page->u.inuse.type_info);
+               page->count_info, page->u.inuse.type_info, tag);
     }
     spin_unlock(&d->page_alloc_lock);
 }
@@ -697,7 +700,6 @@ void arch_domain_destroy(struct domain *d)
         pv_domain_destroy(d);
     free_perdomain_mappings(d);
 
-    free_shared_info(d);
     cleanup_domain_irq_mapping(d);
 
     psr_domain_free(d);
@@ -2252,6 +2254,8 @@ int domain_relinquish_resources(struct domain *d)
     if ( is_hvm_domain(d) )
         hvm_domain_relinquish_resources(d);
 
+    free_shared_info(d);
+
     return 0;
 }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4ef0d3b21e..4f3266454f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1651,24 +1651,44 @@ int continue_hypercall_on_cpu(
 
 int alloc_shared_info(struct domain *d, unsigned int memflags)
 {
-    if ( (d->shared_info.virt = alloc_xenheap_pages(0, memflags)) == NULL )
+    struct page_info *pg;
+
+    pg = alloc_domheap_page(d, MEMF_no_refcount | memflags);
+    if ( !pg )
         return -ENOMEM;
 
-    d->shared_info.mfn = virt_to_mfn(d->shared_info.virt);
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain should not be running at this point so there is
+         * no way we should reach this error path.
+         */
+        ASSERT_UNREACHABLE();
+        return -ENODATA;
+    }
+
+    d->shared_info.mfn = page_to_mfn(pg);
+    d->shared_info.virt = __map_domain_page_global(pg);
 
     clear_page(d->shared_info.virt);
-    share_xen_page_with_guest(mfn_to_page(d->shared_info.mfn), d, SHARE_rw);
 
     return 0;
 }
 
 void free_shared_info(struct domain *d)
 {
+    struct page_info *pg;
+
     if ( !d->shared_info.virt )
         return;
 
-    free_xenheap_page(d->shared_info.virt);
+    unmap_domain_page_global(d->shared_info.virt);
     d->shared_info.virt = NULL;
+
+    pg = mfn_to_page(d->shared_info.mfn);
+
+    put_page_alloc_ref(pg);
+    put_page_and_type(pg);
 }
 
 /*
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e86e2bfab0..a17422284d 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1325,6 +1325,9 @@ void evtchn_destroy(struct domain *d)
 {
     unsigned int i;
 
+    /* This must be done before shared_info is freed */
+    BUG_ON(!d->shared_info.virt);
+
     /* After this barrier no new event-channel allocations can occur. */
     BUG_ON(!d->is_dying);
     spin_barrier(&d->event_lock);
diff --git a/xen/common/time.c b/xen/common/time.c
index 58fa9abc40..ada02faf07 100644
--- a/xen/common/time.c
+++ b/xen/common/time.c
@@ -99,6 +99,18 @@ void update_domain_wallclock_time(struct domain *d)
     uint32_t *wc_version;
     uint64_t sec;
 
+    if ( d != current->domain )
+    {
+        /*
+         * We need to check is_dying here as, if it is set, the
+         * shared_info may have been freed. To do this safely we need to
+         * hold the domain lock.
+         */
+        domain_lock(d);
+        if ( d->is_dying )
+            goto unlock;
+    }
+
     spin_lock(&wc_lock);
 
     wc_version = &shared_info(d, wc_version);
@@ -121,6 +133,9 @@ void update_domain_wallclock_time(struct domain *d)
     *wc_version = version_update_end(*wc_version);
 
     spin_unlock(&wc_lock);
+ unlock:
+    if ( d != current->domain )
+        domain_unlock(d);
 }
 
 /* Set clock to <secs,usecs> after 00:00:00 UTC, 1 January, 1970. */
-- 
2.20.1



* Re: [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list paul
@ 2020-03-16 16:53   ` Jan Beulich
  2020-03-16 18:11     ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2020-03-16 16:53 UTC (permalink / raw)
  To: paul
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
	Ian Jackson, George Dunlap, xen-devel, Roger Pau Monné

On 10.03.2020 18:49, paul@xen.org wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -257,6 +257,13 @@ void dump_pageframe_info(struct domain *d)
>                 _p(mfn_x(page_to_mfn(page))),
>                 page->count_info, page->u.inuse.type_info);
>      }
> +
> +    page_list_for_each ( page, &d->extra_page_list )
> +    {
> +        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> +               _p(mfn_x(page_to_mfn(page))),
> +               page->count_info, page->u.inuse.type_info);
> +    }
>      spin_unlock(&d->page_alloc_lock);

Another blank line above here would have been nice.

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2314,7 +2314,7 @@ int assign_pages(
>          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>          pg[i].count_info =
>              (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> -        page_list_add_tail(&pg[i], &d->page_list);
> +        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>      }

This moves the one extra page we currently have (VMX'es APIC access
page) to a different list. Without adjustment to domain cleanup,
how is this page now going to get freed? (This of course then should
be extended to Arm, even if right now there's no "extra" page there.)

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -629,10 +629,8 @@ typedef struct mm_rwlock {
>      const char        *locker_function; /* func that took it */
>  } mm_rwlock_t;
>  
> -#define arch_free_heap_page(d, pg)                                      \
> -    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
> -                       &(d)->xenpage_list : &(d)->page_list,            \
> -                   &(d)->arch.relmem_list)
> +#define arch_free_heap_page(d, pg) \
> +    page_list_del2(pg, page_to_list((d), (pg)), &(d)->arch.relmem_list)

Arguments passed on as is (i.e. not as part of an expression) don't
need parentheses.

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -583,9 +583,8 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
>  void scrub_one_page(struct page_info *);
>  
>  #ifndef arch_free_heap_page
> -#define arch_free_heap_page(d, pg)                      \
> -    page_list_del(pg, is_xen_heap_page(pg) ?            \
> -                      &(d)->xenpage_list : &(d)->page_list)
> +#define arch_free_heap_page(d, pg) \
> +    page_list_del(pg, page_to_list((d), (pg)))

Same here then.

> @@ -538,6 +539,17 @@ struct domain
>  #endif
>  };
>  
> +static inline struct page_list_head *page_to_list(
> +    struct domain *d, const struct page_info *pg)
> +{
> +    if ( is_xen_heap_page(pg) )
> +        return &d->xenpage_list;
> +    else if ( pg->count_info & PGC_extra )

Unnecessary "else".

Jan


* Re: [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
  2020-03-16 16:53   ` Jan Beulich
@ 2020-03-16 18:11     ` Paul Durrant
  2020-03-17 10:45       ` Jan Beulich
  0 siblings, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2020-03-16 18:11 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: 'Stefano Stabellini', 'Julien Grall',
	'Wei Liu', 'Andrew Cooper', 'Ian Jackson',
	'George Dunlap',
	xen-devel, 'Roger Pau Monné'

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 16 March 2020 16:53
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
> 
> On 10.03.2020 18:49, paul@xen.org wrote:
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -257,6 +257,13 @@ void dump_pageframe_info(struct domain *d)
> >                 _p(mfn_x(page_to_mfn(page))),
> >                 page->count_info, page->u.inuse.type_info);
> >      }
> > +
> > +    page_list_for_each ( page, &d->extra_page_list )
> > +    {
> > +        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> > +               _p(mfn_x(page_to_mfn(page))),
> > +               page->count_info, page->u.inuse.type_info);
> > +    }
> >      spin_unlock(&d->page_alloc_lock);
> 
> Another blank line above here would have been nice.
> 

Ok.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2314,7 +2314,7 @@ int assign_pages(
> >          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> >          pg[i].count_info =
> >              (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> > -        page_list_add_tail(&pg[i], &d->page_list);
> > +        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
> >      }
> 
> This moves the one extra page we currently have (VMX'es APIC access
> page) to a different list. Without adjustment to domain cleanup,
> how is this page now going to get freed? (This of course then should
> be extended to Arm, even if right now there's no "extra" page there.)
> 

I don't think there is any need to adjust, as the current code will drop the allocation ref in vmx_free_vlapic_mapping(), so it doesn't matter that the page is missed by relinquish_memory().

> > --- a/xen/include/asm-x86/mm.h
> > +++ b/xen/include/asm-x86/mm.h
> > @@ -629,10 +629,8 @@ typedef struct mm_rwlock {
> >      const char        *locker_function; /* func that took it */
> >  } mm_rwlock_t;
> >
> > -#define arch_free_heap_page(d, pg)                                      \
> > -    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
> > -                       &(d)->xenpage_list : &(d)->page_list,            \
> > -                   &(d)->arch.relmem_list)
> > +#define arch_free_heap_page(d, pg) \
> > +    page_list_del2(pg, page_to_list((d), (pg)), &(d)->arch.relmem_list)
> 
> Arguments passed on as is (i.e. not as part of an expression) don't
> need parentheses.
> 

Are you saying it should be:

#define arch_free_heap_page(d, pg) \
    page_list_del2(pg, page_to_list(d, pg), &(d)->arch.relmem_list)

?

> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -583,9 +583,8 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
> >  void scrub_one_page(struct page_info *);
> >
> >  #ifndef arch_free_heap_page
> > -#define arch_free_heap_page(d, pg)                      \
> > -    page_list_del(pg, is_xen_heap_page(pg) ?            \
> > -                      &(d)->xenpage_list : &(d)->page_list)
> > +#define arch_free_heap_page(d, pg) \
> > +    page_list_del(pg, page_to_list((d), (pg)))
> 
> Same here then.
> 
> > @@ -538,6 +539,17 @@ struct domain
> >  #endif
> >  };
> >
> > +static inline struct page_list_head *page_to_list(
> > +    struct domain *d, const struct page_info *pg)
> > +{
> > +    if ( is_xen_heap_page(pg) )
> > +        return &d->xenpage_list;
> > +    else if ( pg->count_info & PGC_extra )
> 
> Unnecessary "else".
>

Oh yes.

  Paul

 
> Jan



* Re: [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
  2020-03-16 18:11     ` Paul Durrant
@ 2020-03-17 10:45       ` Jan Beulich
  2020-03-17 10:51         ` Paul Durrant
  0 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2020-03-17 10:45 UTC (permalink / raw)
  To: paul
  Cc: 'Stefano Stabellini', 'Julien Grall',
	'Wei Liu', 'Andrew Cooper', 'Ian Jackson',
	'George Dunlap',
	xen-devel, 'Roger Pau Monné'

On 16.03.2020 19:11, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 16 March 2020 16:53
>>
>> On 10.03.2020 18:49, paul@xen.org wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2314,7 +2314,7 @@ int assign_pages(
>>>          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>>>          pg[i].count_info =
>>>              (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
>>> -        page_list_add_tail(&pg[i], &d->page_list);
>>> +        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>>>      }
>>
>> This moves the one extra page we currently have (VMX'es APIC access
>> page) to a different list. Without adjustment to domain cleanup,
>> how is this page now going to get freed? (This of course then should
>> be extended to Arm, even if right now there's no "extra" page there.)
>>
> 
> I don't think there is any need to adjust, as the current code will
> drop the allocation ref in vmx_free_vlapic_mapping(), so it doesn't
> matter that the page is missed by relinquish_memory().

Hmm, yes. It feels like thin ice, but I think you're right. Nevertheless
the freeing of extra pages should imo ultimately move to the central
place, i.e. it would seem more clean to me if the put_page_alloc_ref()
call was dropped from vmx_free_vlapic_mapping().

>>> --- a/xen/include/asm-x86/mm.h
>>> +++ b/xen/include/asm-x86/mm.h
>>> @@ -629,10 +629,8 @@ typedef struct mm_rwlock {
>>>      const char        *locker_function; /* func that took it */
>>>  } mm_rwlock_t;
>>>
>>> -#define arch_free_heap_page(d, pg)                                      \
>>> -    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
>>> -                       &(d)->xenpage_list : &(d)->page_list,            \
>>> -                   &(d)->arch.relmem_list)
>>> +#define arch_free_heap_page(d, pg) \
>>> +    page_list_del2(pg, page_to_list((d), (pg)), &(d)->arch.relmem_list)
>>
>> Arguments passed on as is (i.e. not as part of an expression) don't
>> need parentheses.
>>
> 
> Are you saying it should be:
> 
> #define arch_free_heap_page(d, pg) \
>     page_list_del2(pg, page_to_list(d, pg), &(d)->arch.relmem_list)

Yes. But if this and the other cosmetic changes are the only changes to
make, I'd be fine to do so while committing (if no other reason for a
v7 arises):
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


* Re: [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
  2020-03-17 10:45       ` Jan Beulich
@ 2020-03-17 10:51         ` Paul Durrant
  0 siblings, 0 replies; 20+ messages in thread
From: Paul Durrant @ 2020-03-17 10:51 UTC (permalink / raw)
  To: 'Jan Beulich'
  Cc: 'Stefano Stabellini', 'Julien Grall',
	'Wei Liu', 'Andrew Cooper', 'Ian Jackson',
	'George Dunlap',
	xen-devel, 'Roger Pau Monné'

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 17 March 2020 10:45
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'Julien Grall'
> <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Roger Pau
> Monné' <roger.pau@citrix.com>
> Subject: Re: [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list
> 
> On 16.03.2020 19:11, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 16 March 2020 16:53
> >>
> >> On 10.03.2020 18:49, paul@xen.org wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2314,7 +2314,7 @@ int assign_pages(
> >>>          smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> >>>          pg[i].count_info =
> >>>              (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> >>> -        page_list_add_tail(&pg[i], &d->page_list);
> >>> +        page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
> >>>      }
> >>
> >> This moves the one extra page we currently have (VMX'es APIC access
> >> page) to a different list. Without adjustment to domain cleanup,
> >> how is this page now going to get freed? (This of course then should
> >> be extended to Arm, even if right now there's no "extra" page there.)
> >>
> >
> > I don't think there is any need to adjust as the current code in will
> > drop the allocation ref in vmx_free_vlapic_mapping(), so it doesn't
> > matter that it is missed by relinquish_memory().
> 
> Hmm, yes. It feels like thin ice, but I think you're right. Nevertheless
> the freeing of extra pages should imo ultimately move to the central
> place, i.e. it would seem more clean to me if the put_page_alloc_ref()
> call was dropped from vmx_free_vlapic_mapping().
> 
> >>> --- a/xen/include/asm-x86/mm.h
> >>> +++ b/xen/include/asm-x86/mm.h
> >>> @@ -629,10 +629,8 @@ typedef struct mm_rwlock {
> >>>      const char        *locker_function; /* func that took it */
> >>>  } mm_rwlock_t;
> >>>
> >>> -#define arch_free_heap_page(d, pg)                                      \
> >>> -    page_list_del2(pg, is_xen_heap_page(pg) ?                           \
> >>> -                       &(d)->xenpage_list : &(d)->page_list,            \
> >>> -                   &(d)->arch.relmem_list)
> >>> +#define arch_free_heap_page(d, pg) \
> >>> +    page_list_del2(pg, page_to_list((d), (pg)), &(d)->arch.relmem_list)
> >>
> >> Arguments passed on as is (i.e. not as part of an expression) don't
> >> need parentheses.
> >>
> >
> > Are you saying it should be:
> >
> > #define arch_free_heap_page(d, pg) \
> >     page_list_del2(pg, page_to_list(d, pg), &(d)->arch.relmem_list)
> 
> Yes. But if this and the other cosmetic changes are the only changes to
> make, I'd be fine to do so while committing (if no other reason for a
> v7 arises):
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

  Paul



* Re: [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function paul
@ 2020-03-17 13:06   ` Jan Beulich
  2020-03-17 14:47     ` [Xen-devel] [EXTERNAL] " Paul Durrant
  2020-03-17 15:23     ` Julien Grall
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Beulich @ 2020-03-17 13:06 UTC (permalink / raw)
  To: paul
  Cc: Tamas K Lengyel, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap,
	Tim Deegan, Stefano Stabellini, xen-devel, Roger Pau Monné

On 10.03.2020 18:49, paul@xen.org wrote:
> In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
> needs to check for PGC_extra pages too.

"Extra" pages being the designated replacement for Xen heap ones,
I think it should. Then again the earlier

    if ( (owner = page_get_owner_and_reference(pg)) )

should succeed on them (as much as it should for Xen heap pages
shared with a domain), so perhaps simply say something to this
effect in the description?

> @@ -4216,8 +4216,7 @@ int steal_page(
>      if ( !(owner = page_get_owner_and_reference(page)) )
>          goto fail;
>  
> -    if ( owner != d || is_xen_heap_page(page) ||
> -         (page->count_info & PGC_extra) )
> +    if ( owner != d || is_special_page(page) )
>          goto fail_put;
>  
>      /*

A few hundred lines down from here in xenmem_add_to_physmap_one()
there is a use of is_xen_heap_mfn(). Any reason that doesn't get
converted? Same question - because of the code being similar -
then goes for mm/p2m.c:p2m_add_foreign().

> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>  
>          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
> -            if ( !(page->count_info & PGC_allocated) ||
> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
> +            if ( is_special_page(page) ||
> +                 !(page->count_info & PGC_allocated) ||
> +                 (page->count_info & PGC_page_table) ||
>                   (page->count_info & PGC_count_mask) > max_ref )
>                  goto out;
>      }
> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>           * If this is ram, and not a pagetable or from the xen heap, and
>           * probably not mapped elsewhere, map it; otherwise, skip.
>           */
> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
> +             (pg->count_info & PGC_allocated) &&
> +             !(pg->count_info & PGC_page_table) &&
>               ((pg->count_info & PGC_count_mask) <= max_ref) )
>              map[i] = map_domain_page(mfns[i]);
>          else

I appreciate your desire to use the inline function you add, and
I also appreciate that you likely prefer to not make the other
suggested change (at least not right here), but then I think the
commit message would better gain a remark regarding the
suspicious uses of PGC_page_table here. I continue to think that
they should be dropped as being pointless. In any event I fear
the resulting code will be less efficient, as I'm unconvinced
that the compiler will fold the now split bit checks.
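
For illustration, a folded form of the superpage check above, keeping all
the flag tests in one mask comparison and assuming is_xen_heap_page() is
just a PGC_xen_heap test as on x86, might look roughly like this (a sketch,
not code from the series):

        for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
            if ( (page->count_info & (PGC_allocated | PGC_page_table |
                                      PGC_xen_heap | PGC_extra)) !=
                     PGC_allocated ||
                 (page->count_info & PGC_count_mask) > max_ref )
                goto out;

The masked value equals PGC_allocated only when the page is allocated and
none of the "special" bits are set, so this is equivalent to the split
checks while keeping the bit tests together.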

> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
>       * caching attributes in the shadows to match what was asked for.
>       */
>      if ( (level == 1) && is_hvm_domain(d) &&
> -         !is_xen_heap_mfn(target_mfn) )
> +         !is_special_page(mfn_to_page(target_mfn)) )

Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
Code a few lines up from here suggests that MMIO MFNs can make
it here.
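
In other words, a form that keeps the implicit validity check might read
roughly as follows (a sketch only; the exact shape is left to a later
revision):

    if ( (level == 1) && is_hvm_domain(d) &&
         !(mfn_valid(target_mfn) &&
           is_special_page(mfn_to_page(target_mfn))) )

so that mfn_to_page() is never applied to an MMIO MFN.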

> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
>  
>  #include <asm/mm.h>
>  
> +static inline bool is_special_page(const struct page_info *page)
> +{
> +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);

Seeing Arm32's implementation I understand why you need to use
|| here; it's a pity the two checks can't be folded. Hopefully
at least here the compiler recognizes the opportunity.

Jan


* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
@ 2020-03-17 13:14   ` Jan Beulich
  2020-03-17 14:48     ` Durrant, Paul
  2020-03-24  9:26   ` Jan Beulich
  2020-03-24 14:22   ` Julien Grall
  2 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2020-03-17 13:14 UTC (permalink / raw)
  To: paul
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap,
	xen-devel, Volodymyr Babchuk

On 10.03.2020 18:49, paul@xen.org wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently shared_info is a shared xenheap page but shared xenheap pages
> complicate future plans for live-update of Xen so it is desirable to,
> where possible, not use them [1]. This patch therefore converts shared_info
> into a PGC_extra domheap page. This does entail freeing shared_info during
> domain_relinquish_resources() rather than domain_destroy() so care is
> needed to avoid de-referencing a NULL shared_info pointer hence some
> extra checks of 'is_dying' are needed.

If there's going to be agreement to follow this route, the implementation,
with a really minor cosmetic adjustment - see below -, looks okay to me.
Nevertheless I continue to dislike the implication from the extra care
that's now needed. As I think I have said before, I'd like to have at
least one other REST maintainer's opinion here.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -260,9 +260,12 @@ void dump_pageframe_info(struct domain *d)
>  
>      page_list_for_each ( page, &d->extra_page_list )
>      {
> -        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> +        const char *tag = mfn_eq(page_to_mfn(page), d->shared_info.mfn) ?
> +            "[SHARED INFO]" : "";

Please can this be " [SHARED INFO]" with ...

> +        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info " %s\n",

... the blank before the final %s dropped here, such that we won't
have a trailing blank in the output?
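
That is, the requested adjustment amounts to something like (sketch; the
printk arguments are unchanged and omitted here):

        const char *tag = mfn_eq(page_to_mfn(page), d->shared_info.mfn) ?
            " [SHARED INFO]" : "";

        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "%s\n", ...);

which avoids a trailing blank when the tag is empty.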

Jan


* Re: [Xen-devel] [EXTERNAL] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
  2020-03-17 13:06   ` Jan Beulich
@ 2020-03-17 14:47     ` Paul Durrant
  2020-03-17 15:01       ` [Xen-devel] " Jan Beulich
  2020-03-17 15:23     ` Julien Grall
  1 sibling, 1 reply; 20+ messages in thread
From: Paul Durrant @ 2020-03-17 14:47 UTC (permalink / raw)
  To: 'Jan Beulich', paul
  Cc: 'Tamas K Lengyel', 'Julien Grall',
	'Wei Liu', 'Konrad Rzeszutek Wilk',
	'Andrew Cooper', 'Ian Jackson',
	'George Dunlap', 'Tim Deegan',
	'Stefano Stabellini',
	xen-devel, 'Roger Pau Monné'

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 17 March 2020 13:07
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Tamas K Lengyel
> <tamas@tklengyel.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau
> Monné <roger.pau@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
> 
> On 10.03.2020 18:49, paul@xen.org wrote:
> > In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
> > needs to check for PGC_extra pages too.
> 
> "Extra" pages being the designated replacement for Xen heap ones,
> I think it should. Then again the earlier
> 
>     if ( (owner = page_get_owner_and_reference(pg)) )
> 
> should succeed on them (as much as it should for Xen heap pages
> shared with a domain), so perhaps simply say something to this
> effect in the description?
> 
> > @@ -4216,8 +4216,7 @@ int steal_page(
> >      if ( !(owner = page_get_owner_and_reference(page)) )
> >          goto fail;
> >
> > -    if ( owner != d || is_xen_heap_page(page) ||
> > -         (page->count_info & PGC_extra) )
> > +    if ( owner != d || is_special_page(page) )
> >          goto fail_put;
> >
> >      /*
> 
> A few hundred lines down from here in xenmem_add_to_physmap_one()
> there is a use of is_xen_heap_mfn(). Any reason that doesn't get
> converted? Same question - because of the code being similar -
> then goes for mm/p2m.c:p2m_add_foreign().
> 

I'll check again.

> > --- a/xen/arch/x86/mm/p2m-pod.c
> > +++ b/xen/arch/x86/mm/p2m-pod.c
> > @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
> >
> >          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
> >          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
> > -            if ( !(page->count_info & PGC_allocated) ||
> > -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
> > +            if ( is_special_page(page) ||
> > +                 !(page->count_info & PGC_allocated) ||
> > +                 (page->count_info & PGC_page_table) ||
> >                   (page->count_info & PGC_count_mask) > max_ref )
> >                  goto out;
> >      }
> > @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
> >           * If this is ram, and not a pagetable or from the xen heap, and
> >           * probably not mapped elsewhere, map it; otherwise, skip.
> >           */
> > -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
> > -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
> > +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
> > +             (pg->count_info & PGC_allocated) &&
> > +             !(pg->count_info & PGC_page_table) &&
> >               ((pg->count_info & PGC_count_mask) <= max_ref) )
> >              map[i] = map_domain_page(mfns[i]);
> >          else
> 
> I appreciate your desire to use the inline function you add, and
> I also appreciate that you likely prefer to not make the other
> suggested change (at least not right here), but then I think the
> commit message would better gain a remark regarding the
> suspicious uses of PGC_page_table here.

What's suspicious about it? I now realise the above comment also needs fixing.

> I continue to think that
> they should be dropped as being pointless. In any event I fear
> the resulting code will be less efficient, as I'm unconvinced
> that the compiler will fold the now split bit checks.
> 

I could go back to defining is_special_page() as a macro.
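
(i.e. something along the lines of the following sketch, equivalent to the
inline function quoted below:

#define is_special_page(page) \
    (is_xen_heap_page(page) || ((page)->count_info & PGC_extra))

though, as Jan notes further down, this is unlikely to change what the
compiler generates.)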

> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
> >       * caching attributes in the shadows to match what was asked for.
> >       */
> >      if ( (level == 1) && is_hvm_domain(d) &&
> > -         !is_xen_heap_mfn(target_mfn) )
> > +         !is_special_page(mfn_to_page(target_mfn)) )
> 
> Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
> Code a few lines up from here suggests that MMIO MFNs can make
> it here.
> 

Ok.

> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
> >
> >  #include <asm/mm.h>
> >
> > +static inline bool is_special_page(const struct page_info *page)
> > +{
> > +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
> 
> Seeing Arm32's implementation I understand why you need to use
> || here; it's a pity the two checks can't be folded. Hopefully
> at least here the compiler recognizes the opportunity.
> 

Yes.

  Paul

> Jan



* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-17 13:14   ` Jan Beulich
@ 2020-03-17 14:48     ` Durrant, Paul
  0 siblings, 0 replies; 20+ messages in thread
From: Durrant, Paul @ 2020-03-17 14:48 UTC (permalink / raw)
  To: Jan Beulich, paul
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, George Dunlap, xen-devel,
	Volodymyr Babchuk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 17 March 2020 13:14
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
> 
> On 10.03.2020 18:49, paul@xen.org wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently shared_info is a shared xenheap page but shared xenheap pages
> > complicate future plans for live-update of Xen so it is desirable to,
> > where possible, not use them [1]. This patch therefore converts shared_info
> > into a PGC_extra domheap page. This does entail freeing shared_info during
> > domain_relinquish_resources() rather than domain_destroy() so care is
> > needed to avoid de-referencing a NULL shared_info pointer hence some
> > extra checks of 'is_dying' are needed.
> 
> If there's going to be agreement to follow this route, the implementation,
> with a really minor cosmetic adjustment - see below -, looks okay to me.
> Nevertheless I continue to dislike the implication from the extra care
> that's now needed. As I think I have said before, I'd like to have at
> least one other REST maintainer's opinion here.
> 

Ok, fair enough.

> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -260,9 +260,12 @@ void dump_pageframe_info(struct domain *d)
> >
> >      page_list_for_each ( page, &d->extra_page_list )
> >      {
> > -        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info "\n",
> > +        const char *tag = mfn_eq(page_to_mfn(page), d->shared_info.mfn) ?
> > +            "[SHARED INFO]" : "";
> 
> Please can this be " [SHARED INFO]" with ...
> 
> > +        printk("    ExtraPage %p: caf=%08lx, taf=%" PRtype_info " %s\n",
> 
> ... the blank before the final %s dropped here, such that we won't
> have a trailing blank in the output?

Sure.

  Paul

> 
> Jan

* Re: [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
  2020-03-17 14:47     ` [Xen-devel] [EXTERNAL] " Paul Durrant
@ 2020-03-17 15:01       ` Jan Beulich
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2020-03-17 15:01 UTC (permalink / raw)
  To: paul
  Cc: 'Tamas K Lengyel', 'Julien Grall',
	'Wei Liu', 'Konrad Rzeszutek Wilk',
	'Andrew Cooper', 'Ian Jackson',
	'George Dunlap', 'Tim Deegan',
	'Stefano Stabellini',
	xen-devel, 'Roger Pau Monné'

On 17.03.2020 15:47, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 17 March 2020 13:07
>>
>> On 10.03.2020 18:49, paul@xen.org wrote:
>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>>>
>>>          n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>>>          for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
>>> -            if ( !(page->count_info & PGC_allocated) ||
>>> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
>>> +            if ( is_special_page(page) ||
>>> +                 !(page->count_info & PGC_allocated) ||
>>> +                 (page->count_info & PGC_page_table) ||
>>>                   (page->count_info & PGC_count_mask) > max_ref )
>>>                  goto out;
>>>      }
>>> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>>>           * If this is ram, and not a pagetable or from the xen heap, and
>>>           * probably not mapped elsewhere, map it; otherwise, skip.
>>>           */
>>> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
>>> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
>>> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
>>> +             (pg->count_info & PGC_allocated) &&
>>> +             !(pg->count_info & PGC_page_table) &&
>>>               ((pg->count_info & PGC_count_mask) <= max_ref) )
>>>              map[i] = map_domain_page(mfns[i]);
>>>          else
>>
>> I appreciate your desire to use the inline function you add, and
>> I also appreciate that you likely prefer to not make the other
>> suggested change (at least not right here), but then I think the
>> commit message would better gain a remark regarding the
>> suspicious uses of PGC_page_table here.
> 
> What's suspicious about it?

Hmm, looks like I was wrongly remembering shadow code to be setting
this on the shadows of guest page tables, rather than on the guest
page tables themselves. I'm sorry for the noise. (Keeping the bit-
is-set tests together may still be a good idea though code generation
wise, unless you know that all halfway recent compiler versions can
fold the code fine in its current shape.)

> I now realise the above comment also needs fixing.

Oh, indeed.

>> I continue to think that
>> they should be dropped as being pointless. In any event I fear
>> the resulting code will be less efficient, as I'm unconvinced
>> that the compiler will fold the now split bit checks.
> 
> I could go back to defining is_special_page() as a macro.

I don't think this would make a difference.

Jan


* Re: [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function...
  2020-03-17 13:06   ` Jan Beulich
  2020-03-17 14:47     ` [Xen-devel] [EXTERNAL] " Paul Durrant
@ 2020-03-17 15:23     ` Julien Grall
  1 sibling, 0 replies; 20+ messages in thread
From: Julien Grall @ 2020-03-17 15:23 UTC (permalink / raw)
  To: Jan Beulich, paul
  Cc: Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk, Andrew Cooper,
	Paul Durrant, Ian Jackson, George Dunlap, Tim Deegan,
	Stefano Stabellini, xen-devel, Roger Pau Monné



On 17/03/2020 13:06, Jan Beulich wrote:
> On 10.03.2020 18:49, paul@xen.org wrote:
>> In auditing open-coded tests of PGC_xen_heap, I am unsure if offline_page()
>> needs to check for PGC_extra pages too.
> 
> "Extra" pages being the designated replacement for Xen heap ones,
> I think it should. Then again the earlier
> 
>      if ( (owner = page_get_owner_and_reference(pg)) )
> 
> should succeed on them (as much as it should for Xen heap pages
> shared with a domain), so perhaps simply say something to this
> effect in the description?
> 
>> @@ -4216,8 +4216,7 @@ int steal_page(
>>       if ( !(owner = page_get_owner_and_reference(page)) )
>>           goto fail;
>>   
>> -    if ( owner != d || is_xen_heap_page(page) ||
>> -         (page->count_info & PGC_extra) )
>> +    if ( owner != d || is_special_page(page) )
>>           goto fail_put;
>>   
>>       /*
> 
> A few hundred lines down from here in xenmem_add_to_physmap_one()
> there is a use of is_xen_heap_mfn(). Any reason that doesn't get
> converted? Same question - because of the code being similar -
> then goes for mm/p2m.c:p2m_add_foreign().
> 
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -749,8 +749,9 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, gfn_t gfn)
>>   
>>           n = 1UL << min(cur_order, SUPERPAGE_ORDER + 0U);
>>           for ( k = 0, page = mfn_to_page(mfn); k < n; ++k, ++page )
>> -            if ( !(page->count_info & PGC_allocated) ||
>> -                 (page->count_info & (PGC_page_table | PGC_xen_heap)) ||
>> +            if ( is_special_page(page) ||
>> +                 !(page->count_info & PGC_allocated) ||
>> +                 (page->count_info & PGC_page_table) ||
>>                    (page->count_info & PGC_count_mask) > max_ref )
>>                   goto out;
>>       }
>> @@ -886,8 +887,9 @@ p2m_pod_zero_check(struct p2m_domain *p2m, const gfn_t *gfns, unsigned int count
>>            * If this is ram, and not a pagetable or from the xen heap, and
>>            * probably not mapped elsewhere, map it; otherwise, skip.
>>            */
>> -        if ( p2m_is_ram(types[i]) && (pg->count_info & PGC_allocated) &&
>> -             !(pg->count_info & (PGC_page_table | PGC_xen_heap)) &&
>> +        if ( p2m_is_ram(types[i]) && !is_special_page(pg) &&
>> +             (pg->count_info & PGC_allocated) &&
>> +             !(pg->count_info & PGC_page_table) &&
>>                ((pg->count_info & PGC_count_mask) <= max_ref) )
>>               map[i] = map_domain_page(mfns[i]);
>>           else
> 
> I appreciate your desire to use the inline function you add, and
> I also appreciate that you likely prefer to not make the other
> suggested change (at least not right here), but then I think the
> commit message would better gain a remark regarding the
> suspicious uses of PGC_page_table here. I continue to think that
> they should be dropped as being pointless. In any event I fear
> the resulting code will be less efficient, as I'm unconvinced
> that the compiler will fold the now split bit checks.
> 
>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -559,7 +559,7 @@ _sh_propagate(struct vcpu *v,
>>        * caching attributes in the shadows to match what was asked for.
>>        */
>>       if ( (level == 1) && is_hvm_domain(d) &&
>> -         !is_xen_heap_mfn(target_mfn) )
>> +         !is_special_page(mfn_to_page(target_mfn)) )
> 
> Careful - is_xen_heap_mfn() also includes an mfn_valid() check.
> Code a few lines up from here suggests that MMIO MFNs can make
> it here.
> 
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -285,6 +285,11 @@ extern struct domain *dom_cow;
>>   
>>   #include <asm/mm.h>
>>   
>> +static inline bool is_special_page(const struct page_info *page)
>> +{
>> +    return is_xen_heap_page(page) || (page->count_info & PGC_extra);
> 
> Seeing Arm32's implementation I understand why you need to use
> || here; it's a pity the two checks can't be folded. Hopefully
> at least here the compiler recognizes the opportunity.

Note this is not a request for this series :).

I noticed this oddity recently. I have been pondering whether we should
aim to have PGC_xen_heap set even when using a split xenheap (like on
Arm32).

This would avoid problems like the offline code, which is common code,
being broken on platforms using a split xenheap.
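
Illustratively, if PGC_xen_heap were maintained on split-xenheap
configurations too, is_special_page() could presumably collapse to a single
flag test (a sketch, not a proposal for this series):

static inline bool is_special_page(const struct page_info *page)
{
    return page->count_info & (PGC_xen_heap | PGC_extra);
}

which would also let common code such as offline_page() reason about
xenheap pages uniformly.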

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
  2020-03-17 13:14   ` Jan Beulich
@ 2020-03-24  9:26   ` Jan Beulich
  2020-03-24  9:31     ` Jan Beulich
  2020-03-24 14:22   ` Julien Grall
  2 siblings, 1 reply; 20+ messages in thread
From: Jan Beulich @ 2020-03-24  9:26 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: Wei Liu, paul, Andrew Cooper, Paul Durrant,
	Konrad Rzeszutek Wilk, Ian Jackson, George Dunlap, xen-devel,
	Volodymyr Babchuk

On 10.03.2020 18:49, paul@xen.org wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently shared_info is a shared xenheap page but shared xenheap pages
> complicate future plans for live-update of Xen so it is desirable to,
> where possible, not use them [1]. This patch therefore converts shared_info
> into a PGC_extra domheap page. This does entail freeing shared_info during
> domain_relinquish_resources() rather than domain_destroy() so care is
> needed to avoid de-referencing a NULL shared_info pointer hence some
> extra checks of 'is_dying' are needed.
> 
> NOTE: For Arm, the call to free_shared_info() in arch_domain_destroy() is
>       left in place since it is idempotent and called in the error path for
>       arch_domain_create().
> 
> [1] See https://lists.xenproject.org/archives/html/xen-devel/2020-02/msg02018.html
> 
> Signed-off-by: Paul Durrant <paul@xen.org>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien@xen.org>
> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Wei Liu <wl@xen.org>
> 
> v6:
>  - Drop dump_shared_info() but tag the shared info in the 'ExtraPage'
>    dump
> 
> v5:
>  - Incorporate new dump_shared_info() function
> 
> v2:
>  - Addressed comments from Julien
>  - Expanded the commit comment to explain why this patch is wanted
> ---
>  xen/arch/arm/domain.c      |  2 ++

Julien, Stefano? (I'd prefer to commit the entire series in one go,
rather than leaving out just this last patch.)

Jan



* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-24  9:26   ` Jan Beulich
@ 2020-03-24  9:31     ` Jan Beulich
  0 siblings, 0 replies; 20+ messages in thread
From: Jan Beulich @ 2020-03-24  9:31 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: Wei Liu, paul, Andrew Cooper, Paul Durrant,
	Konrad Rzeszutek Wilk, Ian Jackson, George Dunlap, xen-devel,
	Volodymyr Babchuk

On 24.03.2020 10:26, Jan Beulich wrote:
> On 10.03.2020 18:49, paul@xen.org wrote:
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> Currently shared_info is a shared xenheap page but shared xenheap pages
>> complicate future plans for live-update of Xen so it is desirable to,
>> where possible, not use them [1]. This patch therefore converts shared_info
>> into a PGC_extra domheap page. This does entail freeing shared_info during
>> domain_relinquish_resources() rather than domain_destroy() so care is
>> needed to avoid de-referencing a NULL shared_info pointer hence some
>> extra checks of 'is_dying' are needed.
>>
>> NOTE: For Arm, the call to free_shared_info() in arch_domain_destroy() is
>>       left in place since it is idempotent and called in the error path for
>>       arch_domain_create().
>>
>> [1] See https://lists.xenproject.org/archives/html/xen-devel/2020-02/msg02018.html
>>
>> Signed-off-by: Paul Durrant <paul@xen.org>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien@xen.org>
>> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: George Dunlap <george.dunlap@citrix.com>
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: Wei Liu <wl@xen.org>
>>
>> v6:
>>  - Drop dump_shared_info() but tag the shared info in the 'ExtraPage'
>>    dump
>>
>> v5:
>>  - Incorporate new dump_shared_info() function
>>
>> v2:
>>  - Addressed comments from Julien
>>  - Expanded the commit comment to explain why this patch is wanted
>> ---
>>  xen/arch/arm/domain.c      |  2 ++
> 
> Julien, Stefano? (I'd prefer to commit the entire series in one go,
> rather than leaving out just this last patch.)

Actually - never mind, I've just realized that there are still some
pending items on the last two patches of this series.

Jan



* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
  2020-03-17 13:14   ` Jan Beulich
  2020-03-24  9:26   ` Jan Beulich
@ 2020-03-24 14:22   ` Julien Grall
  2020-03-24 14:28     ` Durrant, Paul
  2 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2020-03-24 14:22 UTC (permalink / raw)
  To: paul, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap,
	Jan Beulich, Volodymyr Babchuk

Hi Paul,

On 10/03/2020 17:49, paul@xen.org wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently shared_info is a shared xenheap page but shared xenheap pages
> complicate future plans for live-update of Xen so it is desirable to,
> where possible, not use them [1]. This patch therefore converts shared_info
> into a PGC_extra domheap page. This does entail freeing shared_info during
> domain_relinquish_resources() rather than domain_destroy() so care is
> needed to avoid de-referencing a NULL shared_info pointer hence some
> extra checks of 'is_dying' are needed.
> 
> NOTE: For Arm, the call to free_shared_info() in arch_domain_destroy() is
>        left in place since it is idempotent and called in the error path for
>        arch_domain_create().

The approach looks good to me. I have one comment below.

[...]

> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 4ef0d3b21e..4f3266454f 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1651,24 +1651,44 @@ int continue_hypercall_on_cpu(
>   
>   int alloc_shared_info(struct domain *d, unsigned int memflags)
>   {
> -    if ( (d->shared_info.virt = alloc_xenheap_pages(0, memflags)) == NULL )
> +    struct page_info *pg;
> +
> +    pg = alloc_domheap_page(d, MEMF_no_refcount | memflags);
> +    if ( !pg )
>           return -ENOMEM;
>   
> -    d->shared_info.mfn = virt_to_mfn(d->shared_info.virt);
> +    if ( !get_page_and_type(pg, d, PGT_writable_page) )
> +    {
> +        /*
> +         * The domain should not be running at this point so there is
> +         * no way we should reach this error path.
> +         */
> +        ASSERT_UNREACHABLE();
> +        return -ENODATA;
> +    }
> +
> +    d->shared_info.mfn = page_to_mfn(pg);
> +    d->shared_info.virt = __map_domain_page_global(pg);

__map_domain_page_global() is not guaranteed to succeed. For instance, 
on Arm32 this will be a call to vmap().

So you want to check the return.
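
A minimal form of such a check might be the following sketch (the unwind
path here is an assumption, not code from the series):

    d->shared_info.virt = __map_domain_page_global(pg);
    if ( !d->shared_info.virt )
    {
        put_page_alloc_ref(pg);
        put_page_and_type(pg);
        return -ENOMEM;
    }

dropping both the allocation reference and the writable type reference
taken above before failing.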

Cheers,

-- 
Julien Grall



* Re: [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info
  2020-03-24 14:22   ` Julien Grall
@ 2020-03-24 14:28     ` Durrant, Paul
  0 siblings, 0 replies; 20+ messages in thread
From: Durrant, Paul @ 2020-03-24 14:28 UTC (permalink / raw)
  To: Julien Grall, paul, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, George Dunlap, Jan Beulich,
	Volodymyr Babchuk

> -----Original Message-----
[snip]
> > Currently shared_info is a shared xenheap page but shared xenheap pages
> > complicate future plans for live-update of Xen so it is desirable to,
> > where possible, not use them [1]. This patch therefore converts shared_info
> > into a PGC_extra domheap page. This does entail freeing shared_info during
> > domain_relinquish_resources() rather than domain_destroy() so care is
> > needed to avoid de-referencing a NULL shared_info pointer hence some
> > extra checks of 'is_dying' are needed.
> >
> > NOTE: For Arm, the call to free_shared_info() in arch_domain_destroy() is
> >        left in place since it is idempotent and called in the error path for
> >        arch_domain_create().
> 
> The approach looks good to me. I have one comment below.
> 

Thanks.

> [...]
> 
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 4ef0d3b21e..4f3266454f 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -1651,24 +1651,44 @@ int continue_hypercall_on_cpu(
> >
> >   int alloc_shared_info(struct domain *d, unsigned int memflags)
> >   {
> > -    if ( (d->shared_info.virt = alloc_xenheap_pages(0, memflags)) == NULL )
> > +    struct page_info *pg;
> > +
> > +    pg = alloc_domheap_page(d, MEMF_no_refcount | memflags);
> > +    if ( !pg )
> >           return -ENOMEM;
> >
> > -    d->shared_info.mfn = virt_to_mfn(d->shared_info.virt);
> > +    if ( !get_page_and_type(pg, d, PGT_writable_page) )
> > +    {
> > +        /*
> > +         * The domain should not be running at this point so there is
> > +         * no way we should reach this error path.
> > +         */
> > +        ASSERT_UNREACHABLE();
> > +        return -ENODATA;
> > +    }
> > +
> > +    d->shared_info.mfn = page_to_mfn(pg);
> > +    d->shared_info.virt = __map_domain_page_global(pg);
> 
> __map_domain_page_global() is not guaranteed to succeed. For instance,
> on Arm32 this will be a call to vmap().
> 
> So you want to check the return.
> 

Ok, I'll add a check.

As Jan discovered, I messed up the version numbering, so there is already a v7 series in which I dropped this patch (until I can group it with the conversion of other shared xenheap pages).

  Paul

> Cheers,
> 
> --
> Julien Grall


Thread overview: 20+ messages
2020-03-10 17:49 [Xen-devel] [PATCH v6 0/5] remove one more shared xenheap page: shared_info paul
2020-03-10 17:49 ` [Xen-devel] [PATCH v6 1/5] domain: introduce alloc/free_shared_info() helpers paul
2020-03-10 17:49 ` [Xen-devel] [PATCH v6 2/5] mm: keep PGC_extra pages on a separate list paul
2020-03-16 16:53   ` Jan Beulich
2020-03-16 18:11     ` Paul Durrant
2020-03-17 10:45       ` Jan Beulich
2020-03-17 10:51         ` Paul Durrant
2020-03-10 17:49 ` [Xen-devel] [PATCH v6 3/5] x86 / ioreq: use a MEMF_no_refcount allocation for server pages paul
2020-03-10 17:49 ` [Xen-devel] [PATCH v6 4/5] mm: add 'is_special_page' inline function paul
2020-03-17 13:06   ` Jan Beulich
2020-03-17 14:47     ` [Xen-devel] [EXTERNAL] " Paul Durrant
2020-03-17 15:01       ` [Xen-devel] " Jan Beulich
2020-03-17 15:23     ` Julien Grall
2020-03-10 17:49 ` [Xen-devel] [PATCH v6 5/5] domain: use PGC_extra domheap page for shared_info paul
2020-03-17 13:14   ` Jan Beulich
2020-03-17 14:48     ` Durrant, Paul
2020-03-24  9:26   ` Jan Beulich
2020-03-24  9:31     ` Jan Beulich
2020-03-24 14:22   ` Julien Grall
2020-03-24 14:28     ` Durrant, Paul
