* [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
@ 2020-02-03 10:56 Paul Durrant
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Paul Durrant @ 2020-02-03 10:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap,
	Tim Deegan, Jun Nakajima, Volodymyr Babchuk, Roger Pau Monné

Paul Durrant (4):
  x86 / vmx: move teardown from domain_destroy()...
  add a domain_tot_pages() helper function
  mm: make pages allocated with MEMF_no_refcount safe to assign
  x86 / vmx: use a MEMF_no_refcount domheap page for
    APIC_DEFAULT_PHYS_BASE

 xen/arch/arm/arm64/domctl.c     |  2 +-
 xen/arch/x86/domain.c           |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c      | 25 ++++++++---
 xen/arch/x86/mm.c               | 15 ++-----
 xen/arch/x86/mm/p2m-pod.c       | 10 ++---
 xen/arch/x86/mm/shadow/common.c |  2 +-
 xen/arch/x86/msi.c              |  2 +-
 xen/arch/x86/numa.c             |  2 +-
 xen/arch/x86/pv/dom0_build.c    | 25 ++++++-----
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/pv/shim.c          |  4 +-
 xen/common/domctl.c             |  2 +-
 xen/common/grant_table.c        |  4 +-
 xen/common/keyhandler.c         |  2 +-
 xen/common/memory.c             |  2 +-
 xen/common/page_alloc.c         | 78 ++++++++++++++++++++++++---------
 xen/include/asm-arm/mm.h        |  5 ++-
 xen/include/asm-x86/mm.h        |  9 ++--
 xen/include/public/memory.h     |  4 +-
 xen/include/xen/sched.h         | 27 +++++++++---
 20 files changed, 143 insertions(+), 81 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



* [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy()...
  2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
@ 2020-02-03 10:56 ` Paul Durrant
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function Paul Durrant
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Paul Durrant @ 2020-02-03 10:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, George Dunlap,
	Jun Nakajima, Roger Pau Monné

... to domain_relinquish_resources().

The teardown code frees the APICv page. This does not need to be done
late, so do it in domain_relinquish_resources() rather than
domain_destroy().
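
As a rough standalone sketch (not Xen code; only the ordering of the two
teardown phases is taken from the hypervisor, everything else here is
assumed for illustration), the point is that the relinquish path already
runs early enough to free the page:

    #include <stdio.h>

    struct fake_domain {
        int vlapic_page_allocated;
    };

    /* Phase 1: reached from domain_kill(), domain still referenced. */
    static void relinquish_resources(struct fake_domain *d)
    {
        if ( d->vlapic_page_allocated )
        {
            d->vlapic_page_allocated = 0;   /* after this patch: freed here */
            printf("APIC access page freed early\n");
        }
    }

    /* Phase 2: reached via RCU once the last domain reference is gone. */
    static void destroy(struct fake_domain *d)
    {
        (void)d;                            /* nothing vLAPIC-related left */
    }

    int main(void)
    {
        struct fake_domain d = { .vlapic_page_allocated = 1 };

        relinquish_resources(&d);
        destroy(&d);
        return 0;
    }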

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v4:
  - New in v4 (disaggregated from v3 patch #3)
---
 xen/arch/x86/hvm/vmx/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b262d38a7c..606f3dc2eb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -419,7 +419,7 @@ static int vmx_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void vmx_domain_destroy(struct domain *d)
+static void vmx_domain_relinquish_resources(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -2240,7 +2240,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_destroy       = vmx_domain_destroy,
+    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
-- 
2.20.1



* [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function
  2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
@ 2020-02-03 10:56 ` Paul Durrant
  2020-02-03 11:40   ` Jan Beulich
  2020-02-06  9:46   ` Julien Grall
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign Paul Durrant
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Paul Durrant @ 2020-02-03 10:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Paul Durrant, Ian Jackson,
	Tim Deegan, Roger Pau Monné

This patch adds a new domain_tot_pages() inline helper function into
sched.h, which will be needed by a subsequent patch.

No functional change.

NOTE: While modifying the comment for 'tot_pages' in sched.h, this patch
      makes some cosmetic fixes to surrounding comments.
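
For illustration, a minimal compile-and-run sketch (not the real sched.h;
'struct domain' is reduced to the fields involved) of the helper and a
converted call site. Since the helper currently just returns the field,
every hunk in this patch is a like-for-like substitution:

    #include <assert.h>

    struct domain {
        unsigned int tot_pages;
        unsigned int max_pages;
    };

    static inline unsigned int domain_tot_pages(const struct domain *d)
    {
        return d->tot_pages;   /* same value today; indirection for later */
    }

    int main(void)
    {
        struct domain d = { .tot_pages = 3, .max_pages = 8 };

        /* Typical converted check, as in alloc_chunk()/assign_pages(). */
        assert(domain_tot_pages(&d) + (1 << 2) <= d.max_pages);
        return 0;
    }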

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>

v9:
 - Fix missing changes in PV shim
 - Dropped some comment changes

v8:
 - New in v8
---
 xen/arch/arm/arm64/domctl.c     |  2 +-
 xen/arch/x86/domain.c           |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/mm/p2m-pod.c       | 10 +++++-----
 xen/arch/x86/mm/shadow/common.c |  2 +-
 xen/arch/x86/msi.c              |  2 +-
 xen/arch/x86/numa.c             |  2 +-
 xen/arch/x86/pv/dom0_build.c    | 25 +++++++++++++------------
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/pv/shim.c          |  4 ++--
 xen/common/domctl.c             |  2 +-
 xen/common/grant_table.c        |  4 ++--
 xen/common/keyhandler.c         |  2 +-
 xen/common/memory.c             |  2 +-
 xen/common/page_alloc.c         | 15 ++++++++-------
 xen/include/public/memory.h     |  4 ++--
 xen/include/xen/sched.h         | 24 ++++++++++++++++++------
 17 files changed, 60 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index ab8781fb91..0de89b42c4 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -18,7 +18,7 @@ static long switch_mode(struct domain *d, enum domain_type type)
 
     if ( d == NULL )
         return -EINVAL;
-    if ( d->tot_pages != 0 )
+    if ( domain_tot_pages(d) != 0 )
         return -EBUSY;
     if ( d->arch.type == type )
         return 0;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 28fefa1f81..643c23ffb0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -218,7 +218,7 @@ void dump_pageframe_info(struct domain *d)
 
     printk("Memory pages belonging to domain %u:\n", d->domain_id);
 
-    if ( d->tot_pages >= 10 && d->is_dying < DOMDYING_dead )
+    if ( domain_tot_pages(d) >= 10 && d->is_dying < DOMDYING_dead )
     {
         printk("    DomPage list too long to display\n");
     }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f50c065af3..e1b041e2df 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4870,7 +4870,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         else if ( rc >= 0 )
         {
             p2m = p2m_get_hostp2m(d);
-            target.tot_pages       = d->tot_pages;
+            target.tot_pages       = domain_tot_pages(d);
             target.pod_cache_pages = p2m->pod.count;
             target.pod_entries     = p2m->pod.entry_count;
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 096e2773fb..f2c9409568 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -302,7 +302,7 @@ out:
  * The following equations should hold:
  *  0 <= P <= T <= B <= M
  *  d->arch.p2m->pod.entry_count == B - P
- *  d->tot_pages == P + d->arch.p2m->pod.count
+ *  domain_tot_pages(d) == P + d->arch.p2m->pod.count
  *
  * Now we have the following potential cases to cover:
  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
@@ -336,7 +336,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
     pod_lock(p2m);
 
     /* P == B: Nothing to do (unless the guest is being created). */
-    populated = d->tot_pages - p2m->pod.count;
+    populated = domain_tot_pages(d) - p2m->pod.count;
     if ( populated > 0 && p2m->pod.entry_count == 0 )
         goto out;
 
@@ -348,7 +348,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
      * T' < B: Don't reduce the cache size; let the balloon driver
      * take care of it.
      */
-    if ( target < d->tot_pages )
+    if ( target < domain_tot_pages(d) )
         goto out;
 
     pod_target = target - populated;
@@ -1231,8 +1231,8 @@ out_of_memory:
     pod_unlock(p2m);
 
     printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
-           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
-           current->domain->domain_id);
+           __func__, d->domain_id, domain_tot_pages(d),
+           p2m->pod.entry_count, current->domain->domain_id);
     domain_crash(d);
     return false;
 out_fail:
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..cba3ab1eba 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1256,7 +1256,7 @@ static unsigned int sh_min_allocation(const struct domain *d)
      * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
      */
     return shadow_min_acceptable_pages(d) +
-           max(max(d->tot_pages / 256,
+           max(max(domain_tot_pages(d) / 256,
                    is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
                is_hvm_domain(d),
                d->arch.paging.shadow.p2m_pages);
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index df97ce0c72..2fabaaa155 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -991,7 +991,7 @@ static int msix_capability_init(struct pci_dev *dev,
                        seg, bus, slot, func, d->domain_id);
             if ( !is_hardware_domain(d) &&
                  /* Assume a domain without memory has no mappings yet. */
-                 (!is_hardware_domain(currd) || d->tot_pages) )
+                 (!is_hardware_domain(currd) || domain_tot_pages(d)) )
                 domain_crash(d);
             /* XXX How to deal with existing mappings? */
         }
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 7e1f563012..7f0d27c153 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -419,7 +419,7 @@ static void dump_numa(unsigned char key)
     {
         process_pending_softirqs();
 
-        printk("Domain %u (total: %u):\n", d->domain_id, d->tot_pages);
+        printk("Domain %u (total: %u):\n", d->domain_id, domain_tot_pages(d));
 
         for_each_online_node ( i )
             page_num_node[i] = 0;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 9a97cf4abf..5678da782d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -110,8 +110,9 @@ static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
 
     while ( vphysmap_start < vphysmap_end )
     {
-        if ( d->tot_pages + ((round_pgup(vphysmap_end) - vphysmap_start)
-                             >> PAGE_SHIFT) + 3 > nr_pages )
+        if ( domain_tot_pages(d) +
+             ((round_pgup(vphysmap_end) - vphysmap_start) >> PAGE_SHIFT) +
+             3 > nr_pages )
             panic("Dom0 allocation too small for initial P->M table\n");
 
         if ( pl1e )
@@ -264,7 +265,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
     {
         struct page_info *pg2;
 
-        if ( d->tot_pages + (1 << order) > d->max_pages )
+        if ( domain_tot_pages(d) + (1 << order) > d->max_pages )
             continue;
         pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
         if ( pg2 > page )
@@ -500,13 +501,13 @@ int __init dom0_construct_pv(struct domain *d,
     if ( page == NULL )
         panic("Not enough RAM for domain 0 allocation\n");
     alloc_spfn = mfn_x(page_to_mfn(page));
-    alloc_epfn = alloc_spfn + d->tot_pages;
+    alloc_epfn = alloc_spfn + domain_tot_pages(d);
 
     if ( initrd_len )
     {
         initrd_pfn = vinitrd_start ?
                      (vinitrd_start - v_start) >> PAGE_SHIFT :
-                     d->tot_pages;
+                     domain_tot_pages(d);
         initrd_mfn = mfn = initrd->mod_start;
         count = PFN_UP(initrd_len);
         if ( d->arch.physaddr_bitsize &&
@@ -541,9 +542,9 @@ int __init dom0_construct_pv(struct domain *d,
     printk("PHYSICAL MEMORY ARRANGEMENT:\n"
            " Dom0 alloc.:   %"PRIpaddr"->%"PRIpaddr,
            pfn_to_paddr(alloc_spfn), pfn_to_paddr(alloc_epfn));
-    if ( d->tot_pages < nr_pages )
+    if ( domain_tot_pages(d) < nr_pages )
         printk(" (%lu pages to be allocated)",
-               nr_pages - d->tot_pages);
+               nr_pages - domain_tot_pages(d));
     if ( initrd )
     {
         mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
@@ -755,7 +756,7 @@ int __init dom0_construct_pv(struct domain *d,
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
              elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");
 
-    count = d->tot_pages;
+    count = domain_tot_pages(d);
 
     /* Set up the phys->machine table if not part of the initial mapping. */
     if ( parms.p2m_base != UNSET_ADDR )
@@ -786,7 +787,7 @@ int __init dom0_construct_pv(struct domain *d,
             process_pending_softirqs();
     }
     si->first_p2m_pfn = pfn;
-    si->nr_p2m_frames = d->tot_pages - count;
+    si->nr_p2m_frames = domain_tot_pages(d) - count;
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
@@ -804,15 +805,15 @@ int __init dom0_construct_pv(struct domain *d,
                 process_pending_softirqs();
         }
     }
-    BUG_ON(pfn != d->tot_pages);
+    BUG_ON(pfn != domain_tot_pages(d));
 #ifndef NDEBUG
     alloc_epfn += PFN_UP(initrd_len) + si->nr_p2m_frames;
 #endif
     while ( pfn < nr_pages )
     {
-        if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
+        if ( (page = alloc_chunk(d, nr_pages - domain_tot_pages(d))) == NULL )
             panic("Not enough RAM for DOM0 reservation\n");
-        while ( pfn < d->tot_pages )
+        while ( pfn < domain_tot_pages(d) )
         {
             mfn = mfn_x(page_to_mfn(page));
 #ifndef NDEBUG
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4da0b2afff..c95652d1b8 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -173,7 +173,7 @@ int switch_compat(struct domain *d)
 
     BUILD_BUG_ON(offsetof(struct shared_info, vcpu_info) != 0);
 
-    if ( is_hvm_domain(d) || d->tot_pages != 0 )
+    if ( is_hvm_domain(d) || domain_tot_pages(d) != 0 )
         return -EACCES;
     if ( is_pv_32bit_domain(d) )
         return 0;
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 7a898fdbe5..f6d8794c62 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -268,7 +268,7 @@ void __init pv_shim_setup_dom(struct domain *d, l4_pgentry_t *l4start,
      * Set the max pages to the current number of pages to prevent the
      * guest from depleting the shim memory pool.
      */
-    d->max_pages = d->tot_pages;
+    d->max_pages = domain_tot_pages(d);
 }
 
 static void write_start_info(struct domain *d)
@@ -280,7 +280,7 @@ static void write_start_info(struct domain *d)
 
     snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%s",
              is_pv_32bit_domain(d) ? "32p" : "64");
-    si->nr_pages = d->tot_pages;
+    si->nr_pages = domain_tot_pages(d);
     si->shared_info = virt_to_maddr(d->shared_info);
     si->flags = 0;
     BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN, &si->store_mfn));
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 8b819f56e5..bdc24bbd7c 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -191,7 +191,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
 
     xsm_security_domaininfo(d, info);
 
-    info->tot_pages         = d->tot_pages;
+    info->tot_pages         = domain_tot_pages(d);
     info->max_pages         = d->max_pages;
     info->outstanding_pages = d->outstanding_pages;
     info->shr_pages         = atomic_read(&d->shr_pages);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5536d282b9..8bee6b3b66 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2261,7 +2261,7 @@ gnttab_transfer(
          * pages when it is dying.
          */
         if ( unlikely(e->is_dying) ||
-             unlikely(e->tot_pages >= e->max_pages) )
+             unlikely(domain_tot_pages(e) >= e->max_pages) )
         {
             spin_unlock(&e->page_alloc_lock);
 
@@ -2271,7 +2271,7 @@ gnttab_transfer(
             else
                 gdprintk(XENLOG_INFO,
                          "Transferee d%d has no headroom (tot %u, max %u)\n",
-                         e->domain_id, e->tot_pages, e->max_pages);
+                         e->domain_id, domain_tot_pages(e), e->max_pages);
 
             gop.status = GNTST_general_error;
             goto unlock_and_copyback;
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index f50490d0f3..87bd145374 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -271,7 +271,7 @@ static void dump_domains(unsigned char key)
                atomic_read(&d->pause_count));
         printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
                "dirty_cpus={%*pbl} max_pages=%u\n",
-               d->tot_pages, d->xenheap_pages, atomic_read(&d->shr_pages),
+               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
                atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
                d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c7d2bac452..38cb5d0bb4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1267,7 +1267,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         switch ( op )
         {
         case XENMEM_current_reservation:
-            rc = d->tot_pages;
+            rc = domain_tot_pages(d);
             break;
         case XENMEM_maximum_reservation:
             rc = d->max_pages;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..bbd3163909 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -518,8 +518,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
         goto out;
     }
 
-    /* disallow a claim not exceeding current tot_pages or above max_pages */
-    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
+    /* disallow a claim not exceeding domain_tot_pages() or above max_pages */
+    if ( (pages <= domain_tot_pages(d)) || (pages > d->max_pages) )
     {
         ret = -EINVAL;
         goto out;
@@ -532,9 +532,9 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
 
     /*
      * Note, if domain has already allocated memory before making a claim
-     * then the claim must take tot_pages into account
+     * then the claim must take domain_tot_pages() into account
      */
-    claim = pages - d->tot_pages;
+    claim = pages - domain_tot_pages(d);
     if ( claim > avail_pages )
         goto out;
 
@@ -2269,11 +2269,12 @@ int assign_pages(
 
     if ( !(memflags & MEMF_no_refcount) )
     {
-        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
+        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+
+        if ( unlikely(tot_pages > d->max_pages) )
         {
             gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
-                    "%u > %u\n", d->domain_id,
-                    d->tot_pages + (1 << order), d->max_pages);
+                    "%u > %u\n", d->domain_id, tot_pages, d->max_pages);
             rc = -E2BIG;
             goto out;
         }
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index cfdda6e2a8..126d0ff06e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -553,8 +553,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
  *
  * Note that a valid claim may be staked even after memory has been
  * allocated for a domain.  In this case, the claim is not incremental,
- * i.e. if the domain's tot_pages is 3, and a claim is staked for 10,
- * only 7 additional pages are claimed.
+ * i.e. if the domain's total page count is 3, and a claim is staked
+ * for 10, only 7 additional pages are claimed.
  *
  * Caller must be privileged or the hypercall fails.
  */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7c5c437247..1b6d7b941f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -364,12 +364,18 @@ struct domain
     spinlock_t       page_alloc_lock; /* protects all the following fields  */
     struct page_list_head page_list;  /* linked list */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
-    unsigned int     tot_pages;       /* number of pages currently possesed */
-    unsigned int     xenheap_pages;   /* # pages allocated from Xen heap    */
-    unsigned int     outstanding_pages; /* pages claimed but not possessed  */
-    unsigned int     max_pages;       /* maximum value for tot_pages        */
-    atomic_t         shr_pages;       /* number of shared pages             */
-    atomic_t         paged_pages;     /* number of paged-out pages          */
+
+    /*
+     * This field should only be directly accessed by domain_adjust_tot_pages()
+     * and the domain_tot_pages() helper function defined below.
+     */
+    unsigned int     tot_pages;
+
+    unsigned int     xenheap_pages;     /* pages allocated from Xen heap */
+    unsigned int     outstanding_pages; /* pages claimed but not possessed */
+    unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
+    atomic_t         shr_pages;         /* shared pages */
+    atomic_t         paged_pages;       /* paged-out pages */
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */
@@ -539,6 +545,12 @@ struct domain
 #endif
 };
 
+/* Return number of pages currently posessed by the domain */
+static inline unsigned int domain_tot_pages(const struct domain *d)
+{
+    return d->tot_pages;
+}
+
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
 extern spinlock_t domlist_update_lock;
 extern rcu_read_lock_t domlist_read_lock;
-- 
2.20.1



* [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
  2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function Paul Durrant
@ 2020-02-03 10:56 ` Paul Durrant
  2020-02-06 10:04   ` Julien Grall
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
  2020-02-06  8:28 ` [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Durrant, Paul
  4 siblings, 1 reply; 16+ messages in thread
From: Paul Durrant @ 2020-02-03 10:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Paul Durrant, Ian Jackson,
	Volodymyr Babchuk, Roger Pau Monné

Currently it is unsafe to assign a domheap page allocated with
MEMF_no_refcount to a domain because the domain's 'tot_pages' will not
be incremented, but will be decremented when the page is freed (since
free_domheap_pages() has no way of telling that the increment was skipped).

This patch allocates a new 'count_info' bit for a PGC_extra flag, which
is then used to mark pages when alloc_domheap_pages() is called with
MEMF_no_refcount. assign_pages() still needs to call
domain_adjust_tot_pages() for such pages, to make sure the domain is
appropriately referenced, so it is modified to do that for PGC_extra
pages even if it is passed MEMF_no_refcount.

The number of PGC_extra pages assigned to a domain is tracked in a new
'extra_pages' counter, which is then subtracted from 'tot_pages' in
the domain_tot_pages() helper. Thus 'normal' page assignments will still
be appropriately checked against 'max_pages'.
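
To make the accounting concrete, here is a standalone model (not Xen
code; the field and function names mirror the patch, everything else is
assumed) showing that PGC_extra pages raise 'tot_pages' without
consuming 'max_pages' headroom:

    #include <assert.h>
    #include <stdio.h>

    struct dom {
        unsigned int tot_pages;   /* everything assigned, extra included */
        unsigned int extra_pages; /* pages carrying PGC_extra */
        unsigned int max_pages;   /* limit on domain_tot_pages() only */
    };

    static unsigned int domain_tot_pages(const struct dom *d)
    {
        assert(d->extra_pages <= d->tot_pages);
        return d->tot_pages - d->extra_pages;
    }

    static int assign_pages(struct dom *d, unsigned int nr, int extra)
    {
        if ( !extra && domain_tot_pages(d) + nr > d->max_pages )
            return -1;            /* over-allocation, as in the real check */
        d->tot_pages += nr;       /* stands in for domain_adjust_tot_pages() */
        if ( extra )
            d->extra_pages += nr;
        return 0;
    }

    int main(void)
    {
        struct dom d = { .max_pages = 4 };

        assert(assign_pages(&d, 4, 0) == 0);  /* normal pages fill max_pages */
        assert(assign_pages(&d, 1, 0) < 0);   /* further normal pages refused */
        assert(assign_pages(&d, 1, 1) == 0);  /* a PGC_extra page still fits */
        printf("tot=%u counted=%u\n", d.tot_pages, domain_tot_pages(&d));
        return 0;
    }

In the patch itself the per-page decision comes from the PGC_extra bit
set by alloc_domheap_pages(), and free_domheap_pages() decrements
'extra_pages' again when such a page is released.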

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v8:
 - Drop the idea of post-allocation assignment, adding an error path to
   steal_page() if it encounters a PGC_extra page
 - Tighten up the ASSERTs in assign_pages()

v7:
 - s/PGC_no_refcount/PGC_extra/g
 - Re-work allocation to account for 'extra' pages, also making it
   safe to assign PGC_extra pages post-allocation

v6:
 - Add an extra ASSERT into assign_pages() that PGC_no_refcount is not
   set if MEMF_no_refcount is clear
 - ASSERT that count_info is 0 in alloc_domheap_pages() and set to
   PGC_no_refcount rather than ORing

v5:
 - Make sure PGC_no_refcount is set before assign_pages() is called
 - Don't bother to clear PGC_no_refcount in free_domheap_pages() and
   drop ASSERT in free_heap_pages()
 - Don't latch count_info in free_heap_pages()

v4:
 - New in v4
---
 xen/arch/x86/mm.c        |  3 +-
 xen/common/page_alloc.c  | 63 +++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/mm.h |  5 +++-
 xen/include/asm-x86/mm.h |  7 +++--
 xen/include/xen/sched.h  |  5 +++-
 5 files changed, 64 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e1b041e2df..fd134edcde 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4217,7 +4217,8 @@ int steal_page(
     if ( !(owner = page_get_owner_and_reference(page)) )
         goto fail;
 
-    if ( owner != d || is_xen_heap_page(page) )
+    if ( owner != d || is_xen_heap_page(page) ||
+         (page->count_info & PGC_extra) )
         goto fail_put;
 
     /*
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index bbd3163909..1ac9d9c719 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2267,7 +2267,29 @@ int assign_pages(
         goto out;
     }
 
-    if ( !(memflags & MEMF_no_refcount) )
+#ifndef NDEBUG
+    {
+        unsigned int extra_pages = 0;
+
+        for ( i = 0; i < (1ul << order); i++ )
+        {
+            ASSERT(!(pg[i].count_info & ~PGC_extra));
+            if ( pg[i].count_info & PGC_extra )
+                extra_pages++;
+        }
+
+        ASSERT(!extra_pages ||
+               ((memflags & MEMF_no_refcount) &&
+                extra_pages == 1u << order));
+    }
+#endif
+
+    if ( pg[0].count_info & PGC_extra )
+    {
+        d->extra_pages += 1u << order;
+        memflags &= ~MEMF_no_refcount;
+    }
+    else if ( !(memflags & MEMF_no_refcount) )
     {
         unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
 
@@ -2278,18 +2300,19 @@ int assign_pages(
             rc = -E2BIG;
             goto out;
         }
+    }
 
-        if ( unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+    if ( !(memflags & MEMF_no_refcount) &&
+         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
             get_knownalive_domain(d);
-    }
 
     for ( i = 0; i < (1 << order); i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
-        ASSERT(!pg[i].count_info);
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
-        pg[i].count_info = PGC_allocated | 1;
+        pg[i].count_info =
+            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
         page_list_add_tail(&pg[i], &d->page_list);
     }
 
@@ -2315,11 +2338,6 @@ struct page_info *alloc_domheap_pages(
 
     if ( memflags & MEMF_no_owner )
         memflags |= MEMF_no_refcount;
-    else if ( (memflags & MEMF_no_refcount) && d )
-    {
-        ASSERT(!(memflags & MEMF_no_refcount));
-        return NULL;
-    }
 
     if ( !dma_bitsize )
         memflags &= ~MEMF_no_dma;
@@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
                                   memflags, d)) == NULL)) )
          return NULL;
 
-    if ( d && !(memflags & MEMF_no_owner) &&
-         assign_pages(d, pg, order, memflags) )
+    if ( d && !(memflags & MEMF_no_owner) )
     {
-        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
-        return NULL;
+        if ( memflags & MEMF_no_refcount )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1ul << order); i++ )
+            {
+                ASSERT(!pg[i].count_info);
+                pg[i].count_info = PGC_extra;
+            }
+        }
+        if ( assign_pages(d, pg, order, memflags) )
+        {
+            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
+            return NULL;
+        }
     }
 
     return pg;
@@ -2384,6 +2414,11 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
                     BUG();
                 }
                 arch_free_heap_page(d, &pg[i]);
+                if ( pg[i].count_info & PGC_extra )
+                {
+                    ASSERT(d->extra_pages);
+                    d->extra_pages--;
+                }
             }
 
             drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 333efd3a60..7df91280bc 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -119,9 +119,12 @@ struct page_info
 #define PGC_state_offlined PG_mask(2, 9)
 #define PGC_state_free    PG_mask(3, 9)
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_extra        PG_shift(10)
+#define PGC_extra         PG_mask(1, 10)
 
 /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
 
 /*
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..06d64d494d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -77,9 +77,12 @@
 #define PGC_state_offlined PG_mask(2, 9)
 #define PGC_state_free    PG_mask(3, 9)
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_extra        PG_shift(10)
+#define PGC_extra         PG_mask(1, 10)
 
- /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+/* Count of references to this frame. */
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
 
 /*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 1b6d7b941f..21b5f4cebd 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -374,6 +374,7 @@ struct domain
     unsigned int     xenheap_pages;     /* pages allocated from Xen heap */
     unsigned int     outstanding_pages; /* pages claimed but not possessed */
     unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
+    unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
     atomic_t         shr_pages;         /* shared pages */
     atomic_t         paged_pages;       /* paged-out pages */
 
@@ -548,7 +549,9 @@ struct domain
 /* Return number of pages currently posessed by the domain */
 static inline unsigned int domain_tot_pages(const struct domain *d)
 {
-    return d->tot_pages;
+    ASSERT(d->extra_pages <= d->tot_pages);
+
+    return d->tot_pages - d->extra_pages;
 }
 
 /* Protect updates/reads (resp.) of domain_list and domain_hash. */
-- 
2.20.1



* [Xen-devel] [PATCH v9 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE
  2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
                   ` (2 preceding siblings ...)
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign Paul Durrant
@ 2020-02-03 10:56 ` Paul Durrant
  2020-02-06  8:28 ` [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Durrant, Paul
  4 siblings, 0 replies; 16+ messages in thread
From: Paul Durrant @ 2020-02-03 10:56 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, Jun Nakajima,
	Roger Pau Monné

vmx_alloc_vlapic_mapping() currently contains some very odd-looking code
that allocates a MEMF_no_owner domheap page and then shares it with the guest
as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
to call a special function in the mm code: free_shared_domheap_page().

By using a MEMF_no_refcount domheap page instead, the odd-looking code in
vmx_alloc_vlapic_mapping() can simply use get_page_and_type() to set up a
writable mapping before insertion in the P2M and vmx_free_vlapic_mapping()
can simply release the page using put_page_alloc_ref() followed by
put_page_and_type(). This then allows free_shared_domheap_page() to be
purged.
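
The reference discipline can be shown with a standalone model (not Xen
code; the counts are plain integers and the helpers only mirror the real
calls named above):

    #include <assert.h>
    #include <stdio.h>

    struct fake_page {
        unsigned int count;      /* general refs, incl. the allocation one */
        unsigned int type_count; /* writable (PGT_writable_page) type refs */
    };

    static void alloc_and_assign(struct fake_page *pg)
    {
        pg->count = 1;            /* PGC_allocated ref from assign_pages() */
    }

    static void take_writable_ref(struct fake_page *pg)
    {
        pg->count++;              /* get_page_and_type(): general ref... */
        pg->type_count++;         /* ...plus a writable type ref */
    }

    static void drop_alloc_ref(struct fake_page *pg)
    {
        pg->count--;              /* put_page_alloc_ref() */
    }

    static void drop_writable_ref(struct fake_page *pg)
    {
        pg->type_count--;         /* put_page_and_type() */
        pg->count--;              /* reaching zero frees the page */
    }

    int main(void)
    {
        struct fake_page pg = { 0, 0 };

        alloc_and_assign(&pg);    /* vmx_alloc_vlapic_mapping() */
        take_writable_ref(&pg);
        drop_alloc_ref(&pg);      /* vmx_free_vlapic_mapping() */
        drop_writable_ref(&pg);
        assert(pg.count == 0 && pg.type_count == 0);
        printf("all references dropped\n");
        return 0;
    }

Note that the real vmx_free_vlapic_mapping() clears apic_access_mfn first
and only then drops both references.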

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Use a MEMF_no_refcount page rather than a 'normal' page

v2:
 - Set an initial value for max_pages rather than avoiding the check in
   assign_pages()
 - Make domain_destroy() optional
---
 xen/arch/x86/hvm/vmx/vmx.c | 21 ++++++++++++++++++---
 xen/arch/x86/mm.c          | 10 ----------
 xen/include/asm-x86/mm.h   |  2 --
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 606f3dc2eb..7423d2421b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3028,12 +3028,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, MEMF_no_refcount);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3047,7 +3057,12 @@ static void vmx_free_vlapic_mapping(struct domain *d)
 
     d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
     if ( !mfn_eq(mfn, _mfn(0)) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index fd134edcde..1e49bb0156 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 06d64d494d..fafb3af46d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -320,8 +320,6 @@ struct page_info
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;
-- 
2.20.1



* Re: [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function Paul Durrant
@ 2020-02-03 11:40   ` Jan Beulich
  2020-02-06  9:46   ` Julien Grall
  1 sibling, 0 replies; 16+ messages in thread
From: Jan Beulich @ 2020-02-03 11:40 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, xen-devel,
	Roger Pau Monné

On 03.02.2020 11:56, Paul Durrant wrote:
> This patch adds a new domain_tot_pages() inline helper function into
> sched.h, which will be needed by a subsequent patch.
> 
> No functional change.
> 
> NOTE: While modifying the comment for 'tot_pages' in sched.h this patch
>       makes some cosmetic fixes to surrounding comments.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
  2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
                   ` (3 preceding siblings ...)
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
@ 2020-02-06  8:28 ` Durrant, Paul
  2020-02-06  8:45   ` Jan Beulich
  4 siblings, 1 reply; 16+ messages in thread
From: Durrant, Paul @ 2020-02-06  8:28 UTC (permalink / raw)
  To: Durrant, Paul, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan,
	Jun Nakajima, Volodymyr Babchuk, Roger Pau Monné

AFAICT these patches have the necessary A-b/R-b-s, or are there some missing that I need to chase?

  Paul

> -----Original Message-----
> From: Paul Durrant <pdurrant@amazon.com>
> Sent: 03 February 2020 10:57
> To: xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian
> Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Jun
> Nakajima <jun.nakajima@intel.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Roger Pau Monné <roger.pau@citrix.com>; Stefano
> Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>; Volodymyr
> Babchuk <Volodymyr_Babchuk@epam.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH v9 0/4] purge free_shared_domheap_page()
> 
> Paul Durrant (4):
>   x86 / vmx: move teardown from domain_destroy()...
>   add a domain_tot_pages() helper function
>   mm: make pages allocated with MEMF_no_refcount safe to assign
>   x86 / vmx: use a MEMF_no_refcount domheap page for
>     APIC_DEFAULT_PHYS_BASE
> 
>  xen/arch/arm/arm64/domctl.c     |  2 +-
>  xen/arch/x86/domain.c           |  2 +-
>  xen/arch/x86/hvm/vmx/vmx.c      | 25 ++++++++---
>  xen/arch/x86/mm.c               | 15 ++-----
>  xen/arch/x86/mm/p2m-pod.c       | 10 ++---
>  xen/arch/x86/mm/shadow/common.c |  2 +-
>  xen/arch/x86/msi.c              |  2 +-
>  xen/arch/x86/numa.c             |  2 +-
>  xen/arch/x86/pv/dom0_build.c    | 25 ++++++-----
>  xen/arch/x86/pv/domain.c        |  2 +-
>  xen/arch/x86/pv/shim.c          |  4 +-
>  xen/common/domctl.c             |  2 +-
>  xen/common/grant_table.c        |  4 +-
>  xen/common/keyhandler.c         |  2 +-
>  xen/common/memory.c             |  2 +-
>  xen/common/page_alloc.c         | 78 ++++++++++++++++++++++++---------
>  xen/include/asm-arm/mm.h        |  5 ++-
>  xen/include/asm-x86/mm.h        |  9 ++--
>  xen/include/public/memory.h     |  4 +-
>  xen/include/xen/sched.h         | 27 +++++++++---
>  20 files changed, 143 insertions(+), 81 deletions(-)
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Tim Deegan <tim@xen.org>
> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Cc: Wei Liu <wl@xen.org>
> --
> 2.20.1


* Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
  2020-02-06  8:28 ` [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Durrant, Paul
@ 2020-02-06  8:45   ` Jan Beulich
  2020-02-06  9:17     ` Durrant, Paul
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2020-02-06  8:45 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, George Dunlap, Tim Deegan,
	Jun Nakajima, xen-devel, Volodymyr Babchuk, Roger Pau Monné

On 06.02.2020 09:28, Durrant, Paul wrote:
> AFAICT these patches have the necessary A-b/R-b-s, or are there some missing that I need to chase?

According to my records ...

>> -----Original Message-----
>> From: Paul Durrant <pdurrant@amazon.com>
>> Sent: 03 February 2020 10:57
>>
>> Paul Durrant (4):
>>   x86 / vmx: move teardown from domain_destroy()...
>>   add a domain_tot_pages() helper function
>>   mm: make pages allocated with MEMF_no_refcount safe to assign
>>   x86 / vmx: use a MEMF_no_refcount domheap page for
>>     APIC_DEFAULT_PHYS_BASE
>>
>>  xen/arch/arm/arm64/domctl.c     |  2 +-

... this (Arm), ...

>>  xen/arch/x86/domain.c           |  2 +-
>>  xen/arch/x86/hvm/vmx/vmx.c      | 25 ++++++++---

... this (VMX), ...

>>  xen/arch/x86/mm.c               | 15 ++-----
>>  xen/arch/x86/mm/p2m-pod.c       | 10 ++---

... this (MM), ...

>>  xen/arch/x86/mm/shadow/common.c |  2 +-

... this (shadow), ...

>>  xen/arch/x86/msi.c              |  2 +-
>>  xen/arch/x86/numa.c             |  2 +-
>>  xen/arch/x86/pv/dom0_build.c    | 25 ++++++-----
>>  xen/arch/x86/pv/domain.c        |  2 +-
>>  xen/arch/x86/pv/shim.c          |  4 +-
>>  xen/common/domctl.c             |  2 +-
>>  xen/common/grant_table.c        |  4 +-
>>  xen/common/keyhandler.c         |  2 +-
>>  xen/common/memory.c             |  2 +-
>>  xen/common/page_alloc.c         | 78 ++++++++++++++++++++++++---------
>>  xen/include/asm-arm/mm.h        |  5 ++-

... and this (Arm again). I think almost all are for patch 2, with
an Arm one needed on patch 3. If I overlooked any, please point me
at them.

Jan


* Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
  2020-02-06  8:45   ` Jan Beulich
@ 2020-02-06  9:17     ` Durrant, Paul
  2020-02-08  6:58       ` Tim Deegan
  2020-02-13 12:53       ` Durrant, Paul
  0 siblings, 2 replies; 16+ messages in thread
From: Durrant, Paul @ 2020-02-06  9:17 UTC (permalink / raw)
  To: Jan Beulich, George Dunlap, Julien Grall, Tim Deegan
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Ian Jackson, Jun Nakajima, xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 06 February 2020 08:46
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Wei Liu
> <wl@xen.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Andrew
> Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Tim
> Deegan <tim@xen.org>; Jun Nakajima <jun.nakajima@intel.com>; Volodymyr
> Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
> 
> On 06.02.2020 09:28, Durrant, Paul wrote:
> > AFAICT these patches have the necessary A-b/R-b-s, or are there some
> missing that I need to chase?
> 
> According to my records ...
> 
> >> -----Original Message-----
> >> From: Paul Durrant <pdurrant@amazon.com>
> >> Sent: 03 February 2020 10:57
> >>
> >> Paul Durrant (4):
> >>   x86 / vmx: move teardown from domain_destroy()...
> >>   add a domain_tot_pages() helper function
> >>   mm: make pages allocated with MEMF_no_refcount safe to assign
> >>   x86 / vmx: use a MEMF_no_refcount domheap page for
> >>     APIC_DEFAULT_PHYS_BASE
> >>
> >>  xen/arch/arm/arm64/domctl.c     |  2 +-
> 
> ... this (Arm), ...
> 
> >>  xen/arch/x86/domain.c           |  2 +-
> >>  xen/arch/x86/hvm/vmx/vmx.c      | 25 ++++++++---
> 
> ... this (VMX), ...
> 
> >>  xen/arch/x86/mm.c               | 15 ++-----
> >>  xen/arch/x86/mm/p2m-pod.c       | 10 ++---
> 
> ... this (MM), ...
> 
> >>  xen/arch/x86/mm/shadow/common.c |  2 +-
> 
> ... this (shadow), ...
> 
> >>  xen/arch/x86/msi.c              |  2 +-
> >>  xen/arch/x86/numa.c             |  2 +-
> >>  xen/arch/x86/pv/dom0_build.c    | 25 ++++++-----
> >>  xen/arch/x86/pv/domain.c        |  2 +-
> >>  xen/arch/x86/pv/shim.c          |  4 +-
> >>  xen/common/domctl.c             |  2 +-
> >>  xen/common/grant_table.c        |  4 +-
> >>  xen/common/keyhandler.c         |  2 +-
> >>  xen/common/memory.c             |  2 +-
> >>  xen/common/page_alloc.c         | 78 ++++++++++++++++++++++++---------
> >>  xen/include/asm-arm/mm.h        |  5 ++-
> 
> ... and this (Arm again). I think almost all are for patch 2, with
> an Arm one needed on patch 3. If I overlooked any, please point me
> at them.

Ok, thanks. Kevin has completed his acks (patches #1 and #4).

George, Julien, Tim,

  Can I have acks or otherwise, please?

  Paul

> 
> Jan

* Re: [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function Paul Durrant
  2020-02-03 11:40   ` Jan Beulich
@ 2020-02-06  9:46   ` Julien Grall
  1 sibling, 0 replies; 16+ messages in thread
From: Julien Grall @ 2020-02-06  9:46 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Jan Beulich, Roger Pau Monné

Hi,

On 03/02/2020 10:56, Paul Durrant wrote:
> This patch adds a new domain_tot_pages() inline helper function into
> sched.h, which will be needed by a subsequent patch.
> 
> No functional change.
> 
> NOTE: While modifying the comment for 'tot_pages' in sched.h this patch
>        makes some cosmetic fixes to surrounding comments.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Julien Grall <jgrall@xen.org>

Cheers,

> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> 
> v9:
>   - Fix missing changes in PV shim
>   - Dropped some comment changes
> 
> v8:
>   - New in v8
> ---
>   xen/arch/arm/arm64/domctl.c     |  2 +-
>   xen/arch/x86/domain.c           |  2 +-
>   xen/arch/x86/mm.c               |  2 +-
>   xen/arch/x86/mm/p2m-pod.c       | 10 +++++-----
>   xen/arch/x86/mm/shadow/common.c |  2 +-
>   xen/arch/x86/msi.c              |  2 +-
>   xen/arch/x86/numa.c             |  2 +-
>   xen/arch/x86/pv/dom0_build.c    | 25 +++++++++++++------------
>   xen/arch/x86/pv/domain.c        |  2 +-
>   xen/arch/x86/pv/shim.c          |  4 ++--
>   xen/common/domctl.c             |  2 +-
>   xen/common/grant_table.c        |  4 ++--
>   xen/common/keyhandler.c         |  2 +-
>   xen/common/memory.c             |  2 +-
>   xen/common/page_alloc.c         | 15 ++++++++-------
>   xen/include/public/memory.h     |  4 ++--
>   xen/include/xen/sched.h         | 24 ++++++++++++++++++------
>   17 files changed, 60 insertions(+), 46 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
> index ab8781fb91..0de89b42c4 100644
> --- a/xen/arch/arm/arm64/domctl.c
> +++ b/xen/arch/arm/arm64/domctl.c
> @@ -18,7 +18,7 @@ static long switch_mode(struct domain *d, enum domain_type type)
>   
>       if ( d == NULL )
>           return -EINVAL;
> -    if ( d->tot_pages != 0 )
> +    if ( domain_tot_pages(d) != 0 )
>           return -EBUSY;
>       if ( d->arch.type == type )
>           return 0;
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 28fefa1f81..643c23ffb0 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -218,7 +218,7 @@ void dump_pageframe_info(struct domain *d)
>   
>       printk("Memory pages belonging to domain %u:\n", d->domain_id);
>   
> -    if ( d->tot_pages >= 10 && d->is_dying < DOMDYING_dead )
> +    if ( domain_tot_pages(d) >= 10 && d->is_dying < DOMDYING_dead )
>       {
>           printk("    DomPage list too long to display\n");
>       }
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index f50c065af3..e1b041e2df 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4870,7 +4870,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>           else if ( rc >= 0 )
>           {
>               p2m = p2m_get_hostp2m(d);
> -            target.tot_pages       = d->tot_pages;
> +            target.tot_pages       = domain_tot_pages(d);
>               target.pod_cache_pages = p2m->pod.count;
>               target.pod_entries     = p2m->pod.entry_count;
>   
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index 096e2773fb..f2c9409568 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -302,7 +302,7 @@ out:
>    * The following equations should hold:
>    *  0 <= P <= T <= B <= M
>    *  d->arch.p2m->pod.entry_count == B - P
> - *  d->tot_pages == P + d->arch.p2m->pod.count
> + *  domain_tot_pages(d) == P + d->arch.p2m->pod.count
>    *
>    * Now we have the following potential cases to cover:
>    *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> @@ -336,7 +336,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
>       pod_lock(p2m);
>   
>       /* P == B: Nothing to do (unless the guest is being created). */
> -    populated = d->tot_pages - p2m->pod.count;
> +    populated = domain_tot_pages(d) - p2m->pod.count;
>       if ( populated > 0 && p2m->pod.entry_count == 0 )
>           goto out;
>   
> @@ -348,7 +348,7 @@ p2m_pod_set_mem_target(struct domain *d, unsigned long target)
>        * T' < B: Don't reduce the cache size; let the balloon driver
>        * take care of it.
>        */
> -    if ( target < d->tot_pages )
> +    if ( target < domain_tot_pages(d) )
>           goto out;
>   
>       pod_target = target - populated;
> @@ -1231,8 +1231,8 @@ out_of_memory:
>       pod_unlock(p2m);
>   
>       printk("%s: Dom%d out of PoD memory! (tot=%"PRIu32" ents=%ld dom%d)\n",
> -           __func__, d->domain_id, d->tot_pages, p2m->pod.entry_count,
> -           current->domain->domain_id);
> +           __func__, d->domain_id, domain_tot_pages(d),
> +           p2m->pod.entry_count, current->domain->domain_id);
>       domain_crash(d);
>       return false;
>   out_fail:
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 6212ec2c4a..cba3ab1eba 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -1256,7 +1256,7 @@ static unsigned int sh_min_allocation(const struct domain *d)
>        * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
>        */
>       return shadow_min_acceptable_pages(d) +
> -           max(max(d->tot_pages / 256,
> +           max(max(domain_tot_pages(d) / 256,
>                      is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
>                  is_hvm_domain(d),
>                  d->arch.paging.shadow.p2m_pages);
> diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
> index df97ce0c72..2fabaaa155 100644
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -991,7 +991,7 @@ static int msix_capability_init(struct pci_dev *dev,
>                          seg, bus, slot, func, d->domain_id);
>               if ( !is_hardware_domain(d) &&
>                    /* Assume a domain without memory has no mappings yet. */
> -                 (!is_hardware_domain(currd) || d->tot_pages) )
> +                 (!is_hardware_domain(currd) || domain_tot_pages(d)) )
>                   domain_crash(d);
>               /* XXX How to deal with existing mappings? */
>           }
> diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
> index 7e1f563012..7f0d27c153 100644
> --- a/xen/arch/x86/numa.c
> +++ b/xen/arch/x86/numa.c
> @@ -419,7 +419,7 @@ static void dump_numa(unsigned char key)
>       {
>           process_pending_softirqs();
>   
> -        printk("Domain %u (total: %u):\n", d->domain_id, d->tot_pages);
> +        printk("Domain %u (total: %u):\n", d->domain_id, domain_tot_pages(d));
>   
>           for_each_online_node ( i )
>               page_num_node[i] = 0;
> diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
> index 9a97cf4abf..5678da782d 100644
> --- a/xen/arch/x86/pv/dom0_build.c
> +++ b/xen/arch/x86/pv/dom0_build.c
> @@ -110,8 +110,9 @@ static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
>   
>       while ( vphysmap_start < vphysmap_end )
>       {
> -        if ( d->tot_pages + ((round_pgup(vphysmap_end) - vphysmap_start)
> -                             >> PAGE_SHIFT) + 3 > nr_pages )
> +        if ( domain_tot_pages(d) +
> +             ((round_pgup(vphysmap_end) - vphysmap_start) >> PAGE_SHIFT) +
> +             3 > nr_pages )
>               panic("Dom0 allocation too small for initial P->M table\n");
>   
>           if ( pl1e )
> @@ -264,7 +265,7 @@ static struct page_info * __init alloc_chunk(struct domain *d,
>       {
>           struct page_info *pg2;
>   
> -        if ( d->tot_pages + (1 << order) > d->max_pages )
> +        if ( domain_tot_pages(d) + (1 << order) > d->max_pages )
>               continue;
>           pg2 = alloc_domheap_pages(d, order, MEMF_exact_node | MEMF_no_scrub);
>           if ( pg2 > page )
> @@ -500,13 +501,13 @@ int __init dom0_construct_pv(struct domain *d,
>       if ( page == NULL )
>           panic("Not enough RAM for domain 0 allocation\n");
>       alloc_spfn = mfn_x(page_to_mfn(page));
> -    alloc_epfn = alloc_spfn + d->tot_pages;
> +    alloc_epfn = alloc_spfn + domain_tot_pages(d);
>   
>       if ( initrd_len )
>       {
>           initrd_pfn = vinitrd_start ?
>                        (vinitrd_start - v_start) >> PAGE_SHIFT :
> -                     d->tot_pages;
> +                     domain_tot_pages(d);
>           initrd_mfn = mfn = initrd->mod_start;
>           count = PFN_UP(initrd_len);
>           if ( d->arch.physaddr_bitsize &&
> @@ -541,9 +542,9 @@ int __init dom0_construct_pv(struct domain *d,
>       printk("PHYSICAL MEMORY ARRANGEMENT:\n"
>              " Dom0 alloc.:   %"PRIpaddr"->%"PRIpaddr,
>              pfn_to_paddr(alloc_spfn), pfn_to_paddr(alloc_epfn));
> -    if ( d->tot_pages < nr_pages )
> +    if ( domain_tot_pages(d) < nr_pages )
>           printk(" (%lu pages to be allocated)",
> -               nr_pages - d->tot_pages);
> +               nr_pages - domain_tot_pages(d));
>       if ( initrd )
>       {
>           mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT;
> @@ -755,7 +756,7 @@ int __init dom0_construct_pv(struct domain *d,
>       snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%d%s",
>                elf_64bit(&elf) ? 64 : 32, parms.pae ? "p" : "");
>   
> -    count = d->tot_pages;
> +    count = domain_tot_pages(d);
>   
>       /* Set up the phys->machine table if not part of the initial mapping. */
>       if ( parms.p2m_base != UNSET_ADDR )
> @@ -786,7 +787,7 @@ int __init dom0_construct_pv(struct domain *d,
>               process_pending_softirqs();
>       }
>       si->first_p2m_pfn = pfn;
> -    si->nr_p2m_frames = d->tot_pages - count;
> +    si->nr_p2m_frames = domain_tot_pages(d) - count;
>       page_list_for_each ( page, &d->page_list )
>       {
>           mfn = mfn_x(page_to_mfn(page));
> @@ -804,15 +805,15 @@ int __init dom0_construct_pv(struct domain *d,
>                   process_pending_softirqs();
>           }
>       }
> -    BUG_ON(pfn != d->tot_pages);
> +    BUG_ON(pfn != domain_tot_pages(d));
>   #ifndef NDEBUG
>       alloc_epfn += PFN_UP(initrd_len) + si->nr_p2m_frames;
>   #endif
>       while ( pfn < nr_pages )
>       {
> -        if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
> +        if ( (page = alloc_chunk(d, nr_pages - domain_tot_pages(d))) == NULL )
>               panic("Not enough RAM for DOM0 reservation\n");
> -        while ( pfn < d->tot_pages )
> +        while ( pfn < domain_tot_pages(d) )
>           {
>               mfn = mfn_x(page_to_mfn(page));
>   #ifndef NDEBUG
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index 4da0b2afff..c95652d1b8 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -173,7 +173,7 @@ int switch_compat(struct domain *d)
>   
>       BUILD_BUG_ON(offsetof(struct shared_info, vcpu_info) != 0);
>   
> -    if ( is_hvm_domain(d) || d->tot_pages != 0 )
> +    if ( is_hvm_domain(d) || domain_tot_pages(d) != 0 )
>           return -EACCES;
>       if ( is_pv_32bit_domain(d) )
>           return 0;
> diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
> index 7a898fdbe5..f6d8794c62 100644
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -268,7 +268,7 @@ void __init pv_shim_setup_dom(struct domain *d, l4_pgentry_t *l4start,
>        * Set the max pages to the current number of pages to prevent the
>        * guest from depleting the shim memory pool.
>        */
> -    d->max_pages = d->tot_pages;
> +    d->max_pages = domain_tot_pages(d);
>   }
>   
>   static void write_start_info(struct domain *d)
> @@ -280,7 +280,7 @@ static void write_start_info(struct domain *d)
>   
>       snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%s",
>                is_pv_32bit_domain(d) ? "32p" : "64");
> -    si->nr_pages = d->tot_pages;
> +    si->nr_pages = domain_tot_pages(d);
>       si->shared_info = virt_to_maddr(d->shared_info);
>       si->flags = 0;
>       BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN, &si->store_mfn));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 8b819f56e5..bdc24bbd7c 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -191,7 +191,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
>   
>       xsm_security_domaininfo(d, info);
>   
> -    info->tot_pages         = d->tot_pages;
> +    info->tot_pages         = domain_tot_pages(d);
>       info->max_pages         = d->max_pages;
>       info->outstanding_pages = d->outstanding_pages;
>       info->shr_pages         = atomic_read(&d->shr_pages);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 5536d282b9..8bee6b3b66 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -2261,7 +2261,7 @@ gnttab_transfer(
>            * pages when it is dying.
>            */
>           if ( unlikely(e->is_dying) ||
> -             unlikely(e->tot_pages >= e->max_pages) )
> +             unlikely(domain_tot_pages(e) >= e->max_pages) )
>           {
>               spin_unlock(&e->page_alloc_lock);
>   
> @@ -2271,7 +2271,7 @@ gnttab_transfer(
>               else
>                   gdprintk(XENLOG_INFO,
>                            "Transferee d%d has no headroom (tot %u, max %u)\n",
> -                         e->domain_id, e->tot_pages, e->max_pages);
> +                         e->domain_id, domain_tot_pages(e), e->max_pages);
>   
>               gop.status = GNTST_general_error;
>               goto unlock_and_copyback;
> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
> index f50490d0f3..87bd145374 100644
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -271,7 +271,7 @@ static void dump_domains(unsigned char key)
>                  atomic_read(&d->pause_count));
>           printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
>                  "dirty_cpus={%*pbl} max_pages=%u\n",
> -               d->tot_pages, d->xenheap_pages, atomic_read(&d->shr_pages),
> +               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
>                  atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
>                  d->max_pages);
>           printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index c7d2bac452..38cb5d0bb4 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1267,7 +1267,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>           switch ( op )
>           {
>           case XENMEM_current_reservation:
> -            rc = d->tot_pages;
> +            rc = domain_tot_pages(d);
>               break;
>           case XENMEM_maximum_reservation:
>               rc = d->max_pages;
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 919a270587..bbd3163909 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -518,8 +518,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>           goto out;
>       }
>   
> -    /* disallow a claim not exceeding current tot_pages or above max_pages */
> -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
> +    /* disallow a claim not exceeding domain_tot_pages() or above max_pages */
> +    if ( (pages <= domain_tot_pages(d)) || (pages > d->max_pages) )
>       {
>           ret = -EINVAL;
>           goto out;
> @@ -532,9 +532,9 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>   
>       /*
>        * Note, if domain has already allocated memory before making a claim
> -     * then the claim must take tot_pages into account
> +     * then the claim must take domain_tot_pages() into account
>        */
> -    claim = pages - d->tot_pages;
> +    claim = pages - domain_tot_pages(d);
>       if ( claim > avail_pages )
>           goto out;
>   
> @@ -2269,11 +2269,12 @@ int assign_pages(
>   
>       if ( !(memflags & MEMF_no_refcount) )
>       {
> -        if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
> +        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
> +
> +        if ( unlikely(tot_pages > d->max_pages) )
>           {
>               gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
> -                    "%u > %u\n", d->domain_id,
> -                    d->tot_pages + (1 << order), d->max_pages);
> +                    "%u > %u\n", d->domain_id, tot_pages, d->max_pages);
>               rc = -E2BIG;
>               goto out;
>           }
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index cfdda6e2a8..126d0ff06e 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -553,8 +553,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_sharing_op_t);
>    *
>    * Note that a valid claim may be staked even after memory has been
>    * allocated for a domain.  In this case, the claim is not incremental,
> - * i.e. if the domain's tot_pages is 3, and a claim is staked for 10,
> - * only 7 additional pages are claimed.
> + * i.e. if the domain's total page count is 3, and a claim is staked
> + * for 10, only 7 additional pages are claimed.
>    *
>    * Caller must be privileged or the hypercall fails.
>    */
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 7c5c437247..1b6d7b941f 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -364,12 +364,18 @@ struct domain
>       spinlock_t       page_alloc_lock; /* protects all the following fields  */
>       struct page_list_head page_list;  /* linked list */
>       struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
> -    unsigned int     tot_pages;       /* number of pages currently possesed */
> -    unsigned int     xenheap_pages;   /* # pages allocated from Xen heap    */
> -    unsigned int     outstanding_pages; /* pages claimed but not possessed  */
> -    unsigned int     max_pages;       /* maximum value for tot_pages        */
> -    atomic_t         shr_pages;       /* number of shared pages             */
> -    atomic_t         paged_pages;     /* number of paged-out pages          */
> +
> +    /*
> +     * This field should only be directly accessed by domain_adjust_tot_pages()
> +     * and the domain_tot_pages() helper function defined below.
> +     */
> +    unsigned int     tot_pages;
> +
> +    unsigned int     xenheap_pages;     /* pages allocated from Xen heap */
> +    unsigned int     outstanding_pages; /* pages claimed but not possessed */
> +    unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
> +    atomic_t         shr_pages;         /* shared pages */
> +    atomic_t         paged_pages;       /* paged-out pages */
>   
>       /* Scheduling. */
>       void            *sched_priv;    /* scheduler-specific data */
> @@ -539,6 +545,12 @@ struct domain
>   #endif
>   };
>   
> +/* Return number of pages currently possessed by the domain */
> +static inline unsigned int domain_tot_pages(const struct domain *d)
> +{
> +    return d->tot_pages;
> +}
> +
>   /* Protect updates/reads (resp.) of domain_list and domain_hash. */
>   extern spinlock_t domlist_update_lock;
>   extern rcu_read_lock_t domlist_read_lock;
> 

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
  2020-02-03 10:56 ` [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign Paul Durrant
@ 2020-02-06 10:04   ` Julien Grall
  2020-02-06 10:12     ` Durrant, Paul
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2020-02-06 10:04 UTC (permalink / raw)
  To: Paul Durrant, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

Hi,

I am sorry to jump in so late in the conversation.

On 03/02/2020 10:56, Paul Durrant wrote:
>   
> -        if ( unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> +    if ( !(memflags & MEMF_no_refcount) &&
> +         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
>               get_knownalive_domain(d);
> -    }
>   
>       for ( i = 0; i < (1 << order); i++ )
>       {
>           ASSERT(page_get_owner(&pg[i]) == NULL);
> -        ASSERT(!pg[i].count_info);
>           page_set_owner(&pg[i], d);
>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> -        pg[i].count_info = PGC_allocated | 1;
> +        pg[i].count_info =
> +            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;

This is technically incorrect because we blindly assume the state of the 
page is inuse (which is thankfully equal to 0).
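
Purely to illustrate the point, here is a minimal sketch (reusing the
existing PGC_state mask; this is illustration only, not the actual fix
being discussed in [1]):

    /* Sketch: keep whatever PGC_state_* the page is in, don't assume inuse. */
    pg[i].count_info = (pg[i].count_info & (PGC_extra | PGC_state)) |
                       PGC_allocated | 1;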

See the discussion [1]. This is already an existing bug in the code base 
and I will be taking care of it. However...

>           page_list_add_tail(&pg[i], &d->page_list);
>       }
>   
> @@ -2315,11 +2338,6 @@ struct page_info *alloc_domheap_pages(
>   
>       if ( memflags & MEMF_no_owner )
>           memflags |= MEMF_no_refcount;
> -    else if ( (memflags & MEMF_no_refcount) && d )
> -    {
> -        ASSERT(!(memflags & MEMF_no_refcount));
> -        return NULL;
> -    }
>   
>       if ( !dma_bitsize )
>           memflags &= ~MEMF_no_dma;
> @@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
>                                     memflags, d)) == NULL)) )
>            return NULL;
>   
> -    if ( d && !(memflags & MEMF_no_owner) &&
> -         assign_pages(d, pg, order, memflags) )
> +    if ( d && !(memflags & MEMF_no_owner) )
>       {
> -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> -        return NULL;
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;
> +
> +            for ( i = 0; i < (1ul << order); i++ )
> +            {
> +                ASSERT(!pg[i].count_info);
> +                pg[i].count_info = PGC_extra;

... this perpetuates the wrongness of the code above and is not safe
against offlining.
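
To make that concrete, a purely hypothetical variant which would at
least not discard any PGC_state_* bits already set on the page:

    /* Hypothetical sketch: add the flag without clobbering the state bits. */
    ASSERT(!(pg[i].count_info & ~PGC_state));
    pg[i].count_info |= PGC_extra;

This is not a request to restructure the series, just an illustration
of how it interacts with offlining.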

We could argue this is an already existing bug; however, I am a bit
uneasy about adding more abuse to the code. Jan, what do you think?

> +            }
> +        }
> +        if ( assign_pages(d, pg, order, memflags) )
> +        {
> +            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> +            return NULL;
> +        }
>       }

Cheers,

[1] https://lore.kernel.org/xen-devel/20200204133357.32101-1-julien@xen.org/

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
  2020-02-06 10:04   ` Julien Grall
@ 2020-02-06 10:12     ` Durrant, Paul
  2020-02-06 11:43       ` Jan Beulich
  0 siblings, 1 reply; 16+ messages in thread
From: Durrant, Paul @ 2020-02-06 10:12 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich,
	Volodymyr Babchuk, Roger Pau Monné

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 06 February 2020 10:04
> To: Durrant, Paul <pdurrant@amazon.co.uk>; xen-devel@lists.xenproject.org
> Cc: Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>;
> Ian Jackson <ian.jackson@eu.citrix.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> Liu <wl@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger
> Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount
> safe to assign
> 
> Hi,
> 
> I am sorry to jump in so late in the conversation.
> 
> On 03/02/2020 10:56, Paul Durrant wrote:
> >
> > -        if ( unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 <<
> order)) )
> > +    if ( !(memflags & MEMF_no_refcount) &&
> > +         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 <<
> order)) )
> >               get_knownalive_domain(d);
> > -    }
> >
> >       for ( i = 0; i < (1 << order); i++ )
> >       {
> >           ASSERT(page_get_owner(&pg[i]) == NULL);
> > -        ASSERT(!pg[i].count_info);
> >           page_set_owner(&pg[i], d);
> >           smp_wmb(); /* Domain pointer must be visible before updating
> refcnt. */
> > -        pg[i].count_info = PGC_allocated | 1;
> > +        pg[i].count_info =
> > +            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> 
> This is technically incorrect because we blindly assume the state of the
> page is inuse (which is thankfully equal to 0).

Assuming the page is inuse seems reasonable at this point.
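
If we ever want to make that assumption explicit rather than implicit,
a one-line sketch (using the existing page_state_is() helper) would do,
though that is probably better left to your series:

    /* Sketch: assert the assumption instead of relying on it silently. */
    ASSERT(page_state_is(&pg[i], inuse));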

> 
> See the discussion [1]. This is already an existing bug in the code base
> and I will be taking care of it.

Fair enough; it's a very long-standing bug.

> However...
> 
> >           page_list_add_tail(&pg[i], &d->page_list);
> >       }
> >
> > @@ -2315,11 +2338,6 @@ struct page_info *alloc_domheap_pages(
> >
> >       if ( memflags & MEMF_no_owner )
> >           memflags |= MEMF_no_refcount;
> > -    else if ( (memflags & MEMF_no_refcount) && d )
> > -    {
> > -        ASSERT(!(memflags & MEMF_no_refcount));
> > -        return NULL;
> > -    }
> >
> >       if ( !dma_bitsize )
> >           memflags &= ~MEMF_no_dma;
> > @@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
> >                                     memflags, d)) == NULL)) )
> >            return NULL;
> >
> > -    if ( d && !(memflags & MEMF_no_owner) &&
> > -         assign_pages(d, pg, order, memflags) )
> > +    if ( d && !(memflags & MEMF_no_owner) )
> >       {
> > -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> > -        return NULL;
> > +        if ( memflags & MEMF_no_refcount )
> > +        {
> > +            unsigned long i;
> > +
> > +            for ( i = 0; i < (1ul << order); i++ )
> > +            {
> > +                ASSERT(!pg[i].count_info);
> > +                pg[i].count_info = PGC_extra;
> 
> ... this perpetuates the wrongness of the code above and is not safe
> against offlining.
> 
> We could argue this is an already existing bug; however, I am a bit
> uneasy about adding more abuse to the code. Jan, what do you think?
> 

I'd consider this a straightforward patch clash. If this patch goes in after yours then it needs to be modified accordingly, or vice versa.

  Paul

> > +            }
> > +        }
> > +        if ( assign_pages(d, pg, order, memflags) )
> > +        {
> > +            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> > +            return NULL;
> > +        }
> >       }
> 
> Cheers,
> 
> [1] https://lore.kernel.org/xen-devel/20200204133357.32101-1-
> julien@xen.org/
> 
> --
> Julien Grall

* Re: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
  2020-02-06 10:12     ` Durrant, Paul
@ 2020-02-06 11:43       ` Jan Beulich
  2020-02-06 14:30         ` Julien Grall
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2020-02-06 11:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Durrant, Paul,
	xen-devel, Volodymyr Babchuk, Roger Pau Monné

On 06.02.2020 11:12, Durrant, Paul wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: 06 February 2020 10:04
>>
>> On 03/02/2020 10:56, Paul Durrant wrote:
>>> @@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
>>>                                     memflags, d)) == NULL)) )
>>>            return NULL;
>>>
>>> -    if ( d && !(memflags & MEMF_no_owner) &&
>>> -         assign_pages(d, pg, order, memflags) )
>>> +    if ( d && !(memflags & MEMF_no_owner) )
>>>       {
>>> -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
>>> -        return NULL;
>>> +        if ( memflags & MEMF_no_refcount )
>>> +        {
>>> +            unsigned long i;
>>> +
>>> +            for ( i = 0; i < (1ul << order); i++ )
>>> +            {
>>> +                ASSERT(!pg[i].count_info);
>>> +                pg[i].count_info = PGC_extra;
>>
>> ... this perpetuates the wrongness of the code above and is not safe
>> against offlining.
>>
>> We could argue this is an already existing bug; however, I am a bit
>> uneasy about adding more abuse to the code. Jan, what do you think?
>>
> 
> I'd consider this a straightforward patch clash. If this patch goes in
> after yours then it needs to be modified accordingly, or vice versa.

While generally I advocate for not widening existing issues, I agree
with Paul here. His patch should not be penalized by us _later_
having found an issue (which is quite a bit wider).

Jan


* Re: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign
  2020-02-06 11:43       ` Jan Beulich
@ 2020-02-06 14:30         ` Julien Grall
  0 siblings, 0 replies; 16+ messages in thread
From: Julien Grall @ 2020-02-06 14:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Durrant, Paul,
	xen-devel, Volodymyr Babchuk, Roger Pau Monné



On 06/02/2020 11:43, Jan Beulich wrote:
> On 06.02.2020 11:12, Durrant, Paul wrote:
>>> From: Julien Grall <julien@xen.org>
>>> Sent: 06 February 2020 10:04
>>>
>>> On 03/02/2020 10:56, Paul Durrant wrote:
>>>> @@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
>>>>                                      memflags, d)) == NULL)) )
>>>>             return NULL;
>>>>
>>>> -    if ( d && !(memflags & MEMF_no_owner) &&
>>>> -         assign_pages(d, pg, order, memflags) )
>>>> +    if ( d && !(memflags & MEMF_no_owner) )
>>>>        {
>>>> -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
>>>> -        return NULL;
>>>> +        if ( memflags & MEMF_no_refcount )
>>>> +        {
>>>> +            unsigned long i;
>>>> +
>>>> +            for ( i = 0; i < (1ul << order); i++ )
>>>> +            {
>>>> +                ASSERT(!pg[i].count_info);
>>>> +                pg[i].count_info = PGC_extra;
>>>
>>> ... this perpetuates the wrongness of the code above and is not safe
>>> against offlining.
>>>
>>> We could argue this is an already existing bug; however, I am a bit
>>> uneasy about adding more abuse to the code. Jan, what do you think?
>>>
>>
>> I'd consider this a straightforward patch clash. If this patch goes in
>> after yours then it needs to be modified accordingly, or vice versa.
> 
> While generally I advocate for not widening existing issues, I agree
> with Paul here. His patch should not be penalized by us _later_
> having found an issue (which is quite a bit wider).

Fair enough. For the Arm bits:

Acked-by: Julien Grall <julien@xen.org>

Cheers,

-- 
Julien Grall


* Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
  2020-02-06  9:17     ` Durrant, Paul
@ 2020-02-08  6:58       ` Tim Deegan
  2020-02-13 12:53       ` Durrant, Paul
  1 sibling, 0 replies; 16+ messages in thread
From: Tim Deegan @ 2020-02-08  6:58 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Stefano Stabellini, Julien Grall, Jun Nakajima, Wei Liu,
	Konrad Rzeszutek Wilk, Andrew Cooper, Ian Jackson, George Dunlap,
	Jan Beulich, xen-devel, Volodymyr Babchuk, Roger Pau Monné

At 09:17 +0000 on 06 Feb (1580980664), Durrant, Paul wrote:
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > On 06.02.2020 09:28, Durrant, Paul wrote:
> > >>  xen/arch/x86/mm/shadow/common.c |  2 +-

> George, Julien, Tim,
> 
>   Can I have acks or otherwise, please?

Acked-by: Tim Deegan <tim@xen.org>

Cheers,

Tim.


* Re: [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page()
  2020-02-06  9:17     ` Durrant, Paul
  2020-02-08  6:58       ` Tim Deegan
@ 2020-02-13 12:53       ` Durrant, Paul
  1 sibling, 0 replies; 16+ messages in thread
From: Durrant, Paul @ 2020-02-13 12:53 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel

> -----Original Message-----
[snip]
> 
> Ok, thanks. Kevin has completed his acks (patches #1 and #4).
> 
> George, Julien, Tim,
> 
>   Can I have acks or otherwise, please?
> 

I have acks from Julien and Tim. George, can you ack or otherwise, please?

  Paul 

Thread overview: 16+ messages
2020-02-03 10:56 [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Paul Durrant
2020-02-03 10:56 ` [Xen-devel] [PATCH v9 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
2020-02-03 10:56 ` [Xen-devel] [PATCH v9 2/4] add a domain_tot_pages() helper function Paul Durrant
2020-02-03 11:40   ` Jan Beulich
2020-02-06  9:46   ` Julien Grall
2020-02-03 10:56 ` [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign Paul Durrant
2020-02-06 10:04   ` Julien Grall
2020-02-06 10:12     ` Durrant, Paul
2020-02-06 11:43       ` Jan Beulich
2020-02-06 14:30         ` Julien Grall
2020-02-03 10:56 ` [Xen-devel] [PATCH v9 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
2020-02-06  8:28 ` [Xen-devel] [PATCH v9 0/4] purge free_shared_domheap_page() Durrant, Paul
2020-02-06  8:45   ` Jan Beulich
2020-02-06  9:17     ` Durrant, Paul
2020-02-08  6:58       ` Tim Deegan
2020-02-13 12:53       ` Durrant, Paul
