xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page()
@ 2020-01-29 10:16 Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Paul Durrant @ 2020-01-29 10:16 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant

Paul Durrant (4):
  x86 / vmx: move teardown from domain_destroy()...
  mm: modify domain_adjust_tot_pages() to better handle a zero
    adjustment
  mm: make MEMF_no_refcount pages safe to assign
  x86 / vmx: use a MEMF_no_refcount domheap page for
    APIC_DEFAULT_PHYS_BASE

 xen/arch/x86/hvm/vmx/vmx.c | 25 ++++++++++++++++++++-----
 xen/arch/x86/mm.c          | 10 ----------
 xen/common/page_alloc.c    | 38 ++++++++++++++++++++++++++------------
 xen/include/asm-arm/mm.h   |  5 ++++-
 xen/include/asm-x86/mm.h   |  9 +++++----
 5 files changed, 55 insertions(+), 32 deletions(-)

-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [Xen-devel] [PATCH v5 1/4] x86 / vmx: move teardown from domain_destroy()...
  2020-01-29 10:16 [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page() Paul Durrant
@ 2020-01-29 10:16 ` Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment Paul Durrant
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Paul Durrant @ 2020-01-29 10:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Jun Nakajima, Wei Liu, Andrew Cooper, Paul Durrant,
	George Dunlap, Roger Pau Monné

... to domain_relinquish_resources().

The teardown code frees the APICv page. This does not need to be done late,
so do it in domain_relinquish_resources() rather than domain_destroy().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

v4:
  - New in v4 (disaggregated from v3 patch #3)
---
 xen/arch/x86/hvm/vmx/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b262d38a7c..606f3dc2eb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -419,7 +419,7 @@ static int vmx_domain_initialise(struct domain *d)
     return 0;
 }
 
-static void vmx_domain_destroy(struct domain *d)
+static void vmx_domain_relinquish_resources(struct domain *d)
 {
     if ( !has_vlapic(d) )
         return;
@@ -2240,7 +2240,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .cpu_up_prepare       = vmx_cpu_up_prepare,
     .cpu_dead             = vmx_cpu_dead,
     .domain_initialise    = vmx_domain_initialise,
-    .domain_destroy       = vmx_domain_destroy,
+    .domain_relinquish_resources = vmx_domain_relinquish_resources,
     .vcpu_initialise      = vmx_vcpu_initialise,
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
-- 
2.20.1



* [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment
  2020-01-29 10:16 [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page() Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
@ 2020-01-29 10:16 ` Paul Durrant
  2020-01-29 11:12   ` Jan Beulich
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
  3 siblings, 1 reply; 8+ messages in thread
From: Paul Durrant @ 2020-01-29 10:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Paul Durrant, Ian Jackson

Currently the function will pointlessly acquire and release the global
'heap_lock' in this case.

NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
      argument, but a subsequent patch will make this possible.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>

v5:
 - Split out from the subsequent 'make MEMF_no_refcount pages safe to
   assign' patch as requested by Jan
---
 xen/common/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..135e15bae0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -460,6 +460,9 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
+    if ( !pages )
+        goto out;
+
     ASSERT(spin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
-- 
2.20.1



* [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign
  2020-01-29 10:16 [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page() Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment Paul Durrant
@ 2020-01-29 10:16 ` Paul Durrant
  2020-01-29 11:20   ` Jan Beulich
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
  3 siblings, 1 reply; 8+ messages in thread
From: Paul Durrant @ 2020-01-29 10:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Paul Durrant, Ian Jackson,
	Volodymyr Babchuk, Roger Pau Monné

Currently it is unsafe to assign a domheap page allocated with
MEMF_no_refcount to a domain because the domain's 'tot_pages' will not
be incremented, but will be decremented when the page is freed (since
free_domheap_pages() has no way of telling that the increment was skipped).

This patch allocates a new 'count_info' bit for a PGC_no_refcount flag
which is then used to mark domheap pages allocated with MEMF_no_refcount.
This then allows free_domheap_pages() to skip decrementing tot_pages when
appropriate and hence makes the pages safe to assign.

NOTE: The patch sets MEMF_no_refcount directly in alloc_domheap_pages()
      rather than in assign_pages() because the latter is called with
      MEMF_no_refcount by memory_exchange().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v5:
 - Make sure PGC_no_refcount is set before assign_pages() is called
 - Don't bother to clear PGC_no_refcount in free_domheap_pages() and
   drop ASSERT in free_heap_pages()
 - Don't latch count_info in free_heap_pages()

v4:
 - New in v4
---
 xen/common/page_alloc.c  | 35 +++++++++++++++++++++++------------
 xen/include/asm-arm/mm.h |  5 ++++-
 xen/include/asm-x86/mm.h |  7 +++++--
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 135e15bae0..984296c687 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2287,11 +2287,14 @@ int assign_pages(
 
     for ( i = 0; i < (1 << order); i++ )
     {
+        unsigned long count_info = pg[i].count_info;
+
         ASSERT(page_get_owner(&pg[i]) == NULL);
-        ASSERT(!pg[i].count_info);
+        ASSERT(!(count_info & ~PGC_no_refcount));
         page_set_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
-        pg[i].count_info = PGC_allocated | 1;
+        count_info &= PGC_no_refcount;
+        pg[i].count_info = count_info | PGC_allocated | 1;
         page_list_add_tail(&pg[i], &d->page_list);
     }
 
@@ -2317,11 +2320,6 @@ struct page_info *alloc_domheap_pages(
 
     if ( memflags & MEMF_no_owner )
         memflags |= MEMF_no_refcount;
-    else if ( (memflags & MEMF_no_refcount) && d )
-    {
-        ASSERT(!(memflags & MEMF_no_refcount));
-        return NULL;
-    }
 
     if ( !dma_bitsize )
         memflags &= ~MEMF_no_dma;
@@ -2334,11 +2332,20 @@ struct page_info *alloc_domheap_pages(
                                   memflags, d)) == NULL)) )
          return NULL;
 
-    if ( d && !(memflags & MEMF_no_owner) &&
-         assign_pages(d, pg, order, memflags) )
+    if ( d && !(memflags & MEMF_no_owner) )
     {
-        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
-        return NULL;
+        if ( memflags & MEMF_no_refcount )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1 << order); i++ )
+                pg[i].count_info |= PGC_no_refcount;
+        }
+        if ( assign_pages(d, pg, order, memflags) )
+        {
+            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
+            return NULL;
+        }
     }
 
     return pg;
@@ -2371,6 +2378,8 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
 
         if ( likely(d) && likely(d != dom_cow) )
         {
+            long pages = 0;
+
             /* NB. May recursively lock from relinquish_memory(). */
             spin_lock_recursive(&d->page_alloc_lock);
 
@@ -2386,9 +2395,11 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
                     BUG();
                 }
                 arch_free_heap_page(d, &pg[i]);
+                if ( !(pg[i].count_info & PGC_no_refcount) )
+                    pages--;
             }
 
-            drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
+            drop_dom_ref = !domain_adjust_tot_pages(d, pages);
 
             spin_unlock_recursive(&d->page_alloc_lock);
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 333efd3a60..1076cc9713 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -119,9 +119,12 @@ struct page_info
 #define PGC_state_offlined PG_mask(2, 9)
 #define PGC_state_free    PG_mask(3, 9)
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_no_refcount  PG_shift(10)
+#define PGC_no_refcount   PG_mask(1, 10)
 
 /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
 
 /*
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..e75feea15e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -77,9 +77,12 @@
 #define PGC_state_offlined PG_mask(2, 9)
 #define PGC_state_free    PG_mask(3, 9)
 #define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_no_refcount  PG_shift(10)
+#define PGC_no_refcount   PG_mask(1, 10)
 
- /* Count of references to this frame. */
-#define PGC_count_width   PG_shift(9)
+/* Count of references to this frame. */
+#define PGC_count_width   PG_shift(10)
 #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
 
 /*
-- 
2.20.1



* [Xen-devel] [PATCH v5 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE
  2020-01-29 10:16 [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page() Paul Durrant
                   ` (2 preceding siblings ...)
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign Paul Durrant
@ 2020-01-29 10:16 ` Paul Durrant
  3 siblings, 0 replies; 8+ messages in thread
From: Paul Durrant @ 2020-01-29 10:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, Jun Nakajima,
	Roger Pau Monné

vmx_alloc_vlapic_mapping() currently contains some very odd-looking code
that allocates a MEMF_no_owner domheap page and then shares it with the
guest as if it were a xenheap page. This then requires
vmx_free_vlapic_mapping() to call a special function in the mm code:
free_shared_domheap_page().

By using a MEMF_no_refcount domheap page instead, the odd looking code in
vmx_alloc_vlapic_mapping() can simply use get_page_and_type() to set up a
writable mapping before insertion in the P2M and vmx_free_vlapic_mapping()
can simply release the page using put_page_alloc_ref() followed by
put_page_and_type(). This then allows free_shared_domheap_page() to be
purged.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Use a MEMF_no_refcount page rather than a 'normal' page

v2:
 - Set an initial value for max_pages rather than avoiding the check in
   assign_pages()
 - Make domain_destroy() optional
---
 xen/arch/x86/hvm/vmx/vmx.c | 21 ++++++++++++++++++---
 xen/arch/x86/mm.c          | 10 ----------
 xen/include/asm-x86/mm.h   |  2 --
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 606f3dc2eb..7423d2421b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3028,12 +3028,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     if ( !cpu_has_vmx_virtualize_apic_accesses )
         return 0;
 
-    pg = alloc_domheap_page(d, MEMF_no_owner);
+    pg = alloc_domheap_page(d, MEMF_no_refcount);
     if ( !pg )
         return -ENOMEM;
+
+    if ( !get_page_and_type(pg, d, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(d);
+        return -ENODATA;
+    }
+
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
-    share_xen_page_with_guest(pg, d, SHARE_rw);
     d->arch.hvm.vmx.apic_access_mfn = mfn;
 
     return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
@@ -3047,7 +3057,12 @@ static void vmx_free_vlapic_mapping(struct domain *d)
 
     d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
     if ( !mfn_eq(mfn, _mfn(0)) )
-        free_shared_domheap_page(mfn_to_page(mfn));
+    {
+        struct page_info *pg = mfn_to_page(mfn);
+
+        put_page_alloc_ref(pg);
+        put_page_and_type(pg);
+    }
 }
 
 static void vmx_install_vlapic_mapping(struct vcpu *v)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f50c065af3..67351798c7 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -496,16 +496,6 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_unlock(&d->page_alloc_lock);
 }
 
-void free_shared_domheap_page(struct page_info *page)
-{
-    put_page_alloc_ref(page);
-    if ( !test_and_clear_bit(_PGC_xen_heap, &page->count_info) )
-        ASSERT_UNREACHABLE();
-    page->u.inuse.type_info = 0;
-    page_set_owner(page, NULL);
-    free_domheap_page(page);
-}
-
 void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index e75feea15e..036d7ac22f 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -320,8 +320,6 @@ struct page_info
 
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
-extern void free_shared_domheap_page(struct page_info *page);
-
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
 extern unsigned long max_page;
 extern unsigned long total_pages;
-- 
2.20.1



* Re: [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment Paul Durrant
@ 2020-01-29 11:12   ` Jan Beulich
  2020-01-29 11:15     ` Durrant, Paul
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2020-01-29 11:12 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel

On 29.01.2020 11:16, Paul Durrant wrote:
> Currently the function will pointlessly acquire and release the global
> 'heap_lock' in this case.
> 
> NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
>       argument, but a subsequent patch will make this possible.

With this, memory_exchange(), as previously indicated, now needlessly
prevents the call when !dec_count. I do think, as said there, that
together with the addition here the then-redundant checks in callers
should be dropped (and, as it looks, the named one is the only one).

Jan


* Re: [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment
  2020-01-29 11:12   ` Jan Beulich
@ 2020-01-29 11:15     ` Durrant, Paul
  0 siblings, 0 replies; 8+ messages in thread
From: Durrant, Paul @ 2020-01-29 11:15 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 29 January 2020 11:13
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>;
> Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better
> handle a zero adjustment
> 
> On 29.01.2020 11:16, Paul Durrant wrote:
> > Currently the function will pointlessly acquire and release the global
> > 'heap_lock' in this case.
> >
> > NOTE: No caller yet calls domain_adjust_tot_pages() with a zero 'pages'
> >       argument, but a subsequent patch will make this possible.
> 
> With this, memory_exchange(), as previously indicated, now needlessly
> prevents the call when !dec_count. I do think, as said there, that
> together with the addition here the then-redundant checks in callers
> should be dropped (and, as it looks, the named one is the only one).
> 

Ok, yes I missed that.

  Paul

> Jan

* Re: [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign
  2020-01-29 10:16 ` [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign Paul Durrant
@ 2020-01-29 11:20   ` Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2020-01-29 11:20 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, xen-devel,
	Volodymyr Babchuk, Roger Pau Monné

On 29.01.2020 11:16, Paul Durrant wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2287,11 +2287,14 @@ int assign_pages(
>  
>      for ( i = 0; i < (1 << order); i++ )
>      {
> +        unsigned long count_info = pg[i].count_info;
> +
>          ASSERT(page_get_owner(&pg[i]) == NULL);
> -        ASSERT(!pg[i].count_info);
> +        ASSERT(!(count_info & ~PGC_no_refcount));

I think this would benefit from being more strict: The flag may also
not be set if MEMF_no_refcount is clear in the flags passed in.

> @@ -2334,11 +2332,20 @@ struct page_info *alloc_domheap_pages(
>                                    memflags, d)) == NULL)) )
>           return NULL;
>  
> -    if ( d && !(memflags & MEMF_no_owner) &&
> -         assign_pages(d, pg, order, memflags) )
> +    if ( d && !(memflags & MEMF_no_owner) )
>      {
> -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> -        return NULL;
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;

With this you mean ...

> +            for ( i = 0; i < (1 << order); i++ )

... 1UL here. Or else "unsigned int" and 1U. The file isn't
very consistent with this, but at least locally to a single
functions things should match up.

Jan


end of thread [last message: ~2020-01-29 11:21 UTC]

Thread overview: 8+ messages
-- links below jump to the message on this page --
2020-01-29 10:16 [Xen-devel] [PATCH v5 0/4] purge free_shared_domheap_page() Paul Durrant
2020-01-29 10:16 ` [Xen-devel] [PATCH v5 1/4] x86 / vmx: move teardown from domain_destroy() Paul Durrant
2020-01-29 10:16 ` [Xen-devel] [PATCH v5 2/4] mm: modify domain_adjust_tot_pages() to better handle a zero adjustment Paul Durrant
2020-01-29 11:12   ` Jan Beulich
2020-01-29 11:15     ` Durrant, Paul
2020-01-29 10:16 ` [Xen-devel] [PATCH v5 3/4] mm: make MEMF_no_refcount pages safe to assign Paul Durrant
2020-01-29 11:20   ` Jan Beulich
2020-01-29 10:16 ` [Xen-devel] [PATCH v5 4/4] x86 / vmx: use a MEMF_no_refcount domheap page for APIC_DEFAULT_PHYS_BASE Paul Durrant
