* [PATCH v2 0/3] xen/mm: Optimize init_heap_pages()
@ 2022-07-15 17:03 Julien Grall
  2022-07-15 17:03 ` [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED() Julien Grall
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-15 17:03 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu

From: Julien Grall <jgrall@amazon.com>

Hi all,

As part of the Live-Update work, we noticed that a big part of the Xen boot
time is spent adding pages to the heap. For instance, when running Xen in a
nested environment on a c5.metal instance with 90GB of RAM, it takes ~1.4s.

This small series reworks init_heap_pages() to hand pages to
free_heap_pages() in chunks rather than one by one.

With this approach, the time spent initializing the heap is down
to 96 ms in the setup mentioned above.

One more optimization is potentially possible to further reduce the time
spent: the new approach accesses the page information multiple times, in
separate loops that can potentially be large.

Cheers,

Hongyan Xia (1):
  xen/heap: pass order to free_heap_pages() in heap init

Julien Grall (2):
  xen: page_alloc: Don't open-code IS_ALIGNED()
  xen/heap: Split init_heap_pages() in two

 xen/common/page_alloc.c | 106 ++++++++++++++++++++++++++++++----------
 1 file changed, 79 insertions(+), 27 deletions(-)

-- 
2.32.0




* [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED()
  2022-07-15 17:03 [PATCH v2 0/3] xen/mm: Optimize init_heap_pages() Julien Grall
@ 2022-07-15 17:03 ` Julien Grall
  2022-07-18  8:06   ` Wei Chen
  2022-07-18  8:11   ` Jan Beulich
  2022-07-15 17:03 ` [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two Julien Grall
  2022-07-15 17:03 ` [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init Julien Grall
  2 siblings, 2 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-15 17:03 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu

From: Julien Grall <jgrall@amazon.com>

init_heap_pages() is using an open-coded version of IS_ALIGNED(). Replace
it with IS_ALIGNED() to improve the readability of the code.

No functional change intended.
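
For reference, here is a minimal standalone sketch (not part of the patch)
showing that the open-coded check and IS_ALIGNED() agree for power-of-two
alignments. The IS_ALIGNED() and MAX_ORDER definitions in it are stand-ins
assumed purely for this illustration:

#include <assert.h>

/* Stand-in definitions, assumed for illustration only; the real ones live
 * in the Xen headers. */
#define MAX_ORDER 18
#define IS_ALIGNED(val, align) (((val) & ((align) - 1)) == 0)

int main(void)
{
    unsigned long s;

    for ( s = 0; s < (8UL << MAX_ORDER); s += 0x1234 )
        /* Open-coded form vs IS_ALIGNED() for a power-of-two alignment. */
        assert( !(s & ((1UL << MAX_ORDER) - 1)) ==
                IS_ALIGNED(s, 1UL << MAX_ORDER) );

    return 0;
}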

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v2:
    - Patch added
---
 xen/common/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index fe0e15429af3..078c2990041d 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1823,7 +1823,7 @@ static void init_heap_pages(
             unsigned long s = mfn_x(page_to_mfn(pg + i));
             unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
             bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
-                            !(s & ((1UL << MAX_ORDER) - 1)) &&
+                            IS_ALIGNED(s, 1UL << MAX_ORDER) &&
                             (find_first_set_bit(e) <= find_first_set_bit(s));
             unsigned long n;
 
-- 
2.32.0




* [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-15 17:03 [PATCH v2 0/3] xen/mm: Optimize init_heap_pages() Julien Grall
  2022-07-15 17:03 ` [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED() Julien Grall
@ 2022-07-15 17:03 ` Julien Grall
  2022-07-18  8:18   ` Wei Chen
  2022-07-18  9:31   ` Jan Beulich
  2022-07-15 17:03 ` [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init Julien Grall
  2 siblings, 2 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-15 17:03 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu

From: Julien Grall <jgrall@amazon.com>

At the moment, init_heap_pages() will call free_heap_pages() page
by page. To reduce the time to initialize the heap, we will want
to provide multiple pages at the same time.

init_heap_pages() is now split into two parts:
    - init_heap_pages(): will break down the range into multiple sets
      of contiguous pages. For now, the criterion is that the pages should
      belong to the same NUMA node.
    - _init_heap_pages(): will initialize a set of pages belonging to
      the same NUMA node. In a follow-up patch, new requirements will
      be added (e.g. pages should belong to the same zone). For now the
      pages are still passed one by one to free_heap_pages().

Note that the comment before init_heap_pages() is heavily outdated and
does not reflect the current code. So update it.

This patch is a merge/rework of patches from David Woodhouse and
Hongyan Xia.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Interestingly, I was expecting this patch to perform worse. However,
testing shows a small performance improvement.

That said, I mainly split the patch because it keeps the refactoring and
the optimization separated.

Changes in v2:
    - Rename init_contig_pages() to _init_heap_pages()
    - Fold is_contig_page()
---
 xen/common/page_alloc.c | 77 ++++++++++++++++++++++++++---------------
 1 file changed, 50 insertions(+), 27 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 078c2990041d..eedb2fed77c3 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 }
 
 /*
- * Hand the specified arbitrary page range to the specified heap zone
- * checking the node_id of the previous page.  If they differ and the
- * latter is not on a MAX_ORDER boundary, then we reserve the page by
- * not freeing it to the buddy allocator.
+ * This function should only be called with valid pages from the same NUMA
+ * node.
  */
+static void _init_heap_pages(const struct page_info *pg,
+                             unsigned long nr_pages,
+                             bool need_scrub)
+{
+    unsigned long s, e;
+    unsigned int nid = phys_to_nid(page_to_maddr(pg));
+
+    s = mfn_x(page_to_mfn(pg));
+    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
+    if ( unlikely(!avail[nid]) )
+    {
+        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
+                        (find_first_set_bit(e) <= find_first_set_bit(s));
+        unsigned long n;
+
+        n = init_node_heap(nid, s, nr_pages, &use_tail);
+        BUG_ON(n > nr_pages);
+        if ( use_tail )
+            e -= n;
+        else
+            s += n;
+    }
+
+    while ( s < e )
+    {
+        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
+        s += 1UL;
+    }
+}
+
 static void init_heap_pages(
     struct page_info *pg, unsigned long nr_pages)
 {
     unsigned long i;
-    bool idle_scrub = false;
+    bool need_scrub = scrub_debug;
 
     /*
      * Keep MFN 0 away from the buddy allocator to avoid crossing zone
@@ -1812,35 +1840,30 @@ static void init_heap_pages(
     spin_unlock(&heap_lock);
 
     if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
-        idle_scrub = true;
+        need_scrub = true;
 
-    for ( i = 0; i < nr_pages; i++ )
+    for ( i = 0; i < nr_pages; )
     {
-        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
+        unsigned int nid = phys_to_nid(page_to_maddr(pg));
+        unsigned long left = nr_pages - i;
+        unsigned long contig_pages;
 
-        if ( unlikely(!avail[nid]) )
+        /*
+         * _init_heap_pages() is only able to accept range following
+         * specific property (see comment on top of _init_heap_pages()).
+         *
+         * So break down the range in smaller set.
+         */
+        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
         {
-            unsigned long s = mfn_x(page_to_mfn(pg + i));
-            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
-            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
-                            IS_ALIGNED(s, 1UL << MAX_ORDER) &&
-                            (find_first_set_bit(e) <= find_first_set_bit(s));
-            unsigned long n;
-
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
-                               &use_tail);
-            BUG_ON(i + n > nr_pages);
-            if ( n && !use_tail )
-            {
-                i += n - 1;
-                continue;
-            }
-            if ( i + n == nr_pages )
+            if ( nid != (phys_to_nid(page_to_maddr(pg))) )
                 break;
-            nr_pages -= n;
         }
 
-        free_heap_pages(pg + i, 0, scrub_debug || idle_scrub);
+        _init_heap_pages(pg, contig_pages, need_scrub);
+
+        pg += contig_pages;
+        i += contig_pages;
     }
 }
 
-- 
2.32.0




* [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-15 17:03 [PATCH v2 0/3] xen/mm: Optimize init_heap_pages() Julien Grall
  2022-07-15 17:03 ` [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED() Julien Grall
  2022-07-15 17:03 ` [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two Julien Grall
@ 2022-07-15 17:03 ` Julien Grall
  2022-07-18  8:38   ` Wei Chen
  2022-07-18  9:43   ` Jan Beulich
  2 siblings, 2 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-15 17:03 UTC (permalink / raw)
  To: xen-devel
  Cc: julien, Hongyan Xia, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu, Julien Grall

From: Hongyan Xia <hongyxia@amazon.com>

The idea is to split the range into multiple aligned power-of-2 regions,
each of which needs only one call to free_heap_pages(). We check the least
significant set bit of the start address and use its bit index as the
order of this increment. This makes sure that each increment is both
power-of-2 and properly aligned, which can be safely passed to
free_heap_pages(). Of course, the order also needs to be sanity checked
against the upper bound and MAX_ORDER.
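
As an illustration, here is a small standalone sketch (not the patch
itself). It uses GCC builtins in place of Xen's flsl()/ffsl(), assumes a
MAX_ORDER value, and prints the (start, order) pairs that the loop hands
to free_heap_pages():

#include <stdio.h>

#define MAX_ORDER 18 /* assumed value, for illustration only */

static void show_chunks(unsigned long s, unsigned long e)
{
    while ( s < e )
    {
        /* Index of the MSB of the remaining size, i.e. flsl(e - s) - 1. */
        unsigned int order = 8 * sizeof(unsigned long) - 1
                             - __builtin_clzl(e - s);

        if ( order > MAX_ORDER )
            order = MAX_ORDER;

        if ( s )
        {
            /* Index of the LSB of the start, i.e. ffsl(s) - 1, keeps each
             * chunk naturally aligned. */
            unsigned int lsb = __builtin_ctzl(s);

            if ( lsb < order )
                order = lsb;
        }

        printf("free_heap_pages(mfn %#lx, order %u)\n", s, order);
        s += 1UL << order;
    }
}

int main(void)
{
    show_chunks(0x1003, 0x8000); /* arbitrary example MFN range */
    return 0;
}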

Tested in a nested environment on c5.metal with various amounts
of RAM and CONFIG_DEBUG=n. Time for end_boot_allocator() to complete:
            Before         After
    - 90GB: 1445 ms         96 ms
    -  8GB:  126 ms          8 ms
    -  4GB:   62 ms          4 ms

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v2:
    - Update comment
    - Update the numbers. They are slightly better as is_contig_page()
      has been folded in init_heap_pages().
---
 xen/common/page_alloc.c | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index eedb2fed77c3..2b99801d2ea3 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1779,7 +1779,7 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 
 /*
  * This function should only be called with valid pages from the same NUMA
- * node.
+ * node and zone.
  */
 static void _init_heap_pages(const struct page_info *pg,
                              unsigned long nr_pages,
@@ -1806,8 +1806,22 @@ static void _init_heap_pages(const struct page_info *pg,
 
     while ( s < e )
     {
-        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
-        s += 1UL;
+        /*
+         * For s == 0, we simply use the largest increment by checking the
+         * MSB of the region size. For s != 0, we also need to ensure that the
+         * chunk is properly sized to end at power-of-two alignment. We do this
+         * by checking the LSB of the start address and use its index as
+         * the increment. Both cases need to be guarded by MAX_ORDER.
+         *
+         * Note that the value of ffsl() and flsl() starts from 1 so we need
+         * to decrement it by 1.
+         */
+        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
+
+        if ( s )
+            inc_order = min(inc_order, ffsl(s) - 1);
+        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
+        s += (1UL << inc_order);
     }
 }
 
@@ -1844,6 +1858,9 @@ static void init_heap_pages(
 
     for ( i = 0; i < nr_pages; )
     {
+#ifdef CONFIG_SEPARATE_XENHEAP
+        unsigned int zone = page_to_zone(pg);
+#endif
         unsigned int nid = phys_to_nid(page_to_maddr(pg));
         unsigned long left = nr_pages - i;
         unsigned long contig_pages;
@@ -1856,6 +1873,18 @@ static void init_heap_pages(
          */
         for ( contig_pages = 1; contig_pages < left; contig_pages++ )
         {
+            /*
+             * No need to check for the zone when !CONFIG_SEPARATE_XENHEAP
+             * because free_heap_pages() can only take power-of-two ranges
+             * which never cross zone boundaries. But for separate xenheap
+             * which is manually defined, it is possible for power-of-two
+             * range to cross zones.
+             */
+#ifdef CONFIG_SEPARATE_XENHEAP
+            if ( zone != page_to_zone(pg) )
+                break;
+#endif
+
             if ( nid != (phys_to_nid(page_to_maddr(pg))) )
                 break;
         }
-- 
2.32.0




* Re: [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED()
  2022-07-15 17:03 ` [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED() Julien Grall
@ 2022-07-18  8:06   ` Wei Chen
  2022-07-18  8:11   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Wei Chen @ 2022-07-18  8:06 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu

Hi Julien,

On 2022/7/16 1:03, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> init_heap_pages() is using an open-code version of IS_ALIGNED(). Replace
> it to improve the readability of the code.
> 
> No functional change intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> Changes in v2:
>      - Patch added
> ---
>   xen/common/page_alloc.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index fe0e15429af3..078c2990041d 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1823,7 +1823,7 @@ static void init_heap_pages(
>               unsigned long s = mfn_x(page_to_mfn(pg + i));
>               unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>               bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
> -                            !(s & ((1UL << MAX_ORDER) - 1)) &&
> +                            IS_ALIGNED(s, 1UL << MAX_ORDER) &&
>                               (find_first_set_bit(e) <= find_first_set_bit(s));
>               unsigned long n;
>   

LGTM.

Reviewed-by: Wei Chen <Wei.Chen@arm.com>



* Re: [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED()
  2022-07-15 17:03 ` [PATCH v2 1/3] xen: page_alloc: Don't open-code IS_ALIGNED() Julien Grall
  2022-07-18  8:06   ` Wei Chen
@ 2022-07-18  8:11   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-07-18  8:11 UTC (permalink / raw)
  To: Julien Grall
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, xen-devel

On 15.07.2022 19:03, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> init_heap_pages() is using an open-code version of IS_ALIGNED(). Replace
> it to improve the readability of the code.
> 
> No functional change intended.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-15 17:03 ` [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two Julien Grall
@ 2022-07-18  8:18   ` Wei Chen
  2022-07-18 10:05     ` Julien Grall
  2022-07-18  9:31   ` Jan Beulich
  1 sibling, 1 reply; 18+ messages in thread
From: Wei Chen @ 2022-07-18  8:18 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu

Hi Julien,

On 2022/7/16 1:03, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, init_heap_pages() will call free_heap_pages() page
> by page. To reduce the time to initialize the heap, we will want
> to provide multiple pages at the same time.
> 
> init_heap_pages() is now split in two parts:
>      - init_heap_pages(): will break down the range in multiple set
>        of contiguous pages. For now, the criteria is the pages should
>        belong to the same NUMA node.
>      - _init_heap_pages(): will initialize a set of pages belonging to
>        the same NUMA node. In a follow-up patch, new requirements will
>        be added (e.g. pages should belong to the same zone). For now the
>        pages are still passed one by one to free_heap_pages().
> 
> Note that the comment before init_heap_pages() is heavily outdated and
> does not reflect the current code. So update it.
> 
> This patch is a merge/rework of patches from David Woodhouse and
> Hongyan Xia.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> Interestingly, I was expecting this patch to perform worse. However,
> from testing there is a small increase in perf.
> 
> That said, I mainly plit the patch because it keeps refactoring and
> optimization separated.
> 
> Changes in v2:
>      - Rename init_contig_pages() to _init_heap_pages()
>      - Fold is_contig_page()
> ---
>   xen/common/page_alloc.c | 77 ++++++++++++++++++++++++++---------------
>   1 file changed, 50 insertions(+), 27 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 078c2990041d..eedb2fed77c3 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>   }
>   
>   /*
> - * Hand the specified arbitrary page range to the specified heap zone
> - * checking the node_id of the previous page.  If they differ and the
> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
> - * not freeing it to the buddy allocator.
> + * This function should only be called with valid pages from the same NUMA
> + * node.
>    */
> +static void _init_heap_pages(const struct page_info *pg,
> +                             unsigned long nr_pages,
> +                             bool need_scrub)
> +{
> +    unsigned long s, e;
> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +
> +    s = mfn_x(page_to_mfn(pg));
> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> +    if ( unlikely(!avail[nid]) )
> +    {
> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
> +        unsigned long n;
> +
> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
> +        BUG_ON(n > nr_pages);
> +        if ( use_tail )
> +            e -= n;
> +        else
> +            s += n;
> +    }
> +
> +    while ( s < e )
> +    {
> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> +        s += 1UL;
> +    }
> +}
> +
>   static void init_heap_pages(
>       struct page_info *pg, unsigned long nr_pages)
>   {
>       unsigned long i;
> -    bool idle_scrub = false;
> +    bool need_scrub = scrub_debug;
>  

You have changed idle_scrub to need_scrub, but haven't mentioned this
in the commit log, and I also haven't found a related discussion in v1. I
am not very clear about this change.

Cheers,
Wei Chen

>       /*
>        * Keep MFN 0 away from the buddy allocator to avoid crossing zone
> @@ -1812,35 +1840,30 @@ static void init_heap_pages(
>       spin_unlock(&heap_lock);
>   
>       if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
> -        idle_scrub = true;
> +        need_scrub = true;
>   
> -    for ( i = 0; i < nr_pages; i++ )
> +    for ( i = 0; i < nr_pages; )
>       {
> -        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
> +        unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +        unsigned long left = nr_pages - i;
> +        unsigned long contig_pages;
>   
> -        if ( unlikely(!avail[nid]) )
> +        /*
> +         * _init_heap_pages() is only able to accept range following
> +         * specific property (see comment on top of _init_heap_pages()).
> +         *
> +         * So break down the range in smaller set.
> +         */
> +        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>           {
> -            unsigned long s = mfn_x(page_to_mfn(pg + i));
> -            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> -            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
> -                            IS_ALIGNED(s, 1UL << MAX_ORDER) &&
> -                            (find_first_set_bit(e) <= find_first_set_bit(s));
> -            unsigned long n;
> -
> -            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
> -                               &use_tail);
> -            BUG_ON(i + n > nr_pages);
> -            if ( n && !use_tail )
> -            {
> -                i += n - 1;
> -                continue;
> -            }
> -            if ( i + n == nr_pages )
> +            if ( nid != (phys_to_nid(page_to_maddr(pg))) )
>                   break;
> -            nr_pages -= n;
>           }
>   
> -        free_heap_pages(pg + i, 0, scrub_debug || idle_scrub);
> +        _init_heap_pages(pg, contig_pages, need_scrub);
> +
> +        pg += contig_pages;
> +        i += contig_pages;
>       }
>   }
>   



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-15 17:03 ` [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init Julien Grall
@ 2022-07-18  8:38   ` Wei Chen
  2022-07-18  9:43   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Wei Chen @ 2022-07-18  8:38 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu, Julien Grall

Hi Julien,

On 2022/7/16 1:03, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> The idea is to split the range into multiple aligned power-of-2 regions
> which only needs to call free_heap_pages() once each. We check the least
> significant set bit of the start address and use its bit index as the
> order of this increment. This makes sure that each increment is both
> power-of-2 and properly aligned, which can be safely passed to
> free_heap_pages(). Of course, the order also needs to be sanity checked
> against the upper bound and MAX_ORDER.
> 
> Tested on a nested environment on c5.metal with various amount
> of RAM and CONFIG_DEBUG=n. Time for end_boot_allocator() to complete:
>              Before         After
>      - 90GB: 1445 ms         96 ms
>      -  8GB:  126 ms          8 ms
>      -  4GB:   62 ms          4 ms
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> Changes in v2:
>      - Update comment
>      - Update the numbers. They are slightly better as is_contig_page()
>        has been folded in init_heap_pages().
> ---
>   xen/common/page_alloc.c | 35 ++++++++++++++++++++++++++++++++---
>   1 file changed, 32 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index eedb2fed77c3..2b99801d2ea3 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1779,7 +1779,7 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>   
>   /*
>    * This function should only be called with valid pages from the same NUMA
> - * node.
> + * node and zone.
>    */
>   static void _init_heap_pages(const struct page_info *pg,
>                                unsigned long nr_pages,
> @@ -1806,8 +1806,22 @@ static void _init_heap_pages(const struct page_info *pg,
>   
>       while ( s < e )
>       {
> -        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> -        s += 1UL;
> +        /*
> +         * For s == 0, we simply use the largest increment by checking the
> +         * MSB of the region size. For s != 0, we also need to ensure that the
> +         * chunk is properly sized to end at power-of-two alignment. We do this
> +         * by checking the LSB of the start address and use its index as
> +         * the increment. Both cases need to be guarded by MAX_ORDER.
> +         *
> +         * Note that the value of ffsl() and flsl() starts from 1 so we need
> +         * to decrement it by 1.
> +         */
> +        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
> +
> +        if ( s )
> +            inc_order = min(inc_order, ffsl(s) - 1);
> +        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
> +        s += (1UL << inc_order);
>       }
>   }
>   
> @@ -1844,6 +1858,9 @@ static void init_heap_pages(
>   
>       for ( i = 0; i < nr_pages; )
>       {
> +#ifdef CONFIG_SEPARATE_XENHEAP
> +        unsigned int zone = page_to_zone(pg);
> +#endif
>           unsigned int nid = phys_to_nid(page_to_maddr(pg));
>           unsigned long left = nr_pages - i;
>           unsigned long contig_pages;
> @@ -1856,6 +1873,18 @@ static void init_heap_pages(
>            */
>           for ( contig_pages = 1; contig_pages < left; contig_pages++ )
>           {
> +            /*
> +             * No need to check for the zone when !CONFIG_SEPARATE_XENHEAP
> +             * because free_heap_pages() can only take power-of-two ranges
> +             * which never cross zone boundaries. But for separate xenheap
> +             * which is manually defined, it is possible for power-of-two
> +             * range to cross zones.
> +             */
> +#ifdef CONFIG_SEPARATE_XENHEAP
> +            if ( zone != page_to_zone(pg) )
> +                break;
> +#endif
> +
>               if ( nid != (phys_to_nid(page_to_maddr(pg))) )
>                   break;
>           }

Reviewed-by: Wei Chen <Wei.Chen@arm.com>




* Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-15 17:03 ` [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two Julien Grall
  2022-07-18  8:18   ` Wei Chen
@ 2022-07-18  9:31   ` Jan Beulich
  2022-07-18 10:08     ` Julien Grall
  1 sibling, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-07-18  9:31 UTC (permalink / raw)
  To: Julien Grall
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, xen-devel

On 15.07.2022 19:03, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, init_heap_pages() will call free_heap_pages() page
> by page. To reduce the time to initialize the heap, we will want
> to provide multiple pages at the same time.
> 
> init_heap_pages() is now split in two parts:
>     - init_heap_pages(): will break down the range in multiple set
>       of contiguous pages. For now, the criteria is the pages should
>       belong to the same NUMA node.
>     - _init_heap_pages(): will initialize a set of pages belonging to
>       the same NUMA node. In a follow-up patch, new requirements will
>       be added (e.g. pages should belong to the same zone). For now the
>       pages are still passed one by one to free_heap_pages().
> 
> Note that the comment before init_heap_pages() is heavily outdated and
> does not reflect the current code. So update it.
> 
> This patch is a merge/rework of patches from David Woodhouse and
> Hongyan Xia.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
Albeit maybe with ...

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>  }
>  
>  /*
> - * Hand the specified arbitrary page range to the specified heap zone
> - * checking the node_id of the previous page.  If they differ and the
> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
> - * not freeing it to the buddy allocator.
> + * This function should only be called with valid pages from the same NUMA
> + * node.
>   */
> +static void _init_heap_pages(const struct page_info *pg,
> +                             unsigned long nr_pages,
> +                             bool need_scrub)
> +{
> +    unsigned long s, e;
> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
> +
> +    s = mfn_x(page_to_mfn(pg));
> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
> +    if ( unlikely(!avail[nid]) )
> +    {
> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
> +        unsigned long n;
> +
> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
> +        BUG_ON(n > nr_pages);
> +        if ( use_tail )
> +            e -= n;
> +        else
> +            s += n;
> +    }
> +
> +    while ( s < e )
> +    {
> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> +        s += 1UL;

... the more conventional s++ or ++s used here?

Jan



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-15 17:03 ` [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init Julien Grall
  2022-07-18  8:38   ` Wei Chen
@ 2022-07-18  9:43   ` Jan Beulich
  2022-07-18 10:24     ` Julien Grall
  1 sibling, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-07-18  9:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

On 15.07.2022 19:03, Julien Grall wrote:
> @@ -1806,8 +1806,22 @@ static void _init_heap_pages(const struct page_info *pg,
>  
>      while ( s < e )
>      {
> -        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
> -        s += 1UL;
> +        /*
> +         * For s == 0, we simply use the largest increment by checking the
> +         * MSB of the region size. For s != 0, we also need to ensure that the
> +         * chunk is properly sized to end at power-of-two alignment. We do this
> +         * by checking the LSB of the start address and use its index as
> +         * the increment. Both cases need to be guarded by MAX_ORDER.

s/guarded/bounded/ ?

> +         * Note that the value of ffsl() and flsl() starts from 1 so we need
> +         * to decrement it by 1.
> +         */
> +        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);

unsigned int, since ...

> +        if ( s )
> +            inc_order = min(inc_order, ffsl(s) - 1);
> +        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);

... that's what the function parameter says, and the value also can go
negative? Preferably with these adjustments
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-18  8:18   ` Wei Chen
@ 2022-07-18 10:05     ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-18 10:05 UTC (permalink / raw)
  To: Wei Chen, xen-devel
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Jan Beulich,
	Stefano Stabellini, Wei Liu



On 18/07/2022 09:18, Wei Chen wrote:
>>   static void init_heap_pages(
>>       struct page_info *pg, unsigned long nr_pages)
>>   {
>>       unsigned long i;
>> -    bool idle_scrub = false;
>> +    bool need_scrub = scrub_debug;
>>
> 
> You have changed idle_scrub to need_scrub, but haven't mentioned this
> in commit log, and I also haven't found related discussion in v1. I
> am very clear about this change.

The meaning/use of the variable is now different. Before this patch, the
variable was only indicating whether idle scrub was enabled (this is
configurable by the admin). This was then OR-ed with 'scrub_debug' when
calling free_heap_pages().

With this patch, we now store the OR-ed result directly in the local variable.

This is not something I felt was necessary to mention in the commit message.

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-18  9:31   ` Jan Beulich
@ 2022-07-18 10:08     ` Julien Grall
  2022-07-18 10:57       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Julien Grall @ 2022-07-18 10:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, xen-devel

Hi Jan,

On 18/07/2022 10:31, Jan Beulich wrote:
> On 15.07.2022 19:03, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, init_heap_pages() will call free_heap_pages() page
>> by page. To reduce the time to initialize the heap, we will want
>> to provide multiple pages at the same time.
>>
>> init_heap_pages() is now split in two parts:
>>      - init_heap_pages(): will break down the range in multiple set
>>        of contiguous pages. For now, the criteria is the pages should
>>        belong to the same NUMA node.
>>      - _init_heap_pages(): will initialize a set of pages belonging to
>>        the same NUMA node. In a follow-up patch, new requirements will
>>        be added (e.g. pages should belong to the same zone). For now the
>>        pages are still passed one by one to free_heap_pages().
>>
>> Note that the comment before init_heap_pages() is heavily outdated and
>> does not reflect the current code. So update it.
>>
>> This patch is a merge/rework of patches from David Woodhouse and
>> Hongyan Xia.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>   }
>>   
>>   /*
>> - * Hand the specified arbitrary page range to the specified heap zone
>> - * checking the node_id of the previous page.  If they differ and the
>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>> - * not freeing it to the buddy allocator.
>> + * This function should only be called with valid pages from the same NUMA
>> + * node.
>>    */
>> +static void _init_heap_pages(const struct page_info *pg,
>> +                             unsigned long nr_pages,
>> +                             bool need_scrub)
>> +{
>> +    unsigned long s, e;
>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>> +
>> +    s = mfn_x(page_to_mfn(pg));
>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>> +    if ( unlikely(!avail[nid]) )
>> +    {
>> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
>> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
>> +        unsigned long n;
>> +
>> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
>> +        BUG_ON(n > nr_pages);
>> +        if ( use_tail )
>> +            e -= n;
>> +        else
>> +            s += n;
>> +    }
>> +
>> +    while ( s < e )
>> +    {
>> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>> +        s += 1UL;
> 
> ... the more conventional s++ or ++s used here?

I would prefer to keep using "s += 1UL" here because:
   * it will be replaced with a proper order in the follow-up patch, so
this is temporary.
   * one could argue that if I use "s++" then I should also switch to a 
for loop which would make sense here but not in the next patch.

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-18  9:43   ` Jan Beulich
@ 2022-07-18 10:24     ` Julien Grall
  2022-07-18 11:02       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Julien Grall @ 2022-07-18 10:24 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

Hi Jan,

On 18/07/2022 10:43, Jan Beulich wrote:
> On 15.07.2022 19:03, Julien Grall wrote:
>> @@ -1806,8 +1806,22 @@ static void _init_heap_pages(const struct page_info *pg,
>>   
>>       while ( s < e )
>>       {
>> -        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>> -        s += 1UL;
>> +        /*
>> +         * For s == 0, we simply use the largest increment by checking the
>> +         * MSB of the region size. For s != 0, we also need to ensure that the
>> +         * chunk is properly sized to end at power-of-two alignment. We do this
>> +         * by checking the LSB of the start address and use its index as
>> +         * the increment. Both cases need to be guarded by MAX_ORDER.
> 
> s/guarded/bounded/ ?
> 
>> +         * Note that the value of ffsl() and flsl() starts from 1 so we need
>> +         * to decrement it by 1.
>> +         */
>> +        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
> 
> unsigned int, since ...

MAX_ORDER, flsl() and ffsl() all return signed values. So inc_order
should be 'int' to avoid any compilation issue.

> 
>> +        if ( s )
>> +            inc_order = min(inc_order, ffsl(s) - 1);
>> +        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
> 
> ... that's what the function parameter says, and the value also can go
> negative?

AFAICT, none of the values are negative. But per above, if we use min() 
then the local variable should be 'int'. The two possible alternatives are:
   1) Use min_t()
   2) Update MAX_ORDER, flsl(), ffsl() to return an unsigned value

The ideal would be #2, but it would require at least an extra patch and
some effort. If we use #1, then its use may become stale if we implement #2.

So I would prefer to keep min(). That said, I would be open to using
min_t() if you strongly prefer it.
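
To make the constraint concrete, here is a standalone sketch. The min()
macro, the MAX_ORDER value and the flsl_()/ffsl_() helpers below are
stand-ins assumed for this illustration only (the real Xen definitions may
differ):

/* Type-checking min() in the Xen/Linux style: the pointer comparison makes
 * the compiler warn when the two operands have different types. */
#define min(x, y) ({                          \
    typeof(x) x_ = (x);                       \
    typeof(y) y_ = (y);                       \
    (void)(&x_ == &y_);                       \
    x_ < y_ ? x_ : y_;                        \
})

#define MAX_ORDER 18 /* assumed value */

/* Stand-ins returning int, with values starting from 1 like Xen's. */
static int flsl_(unsigned long x)
{
    return x ? (int)(8 * sizeof(x)) - __builtin_clzl(x) : 0;
}

static int ffsl_(unsigned long x)
{
    return x ? __builtin_ctzl(x) + 1 : 0;
}

unsigned int pick_order(unsigned long s, unsigned long e)
{
    /* Both min() operands are int, so an int local avoids any mismatch. */
    int inc_order = min(MAX_ORDER, flsl_(e - s) - 1);

    if ( s )
        inc_order = min(inc_order, ffsl_(s) - 1);

    /*
     * With an unsigned int local, the second min() would mix unsigned int
     * and int, and the type check above would warn; making that operand
     * unsigned as well (e.g. ffsl_(s) - 1U) is the other way to keep it
     * quiet.
     */
    return inc_order;
}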

> Preferably with these adjustments
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks!

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two
  2022-07-18 10:08     ` Julien Grall
@ 2022-07-18 10:57       ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2022-07-18 10:57 UTC (permalink / raw)
  To: Julien Grall
  Cc: Julien Grall, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, xen-devel

On 18.07.2022 12:08, Julien Grall wrote:
> On 18/07/2022 10:31, Jan Beulich wrote:
>> On 15.07.2022 19:03, Julien Grall wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>>   }
>>>   
>>>   /*
>>> - * Hand the specified arbitrary page range to the specified heap zone
>>> - * checking the node_id of the previous page.  If they differ and the
>>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>>> - * not freeing it to the buddy allocator.
>>> + * This function should only be called with valid pages from the same NUMA
>>> + * node.
>>>    */
>>> +static void _init_heap_pages(const struct page_info *pg,
>>> +                             unsigned long nr_pages,
>>> +                             bool need_scrub)
>>> +{
>>> +    unsigned long s, e;
>>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>> +
>>> +    s = mfn_x(page_to_mfn(pg));
>>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>> +    if ( unlikely(!avail[nid]) )
>>> +    {
>>> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
>>> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
>>> +        unsigned long n;
>>> +
>>> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
>>> +        BUG_ON(n > nr_pages);
>>> +        if ( use_tail )
>>> +            e -= n;
>>> +        else
>>> +            s += n;
>>> +    }
>>> +
>>> +    while ( s < e )
>>> +    {
>>> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>>> +        s += 1UL;
>>
>> ... the more conventional s++ or ++s used here?
> 
> I would prefer to keep using "s += 1UL" here because:
>    * it will be replace with a proper order in the follow-up patch. So 
> this is temporary.

Fair enough.

Jan

>    * one could argue that if I use "s++" then I should also switch to a 
> for loop which would make sense here but not in the next patch.
> 
> Cheers,
> 




* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-18 10:24     ` Julien Grall
@ 2022-07-18 11:02       ` Jan Beulich
  2022-07-18 17:39         ` Julien Grall
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-07-18 11:02 UTC (permalink / raw)
  To: Julien Grall
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

On 18.07.2022 12:24, Julien Grall wrote:
> Hi Jan,
> 
> On 18/07/2022 10:43, Jan Beulich wrote:
>> On 15.07.2022 19:03, Julien Grall wrote:
>>> @@ -1806,8 +1806,22 @@ static void _init_heap_pages(const struct page_info *pg,
>>>   
>>>       while ( s < e )
>>>       {
>>> -        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>>> -        s += 1UL;
>>> +        /*
>>> +         * For s == 0, we simply use the largest increment by checking the
>>> +         * MSB of the region size. For s != 0, we also need to ensure that the
>>> +         * chunk is properly sized to end at power-of-two alignment. We do this
>>> +         * by checking the LSB of the start address and use its index as
>>> +         * the increment. Both cases need to be guarded by MAX_ORDER.
>>
>> s/guarded/bounded/ ?
>>
>>> +         * Note that the value of ffsl() and flsl() starts from 1 so we need
>>> +         * to decrement it by 1.
>>> +         */
>>> +        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
>>
>> unsigned int, since ...
> 
> MAX_ORDER, flsl(), ffsl() are both returning signed value. So inc_order 
> should be 'int' to avoid any compilation issue.
> 
>>
>>> +        if ( s )
>>> +            inc_order = min(inc_order, ffsl(s) - 1);
>>> +        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
>>
>> ... that's what the function parameter says, and the value also can go
>> negative?
> 
> AFAICT, none of the values are negative. But per above, if we use min() 
> then the local variable should be 'int'. The two possible alternatives are:
>    1) Use min_t()
>    2) Update MAX_ORDER, flsl(), ffsl() to return an unsigned value
> 
> The ideal would be #2 but it will require at least an extra patch and 
> effort. If we use #1, then they use may become stale if we implement #2.

I'm not sure #2 is a good option - we'd deviate from conventions used
elsewhere on what flsl() and ffsl() return. And constants would imo
best remain suffix-less unless a suffix is very relevant to have (for
MAX_ORDER, which isn't at risk of being subject to ~ or -, it may be
warranted).

3)
        unsigned int inc_order = min(MAX_ORDER, flsl(e - s) - 1);

        if ( s )
            inc_order = min(inc_order, ffsl(s) - 1U);

No compilation issues to expect here, afaict.

> So I would prefer to keep min(). That said, I would be open to use 
> min_t() if you strongly prefer it.

I consider max_t() / min_t() dangerous, so I'd like to limit their use
to cases where no good alternatives exist.

Jan



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-18 11:02       ` Jan Beulich
@ 2022-07-18 17:39         ` Julien Grall
  2022-07-19  6:01           ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Julien Grall @ 2022-07-18 17:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

Hi Jan,

On 18/07/2022 12:02, Jan Beulich wrote:
> On 18.07.2022 12:24, Julien Grall wrote:
> 3)
>          unsigned int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
> 
>          if ( s )
>              inc_order = min(inc_order, ffsl(s) - 1U);

I like this idea!

> 
> No compilation issues to expect here, afaict.

Correct, GCC is happy with this approach. Assuming there are no other 
comments, are you happy if I make the change on commit?

Cheers,


-- 
Julien Grall



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-18 17:39         ` Julien Grall
@ 2022-07-19  6:01           ` Jan Beulich
  2022-07-20 18:27             ` Julien Grall
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2022-07-19  6:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

On 18.07.2022 19:39, Julien Grall wrote:
> On 18/07/2022 12:02, Jan Beulich wrote:
>> On 18.07.2022 12:24, Julien Grall wrote:
>> 3)
>>          unsigned int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
>>
>>          if ( s )
>>              inc_order = min(inc_order, ffsl(s) - 1U);
> 
> I like this idea!
> 
>>
>> No compilation issues to expect here, afaict.
> 
> Correct, GCC is happy with this approach. Assuming there are no other 
> comments, are you happy if I make the change on commit?

Sure - iirc I gave my R-b already.

Jan



* Re: [PATCH v2 3/3] xen/heap: pass order to free_heap_pages() in heap init
  2022-07-19  6:01           ` Jan Beulich
@ 2022-07-20 18:27             ` Julien Grall
  0 siblings, 0 replies; 18+ messages in thread
From: Julien Grall @ 2022-07-20 18:27 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Hongyan Xia, Andrew Cooper, George Dunlap, Stefano Stabellini,
	Wei Liu, Julien Grall, xen-devel

Hi Jan,

On 19/07/2022 07:01, Jan Beulich wrote:
> On 18.07.2022 19:39, Julien Grall wrote:
>> On 18/07/2022 12:02, Jan Beulich wrote:
>>> On 18.07.2022 12:24, Julien Grall wrote:
>>> 3)
>>>           unsigned int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
>>>
>>>           if ( s )
>>>               inc_order = min(inc_order, ffsl(s) - 1U);
>>
>> I like this idea!
>>
>>>
>>> No compilation issues to expect here, afaict.
>>
>> Correct, GCC is happy with this approach. Assuming there are no other
>> comments, are you happy if I make the change on commit?
> 
> Sure - iirc I gave my R-b already.

You did. Just wanted to make sure your reviewed-by was valid with your 
proposal.

I have now committed the series.

Cheers,

-- 
Julien Grall

