* [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
@ 2023-06-13  8:55 Baolin Wang
  2023-06-13  9:56 ` David Hildenbrand
  2023-06-14  9:55 ` Mel Gorman
  0 siblings, 2 replies; 13+ messages in thread
From: Baolin Wang @ 2023-06-13  8:55 UTC (permalink / raw)
  To: akpm
  Cc: mgorman, vbabka, david, ying.huang, baolin.wang, linux-mm, linux-kernel

On some machines, the normal zone can have a large memory hole like
the memory layout below, where the range from 0x100000000 to
0x1800000000 is a hole. When isolating migratable pages, the scanner
can hit this hole and takes extra time to skip over it. From my
measurements, the isolation scanner takes 80us ~ 100us to skip the
large hole [0x100000000 - 0x1800000000].

So add a new helper that quickly searches for the next online memory
section, which helps the scanner skip the large hole and find the next
suitable pageblock efficiently. With this patch, scanning the large
hole only takes < 1us.

[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
[    0.000000]   DMA32    empty
[    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
[    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
[    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
[    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
[    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
[    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
[    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
[    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
[    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
[    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
 - Fix building errors if CONFIG_SPARSEMEM is not selected.
 - Use NR_MEM_SECTIONS instead of '-1' per Huang Ying.
---
 include/linux/mmzone.h | 10 ++++++++++
 mm/compaction.c        | 30 +++++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5a7ada0413da..5ff1fa2efe28 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2000,6 +2000,16 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
 	return -1;
 }
 
+static inline unsigned long next_online_section_nr(unsigned long section_nr)
+{
+	while (++section_nr <= __highest_present_section_nr) {
+		if (online_section_nr(section_nr))
+			return section_nr;
+	}
+
+	return NR_MEM_SECTIONS;
+}
+
 /*
  * These are _only_ used during initialisation, therefore they
  * can use __initdata ...  They could have names to indicate
diff --git a/mm/compaction.c b/mm/compaction.c
index 3398ef3a55fe..c31ff6123891 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -229,6 +229,28 @@ static void reset_cached_positions(struct zone *zone)
 				pageblock_start_pfn(zone_end_pfn(zone) - 1);
 }
 
+#ifdef CONFIG_SPARSEMEM
+static unsigned long skip_hole_pageblock(unsigned long start_pfn)
+{
+	unsigned long next_online_nr;
+	unsigned long start_nr = pfn_to_section_nr(start_pfn);
+
+	if (online_section_nr(start_nr))
+		return 0;
+
+	next_online_nr = next_online_section_nr(start_nr);
+	if (next_online_nr < NR_MEM_SECTIONS)
+		return section_nr_to_pfn(next_online_nr);
+
+	return 0;
+}
+#else
+static unsigned long skip_hole_pageblock(unsigned long start_pfn)
+{
+	return 0;
+}
+#endif
+
 /*
  * Compound pages of >= pageblock_order should consistently be skipped until
  * released. It is always pointless to compact pages of such order (if they are
@@ -1991,8 +2013,14 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 
 		page = pageblock_pfn_to_page(block_start_pfn,
 						block_end_pfn, cc->zone);
-		if (!page)
+		if (!page) {
+			unsigned long next_pfn;
+
+			next_pfn = skip_hole_pageblock(block_start_pfn);
+			if (next_pfn != 0)
+				block_end_pfn = next_pfn;
 			continue;
+		}
 
 		/*
 		 * If isolation recently failed, do not retry. Only check the
-- 
2.27.0



* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-13  8:55 [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages Baolin Wang
@ 2023-06-13  9:56 ` David Hildenbrand
  2023-06-13 11:13   ` Baolin Wang
  2023-06-14  9:55 ` Mel Gorman
  1 sibling, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2023-06-13  9:56 UTC (permalink / raw)
  To: Baolin Wang, akpm; +Cc: mgorman, vbabka, ying.huang, linux-mm, linux-kernel

On 13.06.23 10:55, Baolin Wang wrote:
> On some machines, the normal zone can have a large memory hole like
> below memory layout, and we can see the range from 0x100000000 to
> 0x1800000000 is a hole. So when isolating some migratable pages, the
> scanner can meet the hole and it will take more time to skip the large
> hole. From my measurement, I can see the isolation scanner will take
> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
> 
> So adding a new helper to fast search next online memory section
> to skip the large hole can help to find next suitable pageblock
> efficiently. With this patch, I can see the large hole scanning only
> takes < 1us.
> 
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Changes from v1:
>   - Fix building errors if CONFIG_SPARSEMEM is not selected.
>   - Use NR_MEM_SECTIONS instead of '-1' per Huang Ying.
> ---
>   include/linux/mmzone.h | 10 ++++++++++
>   mm/compaction.c        | 30 +++++++++++++++++++++++++++++-
>   2 files changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 5a7ada0413da..5ff1fa2efe28 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -2000,6 +2000,16 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
>   	return -1;
>   }
>   
> +static inline unsigned long next_online_section_nr(unsigned long section_nr)
> +{
> +	while (++section_nr <= __highest_present_section_nr) {
> +		if (online_section_nr(section_nr))
> +			return section_nr;
> +	}
> +
> +	return NR_MEM_SECTIONS;
> +}
> +
>   /*
>    * These are _only_ used during initialisation, therefore they
>    * can use __initdata ...  They could have names to indicate
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 3398ef3a55fe..c31ff6123891 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -229,6 +229,28 @@ static void reset_cached_positions(struct zone *zone)
>   				pageblock_start_pfn(zone_end_pfn(zone) - 1);
>   }
>   
> +#ifdef CONFIG_SPARSEMEM
> +static unsigned long skip_hole_pageblock(unsigned long start_pfn)
> +{
> +	unsigned long next_online_nr;
> +	unsigned long start_nr = pfn_to_section_nr(start_pfn);
> +
> +	if (online_section_nr(start_nr))
> +		return 0;
> +
> +	next_online_nr = next_online_section_nr(start_nr);
> +	if (next_online_nr < NR_MEM_SECTIONS)
> +		return section_nr_to_pfn(next_online_nr);
> +

I would simply inline next_online_section_nr and simplify (and add a 
comment):

/*
  * If the PFN falls into an offline section, return the start PFN of the
  * next online section. If the PFN falls into an online section or if
  * there is no next online section, return 0.
  */
static unsigned long skip_hole_pageblock(unsigned long start_pfn)
{
	unsigned long nr = pfn_to_section_nr(start_pfn);

	if (online_section_nr(nr))
		return 0;

	while (++nr <= __highest_present_section_nr) {
		if (online_section_nr(nr))
			return section_nr_to_pfn(nr);
	}
	return 0;
}

Easier, no?

And maybe just call that function "skip_offline_sections()" then? 
Because we're not operating on pageblocks.

> +	return 0;
> +}
> +#else
> +static unsigned long skip_hole_pageblock(unsigned long start_pfn)
> +{
> +	return 0;
> +}
> +#endif
> +
>   /*
>    * Compound pages of >= pageblock_order should consistently be skipped until
>    * released. It is always pointless to compact pages of such order (if they are
> @@ -1991,8 +2013,14 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>   
>   		page = pageblock_pfn_to_page(block_start_pfn,
>   						block_end_pfn, cc->zone);
> -		if (!page)
> +		if (!page) {
> +			unsigned long next_pfn;
> +
> +			next_pfn = skip_hole_pageblock(block_start_pfn);
> +			if (next_pfn != 0)

if (next_pfn)

> +				block_end_pfn = next_pfn;
>   			continue;
> +		}
>   
>   		/*
>   		 * If isolation recently failed, do not retry. Only check the

-- 
Cheers,

David / dhildenb



* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-13  9:56 ` David Hildenbrand
@ 2023-06-13 11:13   ` Baolin Wang
  2023-06-13 12:36     ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2023-06-13 11:13 UTC (permalink / raw)
  To: David Hildenbrand, akpm
  Cc: mgorman, vbabka, ying.huang, linux-mm, linux-kernel



On 6/13/2023 5:56 PM, David Hildenbrand wrote:
> On 13.06.23 10:55, Baolin Wang wrote:
>> On some machines, the normal zone can have a large memory hole like
>> below memory layout, and we can see the range from 0x100000000 to
>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>> scanner can meet the hole and it will take more time to skip the large
>> hole. From my measurement, I can see the isolation scanner will take
>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>
>> So adding a new helper to fast search next online memory section
>> to skip the large hole can help to find next suitable pageblock
>> efficiently. With this patch, I can see the large hole scanning only
>> takes < 1us.
>>
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>> [    0.000000]   DMA32    empty
>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> Changes from v1:
>>   - Fix building errors if CONFIG_SPARSEMEM is not selected.
>>   - Use NR_MEM_SECTIONS instead of '-1' per Huang Ying.
>> ---
>>   include/linux/mmzone.h | 10 ++++++++++
>>   mm/compaction.c        | 30 +++++++++++++++++++++++++++++-
>>   2 files changed, 39 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index 5a7ada0413da..5ff1fa2efe28 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -2000,6 +2000,16 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
>>       return -1;
>>   }
>> +static inline unsigned long next_online_section_nr(unsigned long section_nr)
>> +{
>> +    while (++section_nr <= __highest_present_section_nr) {
>> +        if (online_section_nr(section_nr))
>> +            return section_nr;
>> +    }
>> +
>> +    return NR_MEM_SECTIONS;
>> +}
>> +
>>   /*
>>    * These are _only_ used during initialisation, therefore they
>>    * can use __initdata ...  They could have names to indicate
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 3398ef3a55fe..c31ff6123891 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -229,6 +229,28 @@ static void reset_cached_positions(struct zone *zone)
>>                   pageblock_start_pfn(zone_end_pfn(zone) - 1);
>>   }
>> +#ifdef CONFIG_SPARSEMEM
>> +static unsigned long skip_hole_pageblock(unsigned long start_pfn)
>> +{
>> +    unsigned long next_online_nr;
>> +    unsigned long start_nr = pfn_to_section_nr(start_pfn);
>> +
>> +    if (online_section_nr(start_nr))
>> +        return 0;
>> +
>> +    next_online_nr = next_online_section_nr(start_nr);
>> +    if (next_online_nr < NR_MEM_SECTIONS)
>> +        return section_nr_to_pfn(next_online_nr);
>> +
> 
> I would simply inline next_online_section_nr and simplify (and add a 
> comment):
> 
> /*
>   * If the PFN falls into an offline section, return the start PFN of the
>   * next online section. If the PFN falls into an online section or if
>   * there is no next online section, return 0.
>   */
> static unsigned long skip_hole_pageblock(unsigned long start_pfn)
> {
>      unsigned long nr = pfn_to_section_nr(start_pfn);
> 
>      if (online_section_nr(nr))
>          return 0;
> 
>      while (++nr <= __highest_present_section_nr) {
>          if (online_section_nr(nr))
>              return section_nr_to_pfn(nr);
>      }
>      return 0;
> }
> 
> Easier, no?

Originally I wanted to add a common helper like next_present_section_nr() 
that could be used by other callers. But yes, your suggestion is easier, 
and I am fine with that.

> And maybe just call that function "skip_offline_sections()" then? 
> Because we're not operating on pageblocks.

OK. Thanks.


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-13 11:13   ` Baolin Wang
@ 2023-06-13 12:36     ` David Hildenbrand
  2023-06-14  1:08       ` Huang, Ying
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2023-06-13 12:36 UTC (permalink / raw)
  To: Baolin Wang, akpm; +Cc: mgorman, vbabka, ying.huang, linux-mm, linux-kernel

On 13.06.23 13:13, Baolin Wang wrote:
> 
> 
> On 6/13/2023 5:56 PM, David Hildenbrand wrote:
>> On 13.06.23 10:55, Baolin Wang wrote:
>>> On some machines, the normal zone can have a large memory hole like
>>> below memory layout, and we can see the range from 0x100000000 to
>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>> scanner can meet the hole and it will take more time to skip the large
>>> hole. From my measurement, I can see the isolation scanner will take
>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>
>>> So adding a new helper to fast search next online memory section
>>> to skip the large hole can help to find next suitable pageblock
>>> efficiently. With this patch, I can see the large hole scanning only
>>> takes < 1us.
>>>
>>> [    0.000000] Zone ranges:
>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>> [    0.000000]   DMA32    empty
>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>> [    0.000000] Movable zone start for each node
>>> [    0.000000] Early memory node ranges
>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> ---
>>> Changes from v1:
>>>    - Fix building errors if CONFIG_SPARSEMEM is not selected.
>>>    - Use NR_MEM_SECTIONS instead of '-1' per Huang Ying.
>>> ---
>>>    include/linux/mmzone.h | 10 ++++++++++
>>>    mm/compaction.c        | 30 +++++++++++++++++++++++++++++-
>>>    2 files changed, 39 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>> index 5a7ada0413da..5ff1fa2efe28 100644
>>> --- a/include/linux/mmzone.h
>>> +++ b/include/linux/mmzone.h
>>> @@ -2000,6 +2000,16 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
>>>        return -1;
>>>    }
>>> +static inline unsigned long next_online_section_nr(unsigned long section_nr)
>>> +{
>>> +    while (++section_nr <= __highest_present_section_nr) {
>>> +        if (online_section_nr(section_nr))
>>> +            return section_nr;
>>> +    }
>>> +
>>> +    return NR_MEM_SECTIONS;
>>> +}
>>> +
>>>    /*
>>>     * These are _only_ used during initialisation, therefore they
>>>     * can use __initdata ...  They could have names to indicate
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index 3398ef3a55fe..c31ff6123891 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -229,6 +229,28 @@ static void reset_cached_positions(struct zone *zone)
>>>                    pageblock_start_pfn(zone_end_pfn(zone) - 1);
>>>    }
>>> +#ifdef CONFIG_SPARSEMEM
>>> +static unsigned long skip_hole_pageblock(unsigned long start_pfn)
>>> +{
>>> +    unsigned long next_online_nr;
>>> +    unsigned long start_nr = pfn_to_section_nr(start_pfn);
>>> +
>>> +    if (online_section_nr(start_nr))
>>> +        return 0;
>>> +
>>> +    next_online_nr = next_online_section_nr(start_nr);
>>> +    if (next_online_nr < NR_MEM_SECTIONS)
>>> +        return section_nr_to_pfn(next_online_nr);
>>> +
>>
>> I would simply inline next_online_section_nr and simplify (and add a
>> comment):
>>
>> /*
>>    * If the PFN falls into an offline section, return the start PFN of the
>>    * next online section. If the PFN falls into an online section or if
>>    * there is no next online section, return 0.
>>    */
>> static unsigned long skip_hole_pageblock(unsigned long start_pfn)
>> {
>>       unsigned long nr = pfn_to_section_nr(start_pfn);
>>
>>       if (online_section_nr(nr))
>>           return 0;
>>
>>       while (++nr <= __highest_present_section_nr) {
>>           if (online_section_nr(nr))
>>               return section_nr_to_pfn(nr);
>>       }
>>       return 0;
>> }
>>
>> Easier, no?
> 
> Originally I want to add a common helper like next_present_section_nr(),
> which can be used by other users. But yes, your suggestion is easier,
> and I am fine with that.
> 
>> And maybe just call that function "skip_offline_sections()" then?
>> Because we're not operating on pageblocks.
> 
> OK. Thanks.
> 

Feel free to add to the simplified version

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-13 12:36     ` David Hildenbrand
@ 2023-06-14  1:08       ` Huang, Ying
  0 siblings, 0 replies; 13+ messages in thread
From: Huang, Ying @ 2023-06-14  1:08 UTC (permalink / raw)
  To: Baolin Wang
  Cc: David Hildenbrand, akpm, mgorman, vbabka, linux-mm, linux-kernel

David Hildenbrand <david@redhat.com> writes:

> On 13.06.23 13:13, Baolin Wang wrote:
>> On 6/13/2023 5:56 PM, David Hildenbrand wrote:
>>> On 13.06.23 10:55, Baolin Wang wrote:
>>>> On some machines, the normal zone can have a large memory hole like
>>>> below memory layout, and we can see the range from 0x100000000 to
>>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>>> scanner can meet the hole and it will take more time to skip the large
>>>> hole. From my measurement, I can see the isolation scanner will take
>>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>>
>>>> So adding a new helper to fast search next online memory section
>>>> to skip the large hole can help to find next suitable pageblock
>>>> efficiently. With this patch, I can see the large hole scanning only
>>>> takes < 1us.
>>>>
>>>> [    0.000000] Zone ranges:
>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>> [    0.000000]   DMA32    empty
>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>> [    0.000000] Movable zone start for each node
>>>> [    0.000000] Early memory node ranges
>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> ---
>>>> Changes from v1:
>>>>    - Fix building errors if CONFIG_SPARSEMEM is not selected.
>>>>    - Use NR_MEM_SECTIONS instead of '-1' per Huang Ying.
>>>> ---
>>>>    include/linux/mmzone.h | 10 ++++++++++
>>>>    mm/compaction.c        | 30 +++++++++++++++++++++++++++++-
>>>>    2 files changed, 39 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>>> index 5a7ada0413da..5ff1fa2efe28 100644
>>>> --- a/include/linux/mmzone.h
>>>> +++ b/include/linux/mmzone.h
>>>> @@ -2000,6 +2000,16 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
>>>>        return -1;
>>>>    }
>>>> +static inline unsigned long next_online_section_nr(unsigned long section_nr)
>>>> +{
>>>> +    while (++section_nr <= __highest_present_section_nr) {
>>>> +        if (online_section_nr(section_nr))
>>>> +            return section_nr;
>>>> +    }
>>>> +
>>>> +    return NR_MEM_SECTIONS;
>>>> +}
>>>> +
>>>>    /*
>>>>     * These are _only_ used during initialisation, therefore they
>>>>     * can use __initdata ...  They could have names to indicate
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index 3398ef3a55fe..c31ff6123891 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -229,6 +229,28 @@ static void reset_cached_positions(struct zone *zone)
>>>>                    pageblock_start_pfn(zone_end_pfn(zone) - 1);
>>>>    }
>>>> +#ifdef CONFIG_SPARSEMEM
>>>> +static unsigned long skip_hole_pageblock(unsigned long start_pfn)
>>>> +{
>>>> +    unsigned long next_online_nr;
>>>> +    unsigned long start_nr = pfn_to_section_nr(start_pfn);
>>>> +
>>>> +    if (online_section_nr(start_nr))
>>>> +        return 0;
>>>> +
>>>> +    next_online_nr = next_online_section_nr(start_nr);
>>>> +    if (next_online_nr < NR_MEM_SECTIONS)
>>>> +        return section_nr_to_pfn(next_online_nr);
>>>> +
>>>
>>> I would simply inline next_online_section_nr and simplify (and add a
>>> comment):
>>>
>>> /*
>>>    * If the PFN falls into an offline section, return the start PFN of the
>>>    * next online section. If the PFN falls into an online section or if
>>>    * there is no next online section, return 0.
>>>    */
>>> static unsigned long skip_hole_pageblock(unsigned long start_pfn)
>>> {
>>>       unsigned long nr = pfn_to_section_nr(start_pfn);
>>>
>>>       if (online_section_nr(nr))
>>>           return 0;
>>>
>>>       while (++nr <= __highest_present_section_nr) {
>>>           if (online_section_nr(nr))
>>>               return section_nr_to_pfn(nr);
>>>       }
>>>       return 0;
>>> }
>>>
>>> Easier, no?
>> Originally I want to add a common helper like
>> next_present_section_nr(),
>> which can be used by other users. But yes, your suggestion is easier,
>> and I am fine with that.
>> 
>>> And maybe just call that function "skip_offline_sections()" then?
>>> Because we're not operating on pageblocks.
>> OK. Thanks.
>> 
>
> Feel free to add to the simplified version
>
> Acked-by: David Hildenbrand <david@redhat.com>

With David's above comments addressed, feel free to add

Acked-by: "Huang, Ying" <ying.huang@intel.com>


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-13  8:55 [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages Baolin Wang
  2023-06-13  9:56 ` David Hildenbrand
@ 2023-06-14  9:55 ` Mel Gorman
  2023-06-14 12:22   ` Baolin Wang
  2023-06-15  3:22   ` Huang, Ying
  1 sibling, 2 replies; 13+ messages in thread
From: Mel Gorman @ 2023-06-14  9:55 UTC (permalink / raw)
  To: Baolin Wang; +Cc: akpm, vbabka, david, ying.huang, linux-mm, linux-kernel

On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
> On some machines, the normal zone can have a large memory hole like
> below memory layout, and we can see the range from 0x100000000 to
> 0x1800000000 is a hole. So when isolating some migratable pages, the
> scanner can meet the hole and it will take more time to skip the large
> hole. From my measurement, I can see the isolation scanner will take
> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
> 
> So adding a new helper to fast search next online memory section
> to skip the large hole can help to find next suitable pageblock
> efficiently. With this patch, I can see the large hole scanning only
> takes < 1us.
> 
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
> 
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

This may only be necessary for non-contiguous zones so a check for
zone_contiguous could be made but I suspect the saving, if any, would be
marginal.

However, it's subtle that block_end_pfn can end up in an arbitrary location
past the end of the zone or past cc->free_pfn. As the "continue" will update
cc->migrate_pfn, that might lead to errors in the future. It would be a
lot safer to pass in cc->free_pfn and do two things with the value. First,
there is no point scanning for a valid online section past cc->free_pfn so
terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
does not end up with an arbitrary value which is a more defensive approach
to any future programming errors.
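
Roughly, the idea is something like the sketch below; the two-argument
form and the clamping inside the loop are only an illustration of this
suggestion, not code from the posted patch:

static unsigned long skip_offline_sections(unsigned long start_pfn,
					   unsigned long end_pfn)
{
	unsigned long nr = pfn_to_section_nr(start_pfn);

	if (online_section_nr(nr))
		return 0;

	while (++nr <= __highest_present_section_nr) {
		/* No point searching past the caller's limit (cc->free_pfn). */
		if (section_nr_to_pfn(nr) >= end_pfn)
			break;
		if (online_section_nr(nr))
			return section_nr_to_pfn(nr);
	}

	return 0;
}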

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-14  9:55 ` Mel Gorman
@ 2023-06-14 12:22   ` Baolin Wang
  2023-06-15  3:22   ` Huang, Ying
  1 sibling, 0 replies; 13+ messages in thread
From: Baolin Wang @ 2023-06-14 12:22 UTC (permalink / raw)
  To: Mel Gorman; +Cc: akpm, vbabka, david, ying.huang, linux-mm, linux-kernel



On 6/14/2023 5:55 PM, Mel Gorman wrote:
> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>> On some machines, the normal zone can have a large memory hole like
>> below memory layout, and we can see the range from 0x100000000 to
>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>> scanner can meet the hole and it will take more time to skip the large
>> hole. From my measurement, I can see the isolation scanner will take
>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>
>> So adding a new helper to fast search next online memory section
>> to skip the large hole can help to find next suitable pageblock
>> efficiently. With this patch, I can see the large hole scanning only
>> takes < 1us.
>>
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>> [    0.000000]   DMA32    empty
>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> 
> This may only be necessary for non-contiguous zones so a check for
> zone_contiguous could be made but I suspect the saving, if any, would be
> marginal.

Right. But pageblock_pfn_to_page() has already considered the contiguous 
case, and will not return a NULL page for a contiguous zone.
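
Roughly, that check is the sketch below: the zone->contiguous test
short-circuits the per-pageblock validation, so the NULL case can only
happen for a non-contiguous zone:

static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
					  unsigned long end_pfn,
					  struct zone *zone)
{
	/* A contiguous zone has no holes, so any in-range PFN is valid. */
	if (zone->contiguous)
		return pfn_to_page(start_pfn);

	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}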

> However, it's subtle that block_end_pfn can end up in an arbitrary location
> past the end of the zone or past cc->free_pfn. As the "continue" will update
> cc->migrate_pfn, that might lead to errors in the future. It would be a

Ah, yes, thanks for pointing this out; I had missed it before.

> lot safer to pass in cc->free_pfn and do two things with the value. First,
> there is no point scanning for a valid online section past cc->free_pfn so
> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn

The skipping function introduced in this patch only scans until it finds 
the first online section, so it cannot terminate the scan early by 
checking each section against cc->free_pfn; it can only compare the first 
online section it finds with cc->free_pfn.

> does not end up with an arbitrary value which is a more defensive approach
> to any future programming errors.

Right. So I think I should make sure cc->migrate_pfn is not larger 
than cc->free_pfn with the change below:

@@ -1965,7 +1965,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)

                         next_pfn = skip_offline_sections(block_start_pfn);
                         if (next_pfn)
-                               block_end_pfn = next_pfn;
+                               block_end_pfn = min(next_pfn, cc->free_pfn);
                         continue;
                 }
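
Putting this clamping and David's simplified helper together, the
follow-up version might look roughly like the sketch below (the
skip_offline_sections() name follows David's suggestion; the actual v3
may differ):

#ifdef CONFIG_SPARSEMEM
/*
 * If the PFN falls into an offline section, return the start PFN of the
 * next online section. If the PFN falls into an online section or if
 * there is no next online section, return 0.
 */
static unsigned long skip_offline_sections(unsigned long start_pfn)
{
	unsigned long nr = pfn_to_section_nr(start_pfn);

	if (online_section_nr(nr))
		return 0;

	while (++nr <= __highest_present_section_nr) {
		if (online_section_nr(nr))
			return section_nr_to_pfn(nr);
	}

	return 0;
}
#else
static unsigned long skip_offline_sections(unsigned long start_pfn)
{
	return 0;
}
#endif

and, in isolate_migratepages(), clamping the returned PFN so that
cc->migrate_pfn can never move past cc->free_pfn:

		if (!page) {
			unsigned long next_pfn;

			next_pfn = skip_offline_sections(block_start_pfn);
			if (next_pfn)
				block_end_pfn = min(next_pfn, cc->free_pfn);
			continue;
		}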


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-14  9:55 ` Mel Gorman
  2023-06-14 12:22   ` Baolin Wang
@ 2023-06-15  3:22   ` Huang, Ying
  2023-06-15  3:59     ` Baolin Wang
  1 sibling, 1 reply; 13+ messages in thread
From: Huang, Ying @ 2023-06-15  3:22 UTC (permalink / raw)
  To: Mel Gorman, david; +Cc: Baolin Wang, akpm, vbabka, linux-mm, linux-kernel

Hi, Mel,

Mel Gorman <mgorman@techsingularity.net> writes:

> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>> On some machines, the normal zone can have a large memory hole like
>> below memory layout, and we can see the range from 0x100000000 to
>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>> scanner can meet the hole and it will take more time to skip the large
>> hole. From my measurement, I can see the isolation scanner will take
>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>> 
>> So adding a new helper to fast search next online memory section
>> to skip the large hole can help to find next suitable pageblock
>> efficiently. With this patch, I can see the large hole scanning only
>> takes < 1us.
>> 
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>> [    0.000000]   DMA32    empty
>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>> 
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> This may only be necessary for non-contiguous zones so a check for
> zone_contiguous could be made but I suspect the saving, if any, would be
> marginal.
>
> However, it's subtle that block_end_pfn can end up in an arbitrary location
> past the end of the zone or past cc->free_pfn. As the "continue" will update
> cc->migrate_pfn, that might lead to errors in the future. It would be a
> lot safer to pass in cc->free_pfn and do two things with the value. First,
> there is no point scanning for a valid online section past cc->free_pfn so
> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
> does not end up with an arbitrary value which is a more defensive approach
> to any future programming errors.

I have thought about this before.  Originally, I had thought that we
were safe because cc->free_pfn should be in an online section and
block_end_pfn should reach cc->free_pfn before the end of the zone.  But
after checking more code and thinking about it again, I found that the
underlying sections may go offline under us during compaction.  As a
result, cc->free_pfn may be in an offline section or past the end of the
zone.  So, you are right, we need to consider the range of block_end_pfn.

But, if we think in this way (memory can go online/offline at any time),
it appears that we need to check whether the underlying section was
offlined.  For example, is it safe to use "pfn_to_page()" in
"isolate_migratepages_block()"?  Is it possible for the underlying
section to be offlined under us?

Hi, David, can you teach me about this too?

Best Regards,
Huang, Ying



* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-15  3:22   ` Huang, Ying
@ 2023-06-15  3:59     ` Baolin Wang
  2023-06-15  7:22       ` Huang, Ying
  0 siblings, 1 reply; 13+ messages in thread
From: Baolin Wang @ 2023-06-15  3:59 UTC (permalink / raw)
  To: Huang, Ying, Mel Gorman, david; +Cc: akpm, vbabka, linux-mm, linux-kernel



On 6/15/2023 11:22 AM, Huang, Ying wrote:
> Hi, Mel,
> 
> Mel Gorman <mgorman@techsingularity.net> writes:
> 
>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>> On some machines, the normal zone can have a large memory hole like
>>> below memory layout, and we can see the range from 0x100000000 to
>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>> scanner can meet the hole and it will take more time to skip the large
>>> hole. From my measurement, I can see the isolation scanner will take
>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>
>>> So adding a new helper to fast search next online memory section
>>> to skip the large hole can help to find next suitable pageblock
>>> efficiently. With this patch, I can see the large hole scanning only
>>> takes < 1us.
>>>
>>> [    0.000000] Zone ranges:
>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>> [    0.000000]   DMA32    empty
>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>> [    0.000000] Movable zone start for each node
>>> [    0.000000] Early memory node ranges
>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>
>> This may only be necessary for non-contiguous zones so a check for
>> zone_contiguous could be made but I suspect the saving, if any, would be
>> marginal.
>>
>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>> there is no point scanning for a valid online section past cc->free_pfn so
>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>> does not end up with an arbitrary value which is a more defensive approach
>> to any future programming errors.
> 
> I have thought about this before.  Originally, I had thought that we
> were safe because cc->free_pfn should be in a online section and
> block_end_pfn should reach cc->free_pfn before the end of zone.  But
> after checking more code and thinking about it again, I found that the
> underlying sections may go offline under us during compaction.  So that,
> cc->free_pfn may be in a offline section or after the end of zone.  So,
> you are right, we need to consider the range of block_end_pfn.
> 
> But, if we thought in this way (memory online/offline at any time), it
> appears that we need to check whether the underlying section was
> offlined.  For example, is it safe to use "pfn_to_page()" in
> "isolate_migratepages_block()"?  Is it possible for the underlying
> section to be offlined under us?

It is possible. There is a previous discussion[1] about the race between 
pfn_to_online_page() and memory offlining.

[1] 
https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-15  3:59     ` Baolin Wang
@ 2023-06-15  7:22       ` Huang, Ying
  2023-06-15  7:46         ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: Huang, Ying @ 2023-06-15  7:22 UTC (permalink / raw)
  To: Baolin Wang; +Cc: Mel Gorman, david, akpm, vbabka, linux-mm, linux-kernel

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> On 6/15/2023 11:22 AM, Huang, Ying wrote:
>> Hi, Mel,
>> Mel Gorman <mgorman@techsingularity.net> writes:
>> 
>>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>>> On some machines, the normal zone can have a large memory hole like
>>>> below memory layout, and we can see the range from 0x100000000 to
>>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>>> scanner can meet the hole and it will take more time to skip the large
>>>> hole. From my measurement, I can see the isolation scanner will take
>>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>>
>>>> So adding a new helper to fast search next online memory section
>>>> to skip the large hole can help to find next suitable pageblock
>>>> efficiently. With this patch, I can see the large hole scanning only
>>>> takes < 1us.
>>>>
>>>> [    0.000000] Zone ranges:
>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>> [    0.000000]   DMA32    empty
>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>> [    0.000000] Movable zone start for each node
>>>> [    0.000000] Early memory node ranges
>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>
>>> This may only be necessary for non-contiguous zones so a check for
>>> zone_contiguous could be made but I suspect the saving, if any, would be
>>> marginal.
>>>
>>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>>> there is no point scanning for a valid online section past cc->free_pfn so
>>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>>> does not end up with an arbitrary value which is a more defensive approach
>>> to any future programming errors.
>> I have thought about this before.  Originally, I had thought that we
>> were safe because cc->free_pfn should be in a online section and
>> block_end_pfn should reach cc->free_pfn before the end of zone.  But
>> after checking more code and thinking about it again, I found that the
>> underlying sections may go offline under us during compaction.  So that,
>> cc->free_pfn may be in a offline section or after the end of zone.  So,
>> you are right, we need to consider the range of block_end_pfn.
>> But, if we thought in this way (memory online/offline at any time),
>> it
>> appears that we need to check whether the underlying section was
>> offlined.  For example, is it safe to use "pfn_to_page()" in
>> "isolate_migratepages_block()"?  Is it possible for the underlying
>> section to be offlined under us?
>
> It is possible. There is a previous discussion[1] about the race
> between pfn_to_online_page() and memory offline.
>
> [1]
> https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399

Thank you very much for sharing!  That answers my questions directly!

Best Regards,
Huang, Ying


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-15  7:22       ` Huang, Ying
@ 2023-06-15  7:46         ` David Hildenbrand
  2023-06-15  8:38           ` Huang, Ying
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2023-06-15  7:46 UTC (permalink / raw)
  To: Huang, Ying, Baolin Wang; +Cc: Mel Gorman, akpm, vbabka, linux-mm, linux-kernel

On 15.06.23 09:22, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> On 6/15/2023 11:22 AM, Huang, Ying wrote:
>>> Hi, Mel,
>>> Mel Gorman <mgorman@techsingularity.net> writes:
>>>
>>>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>>>> On some machines, the normal zone can have a large memory hole like
>>>>> below memory layout, and we can see the range from 0x100000000 to
>>>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>>>> scanner can meet the hole and it will take more time to skip the large
>>>>> hole. From my measurement, I can see the isolation scanner will take
>>>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>>>
>>>>> So adding a new helper to fast search next online memory section
>>>>> to skip the large hole can help to find next suitable pageblock
>>>>> efficiently. With this patch, I can see the large hole scanning only
>>>>> takes < 1us.
>>>>>
>>>>> [    0.000000] Zone ranges:
>>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>>> [    0.000000]   DMA32    empty
>>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>>> [    0.000000] Movable zone start for each node
>>>>> [    0.000000] Early memory node ranges
>>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>>
>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>
>>>> This may only be necessary for non-contiguous zones so a check for
>>>> zone_contiguous could be made but I suspect the saving, if any, would be
>>>> marginal.
>>>>
>>>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>>>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>>>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>>>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>>>> there is no point scanning for a valid online section past cc->free_pfn so
>>>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>>>> does not end up with an arbitrary value which is a more defensive approach
>>>> to any future programming errors.
>>> I have thought about this before.  Originally, I had thought that we
>>> were safe because cc->free_pfn should be in a online section and
>>> block_end_pfn should reach cc->free_pfn before the end of zone.  But
>>> after checking more code and thinking about it again, I found that the
>>> underlying sections may go offline under us during compaction.  So that,
>>> cc->free_pfn may be in a offline section or after the end of zone.  So,
>>> you are right, we need to consider the range of block_end_pfn.
>>> But, if we thought in this way (memory online/offline at any time),
>>> it
>>> appears that we need to check whether the underlying section was
>>> offlined.  For example, is it safe to use "pfn_to_page()" in
>>> "isolate_migratepages_block()"?  Is it possible for the underlying
>>> section to be offlined under us?
>>
>> It is possible. There is a previous discussion[1] about the race
>> between pfn_to_online_page() and memory offline.
>>
>> [1]
>> https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399
> 
> Thank you very much for sharing!  That answers my questions directly!

I remember another discussion (but can't find it) regarding why memory 
compaction can get away without pfn_to_online_page() all over the place. 
The use is limited to __reset_isolation_pfn().

But yes, in theory pfn_to_online_page() can race with memory offlining.

-- 
Cheers,

David / dhildenb



* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-15  7:46         ` David Hildenbrand
@ 2023-06-15  8:38           ` Huang, Ying
  2023-06-15  8:41             ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: Huang, Ying @ 2023-06-15  8:38 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Baolin Wang, Mel Gorman, akpm, vbabka, linux-mm, linux-kernel

David Hildenbrand <david@redhat.com> writes:

> On 15.06.23 09:22, Huang, Ying wrote:
>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>> 
>>> On 6/15/2023 11:22 AM, Huang, Ying wrote:
>>>> Hi, Mel,
>>>> Mel Gorman <mgorman@techsingularity.net> writes:
>>>>
>>>>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>>>>> On some machines, the normal zone can have a large memory hole like
>>>>>> below memory layout, and we can see the range from 0x100000000 to
>>>>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>>>>> scanner can meet the hole and it will take more time to skip the large
>>>>>> hole. From my measurement, I can see the isolation scanner will take
>>>>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>>>>
>>>>>> So adding a new helper to fast search next online memory section
>>>>>> to skip the large hole can help to find next suitable pageblock
>>>>>> efficiently. With this patch, I can see the large hole scanning only
>>>>>> takes < 1us.
>>>>>>
>>>>>> [    0.000000] Zone ranges:
>>>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>>>> [    0.000000]   DMA32    empty
>>>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>>>> [    0.000000] Movable zone start for each node
>>>>>> [    0.000000] Early memory node ranges
>>>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>>>
>>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>
>>>>> This may only be necessary for non-contiguous zones so a check for
>>>>> zone_contiguous could be made but I suspect the saving, if any, would be
>>>>> marginal.
>>>>>
>>>>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>>>>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>>>>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>>>>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>>>>> there is no point scanning for a valid online section past cc->free_pfn so
>>>>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>>>>> does not end up with an arbitrary value which is a more defensive approach
>>>>> to any future programming errors.
>>>> I have thought about this before.  Originally, I had thought that we
>>>> were safe because cc->free_pfn should be in a online section and
>>>> block_end_pfn should reach cc->free_pfn before the end of zone.  But
>>>> after checking more code and thinking about it again, I found that the
>>>> underlying sections may go offline under us during compaction.  So that,
>>>> cc->free_pfn may be in a offline section or after the end of zone.  So,
>>>> you are right, we need to consider the range of block_end_pfn.
>>>> But, if we thought in this way (memory online/offline at any time),
>>>> it
>>>> appears that we need to check whether the underlying section was
>>>> offlined.  For example, is it safe to use "pfn_to_page()" in
>>>> "isolate_migratepages_block()"?  Is it possible for the underlying
>>>> section to be offlined under us?
>>>
>>> It is possible. There is a previous discussion[1] about the race
>>> between pfn_to_online_page() and memory offline.
>>>
>>> [1]
>>> https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399
>> Thank you very much for sharing!  That answers my questions
>> directly!
>
> I remember another discussion (but can't find it) regarding why memory
> compaction can get away without pfn_to_online_page() all over the
> place. The use is limited to __reset_isolation_pfn().

Per my understanding, isolate_migratepages() -> pageblock_pfn_to_page()
will check whether the pageblock is online.  So if the pageblock isn't
offlined afterwards, we can use pfn_to_page().
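
For reference, the online check referred to here lives in
__pageblock_pfn_to_page(); a rough sketch of the relevant part:

static struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
				unsigned long end_pfn, struct zone *zone)
{
	struct page *start_page;

	/* end_pfn is one past the range we are checking */
	end_pfn--;

	if (!pfn_valid(start_pfn) || !pfn_valid(end_pfn))
		return NULL;

	/* Returns NULL if the section backing start_pfn is offline. */
	start_page = pfn_to_online_page(start_pfn);
	if (!start_page)
		return NULL;

	if (page_zone(start_page) != zone)
		return NULL;

	/* ... zone-boundary check against end_pfn elided ... */

	return start_page;
}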

> But yes, in theory pfn_to_online_page() can race with memory offlining.

Thanks for confirmation.

Best Regards,
Huang, Ying


* Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
  2023-06-15  8:38           ` Huang, Ying
@ 2023-06-15  8:41             ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2023-06-15  8:41 UTC (permalink / raw)
  To: Huang, Ying; +Cc: Baolin Wang, Mel Gorman, akpm, vbabka, linux-mm, linux-kernel

On 15.06.23 10:38, Huang, Ying wrote:
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 15.06.23 09:22, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>
>>>> On 6/15/2023 11:22 AM, Huang, Ying wrote:
>>>>> Hi, Mel,
>>>>> Mel Gorman <mgorman@techsingularity.net> writes:
>>>>>
>>>>>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>>>>>> On some machines, the normal zone can have a large memory hole like
>>>>>>> below memory layout, and we can see the range from 0x100000000 to
>>>>>>> 0x1800000000 is a hole. So when isolating some migratable pages, the
>>>>>>> scanner can meet the hole and it will take more time to skip the large
>>>>>>> hole. From my measurement, I can see the isolation scanner will take
>>>>>>> 80us ~ 100us to skip the large hole [0x100000000 - 0x1800000000].
>>>>>>>
>>>>>>> So adding a new helper to fast search next online memory section
>>>>>>> to skip the large hole can help to find next suitable pageblock
>>>>>>> efficiently. With this patch, I can see the large hole scanning only
>>>>>>> takes < 1us.
>>>>>>>
>>>>>>> [    0.000000] Zone ranges:
>>>>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>>>>> [    0.000000]   DMA32    empty
>>>>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>>>>> [    0.000000] Movable zone start for each node
>>>>>>> [    0.000000] Early memory node ranges
>>>>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>>>>
>>>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>>
>>>>>> This may only be necessary for non-contiguous zones so a check for
>>>>>> zone_contiguous could be made but I suspect the saving, if any, would be
>>>>>> marginal.
>>>>>>
>>>>>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>>>>>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>>>>>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>>>>>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>>>>>> there is no point scanning for a valid online section past cc->free_pfn so
>>>>>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>>>>>> does not end up with an arbitrary value which is a more defensive approach
>>>>>> to any future programming errors.
>>>>> I have thought about this before.  Originally, I had thought that we
>>>>> were safe because cc->free_pfn should be in a online section and
>>>>> block_end_pfn should reach cc->free_pfn before the end of zone.  But
>>>>> after checking more code and thinking about it again, I found that the
>>>>> underlying sections may go offline under us during compaction.  So that,
>>>>> cc->free_pfn may be in a offline section or after the end of zone.  So,
>>>>> you are right, we need to consider the range of block_end_pfn.
>>>>> But, if we thought in this way (memory online/offline at any time),
>>>>> it
>>>>> appears that we need to check whether the underlying section was
>>>>> offlined.  For example, is it safe to use "pfn_to_page()" in
>>>>> "isolate_migratepages_block()"?  Is it possible for the underlying
>>>>> section to be offlined under us?
>>>>
>>>> It is possible. There is a previous discussion[1] about the race
>>>> between pfn_to_online_page() and memory offline.
>>>>
>>>> [1]
>>>> https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399
>>> Thank you very much for sharing!  That answers my questions
>>> directly!
>>
>> I remember another discussion (but can't find it) regarding why memory
>> compaction can get away without pfn_to_online_page() all over the
>> place. The use is limited to __reset_isolation_pfn().
> 
> Per my understanding, isolate_migratepages() -> pageblock_pfn_to_page()
> will check whether the pageblock is online.  So if the pageblock isn't
> offlined afterwards, we can use pfn_to_page().

Oh, indeed, that was the magic bit, thanks!

-- 
Cheers,

David / dhildenb

