[v2,4/5] mm/memory_hotplug: Make sure the pfn is aligned to the order when onlining

Message ID 20190814154109.3448-5-david@redhat.com
State In Next
Commit 333628873ff9865b7f74ebce12c2b8b61280a362
Series
  • mm/memory_hotplug: online_pages() cleanups

Commit Message

David Hildenbrand Aug. 14, 2019, 3:41 p.m. UTC
Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
order") assumed that any PFN we get via memory resources is aligned to
MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
check the alignment and fall back to single pages.

Cc: Arun KS <arunks@codeaurora.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 3 +++
 1 file changed, 3 insertions(+)
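
For illustration only (not part of the patch): a minimal userspace sketch of
the loop in online_pages_range() with the proposed fallback. MAX_ORDER and
the example pfn range are assumptions for the demo, and get_order_pages()
stands in for the kernel's get_order(PFN_PHYS(...)).

/*
 * Userspace simulation of the onlining loop with the alignment fallback.
 * Not kernel code; the values below are made up for illustration.
 */
#include <stdio.h>

#define MAX_ORDER	11

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Stand-in for get_order(PFN_PHYS(n)): smallest order covering n pages. */
static int get_order_pages(unsigned long nr_pages)
{
	int order = 0;

	while ((1ul << order) < nr_pages)
		order++;
	return order;
}

static void online_pages_range_sim(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;
	int order;

	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
		order = get_order_pages(end_pfn - pfn);
		if (order > MAX_ORDER - 1)
			order = MAX_ORDER - 1;
		/* __free_pages_core() wants pfns to be aligned to the order */
		if (!IS_ALIGNED(pfn, 1ul << order))
			order = 0;
		printf("free pfn 0x%lx as order %d\n", pfn, order);
	}
}

int main(void)
{
	/*
	 * The start pfn is only aligned to order 3, not to MAX_ORDER - 1
	 * (order 10): the first 8 pages are freed individually, the rest
	 * as a single order-10 block.
	 */
	online_pages_range_sim(0x403f8, 0x40800);
	return 0;
}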

Comments

David Hildenbrand Aug. 14, 2019, 4:09 p.m. UTC | #1
On 14.08.19 17:41, David Hildenbrand wrote:
> Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
> order") assumed that any PFN we get via memory resources is aligned to
> MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
> check the alignment and fall back to single pages.
> 
> Cc: Arun KS <arunks@codeaurora.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/memory_hotplug.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 63b1775f7cf8..f245fb50ba7f 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
>  	 */
>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
> +		/* __free_pages_core() wants pfns to be aligned to the order */
> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
> +			order = 0;
>  		(*online_page_callback)(pfn_to_page(pfn), order);
>  	}
>  
> 

@Michal, if you insist, we can drop this patch. "break first and fix
later" is not part of my DNA :)
Michal Hocko Aug. 14, 2019, 6:32 p.m. UTC | #2
On Wed 14-08-19 18:09:16, David Hildenbrand wrote:
> On 14.08.19 17:41, David Hildenbrand wrote:
> > Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
> > order") assumed that any PFN we get via memory resources is aligned to
> > MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
> > check the alignment and fall back to single pages.
> > 
> > Cc: Arun KS <arunks@codeaurora.org>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Oscar Salvador <osalvador@suse.de>
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: David Hildenbrand <david@redhat.com>
> > ---
> >  mm/memory_hotplug.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 63b1775f7cf8..f245fb50ba7f 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
> >  	 */
> >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
> >  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
> > +		/* __free_pages_core() wants pfns to be aligned to the order */
> > +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
> > +			order = 0;
> >  		(*online_page_callback)(pfn_to_page(pfn), order);
> >  	}
> >  
> > 
> 
> @Michal, if you insist, we can drop this patch. "break first and fix
> later" is not part of my DNA :)

I do not insist, but I have already expressed that I am not a fan of this
change. Also, I think that "break first" is quite an overstatement here.
David Hildenbrand Aug. 14, 2019, 7:04 p.m. UTC | #3
On 14.08.19 20:32, Michal Hocko wrote:
> On Wed 14-08-19 18:09:16, David Hildenbrand wrote:
>> On 14.08.19 17:41, David Hildenbrand wrote:
>>> Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
>>> order") assumed that any PFN we get via memory resources is aligned to
>>> MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
>>> check the alignment and fall back to single pages.
>>>
>>> Cc: Arun KS <arunks@codeaurora.org>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>>> Cc: Dan Williams <dan.j.williams@intel.com>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>  mm/memory_hotplug.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 63b1775f7cf8..f245fb50ba7f 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
>>>  	 */
>>>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
>>>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
>>> +		/* __free_pages_core() wants pfns to be aligned to the order */
>>> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
>>> +			order = 0;
>>>  		(*online_page_callback)(pfn_to_page(pfn), order);
>>>  	}
>>>  
>>>
>>
>> @Michal, if you insist, we can drop this patch. "break first and fix
>> later" is not part of my DNA :)
> 
> I do not insist, but I have already expressed that I am not a fan of this
> change. Also, I think that "break first" is quite an overstatement here.
> 

Well this version is certainly nicer than the previous one. I'll let
Andrew decide if he wants to pick it up or drop it from this series.

Thanks!
Andrew Morton Aug. 14, 2019, 8:56 p.m. UTC | #4
On Wed, 14 Aug 2019 17:41:08 +0200 David Hildenbrand <david@redhat.com> wrote:

> Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
> order") assumed that any PFN we get via memory resources is aligned to
> MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
> check the alignment and fall back to single pages.
> 
> ...
>
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
>  	 */
>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
> +		/* __free_pages_core() wants pfns to be aligned to the order */
> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
> +			order = 0;
>  		(*online_page_callback)(pfn_to_page(pfn), order);
>  	}

We aren't sure if this occurs, but if it does, we silently handle it.

It seems a reasonable defensive thing to do, but should we add a
WARN_ON_ONCE() so that we get to find out about it?  If we get such a
report then we can remove the WARN_ON_ONCE() and add an illuminating
comment.
David Hildenbrand Aug. 14, 2019, 9:47 p.m. UTC | #5
On 14.08.19 22:56, Andrew Morton wrote:
> On Wed, 14 Aug 2019 17:41:08 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>> Commit a9cd410a3d29 ("mm/page_alloc.c: memory hotplug: free pages as higher
>> order") assumed that any PFN we get via memory resources is aligned to
>> MAX_ORDER - 1; I am not convinced that is always true. Let's play it safe:
>> check the alignment and fall back to single pages.
>>
>> ...
>>
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
>>  	 */
>>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
>>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
>> +		/* __free_pages_core() wants pfns to be aligned to the order */
>> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
>> +			order = 0;
>>  		(*online_page_callback)(pfn_to_page(pfn), order);
>>  	}
> 
> We aren't sure if this occurs, but if it does, we silently handle it.
> 
> It seems a reasonable defensive thing to do, but should we add a
> WARN_ON_ONCE() so that we get to find out about it?  If we get such a
> report then we can remove the WARN_ON_ONCE() and add an illuminating
> comment.
> 
> 

Makes sense, do you want to add the WARN_ON_ONCE() or shall I resend?

I was recently thinking about limiting offlining to memory blocks
without holes - then onlining would also only apply to memory blocks
without holes and we could simplify both paths (single zone/node, no
holes), including this check, because we would always have memory
block size alignment. But I am not sure yet if there is a valid use
case for offlining/re-onlining boot memory with holes.
Andrew Morton Aug. 14, 2019, 10:10 p.m. UTC | #6
On Wed, 14 Aug 2019 23:47:24 +0200 David Hildenbrand <david@redhat.com> wrote:

> >> --- a/mm/memory_hotplug.c
> >> +++ b/mm/memory_hotplug.c
> >> @@ -646,6 +646,9 @@ static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
> >>  	 */
> >>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
> >>  		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
> >> +		/* __free_pages_core() wants pfns to be aligned to the order */
> >> +		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
> >> +			order = 0;
> >>  		(*online_page_callback)(pfn_to_page(pfn), order);
> >>  	}
> > 
> > We aren't sure if this occurs, but if it does, we silently handle it.
> > 
> > It seems a reasonable defensive thing to do, but should we add a
> > WARN_ON_ONCE() so that we get to find out about it?  If we get such a
> > report then we can remove the WARN_ON_ONCE() and add an illuminating
> > comment.
> > 
> > 
> 
> Makes sense, do you want to add the WARN_ON_ONCE() or shall I resend?

--- a/mm/memory_hotplug.c~mm-memory_hotplug-make-sure-the-pfn-is-aligned-to-the-order-when-onlining-fix
+++ a/mm/memory_hotplug.c
@@ -647,7 +647,7 @@ static int online_pages_range(unsigned l
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
 		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
 		/* __free_pages_core() wants pfns to be aligned to the order */
-		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
+		if (WARN_ON_ONCE(!IS_ALIGNED(pfn, 1ul << order)))
 			order = 0;
 		(*online_page_callback)(pfn_to_page(pfn), order);
 	}
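
For context: unlike unlikely(), WARN_ON_ONCE() also evaluates to its
condition, so the fixup keeps the order-0 fallback and additionally prints a
one-time warning if the unaligned case is ever hit. Below is a rough
userspace analogue, a sketch only (it assumes GCC statement expressions);
the real macro lives in include/asm-generic/bug.h.

#include <stdbool.h>
#include <stdio.h>

/* Warn the first time the condition is true, but always hand the condition
 * back to the caller so it can keep driving the if (). Sketch only. */
#define WARN_ON_ONCE(cond) ({						\
	static bool __warned;						\
	bool __cond = !!(cond);						\
	if (__cond && !__warned) {					\
		__warned = true;					\
		fprintf(stderr, "WARNING: %s\n", #cond);		\
	}								\
	__cond;								\
})

int main(void)
{
	for (unsigned long pfn = 0; pfn < 6; pfn++) {
		int order = 2;

		/* Same pattern as the fixup: warn once, still fall back. */
		if (WARN_ON_ONCE(pfn & ((1ul << order) - 1)))
			order = 0;
		printf("pfn %lu -> order %d\n", pfn, order);
	}
	return 0;
}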

Patch

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 63b1775f7cf8..f245fb50ba7f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -646,6 +646,9 @@  static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
 	 */
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1ul << order) {
 		order = min(MAX_ORDER - 1, get_order(PFN_PHYS(end_pfn - pfn)));
+		/* __free_pages_core() wants pfns to be aligned to the order */
+		if (unlikely(!IS_ALIGNED(pfn, 1ul << order)))
+			order = 0;
 		(*online_page_callback)(pfn_to_page(pfn), order);
 	}
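
For context on the "__free_pages_core() wants pfns to be aligned to the order"
comment: the buddy allocator locates a block's buddy by flipping a single pfn
bit (__find_buddy_pfn() in mm/internal.h), which only names the right
neighbour when the block is naturally aligned to its order. A minimal sketch
with example pfn values, not taken from the thread:

#include <stdio.h>

/* Same formula as __find_buddy_pfn(): the buddy of an order-N block is
 * found by flipping bit N of the pfn. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1ul << order);
}

int main(void)
{
	/* Naturally aligned order-10 block: the buddy is the neighbouring
	 * 1024-page block, as expected. */
	printf("buddy of 0x40400 (order 10) = 0x%lx\n",
	       find_buddy_pfn(0x40400, 10));

	/* Unaligned start pfn: the computed "buddy" lands in the middle of
	 * another 1024-page block, so freeing/merging such a block would
	 * corrupt the free lists - hence the fallback to order 0. */
	printf("buddy of 0x40200 (order 10) = 0x%lx\n",
	       find_buddy_pfn(0x40200, 10));
	return 0;
}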