linux-mm.kvack.org archive mirror
* [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
@ 2020-09-01 14:49 Li Xinhai
  2020-09-01 15:04 ` Michal Hocko
  2020-09-01 22:04 ` Roman Gushchin
  0 siblings, 2 replies; 5+ messages in thread
From: Li Xinhai @ 2020-09-01 14:49 UTC (permalink / raw)
  To: linux-mm; +Cc: akpm, Roman Gushchin, Mike Kravetz, Michal Hocko

Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
hugepages using cma"), a gigantic page may be allocated from a node other
than the preferred node, even though pages are available on the preferred
node. The reason is that the nid parameter is ignored in
alloc_gigantic_page().

Besides, __GFP_THISNODE must also be checked, so that the allocation
comes only from the preferred node when the caller requires it.

After this patch, the preferred node is tried first, before the other
allowed nodes, and no other node is tried at all when __GFP_THISNODE is
specified.
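
To illustrate the resulting node-selection order, here is a small
standalone model of the logic (plain userspace C; has_cma[], try_alloc()
and pick_node() are made-up stand-ins for hugetlb_cma[], cma_alloc() and
the patched function, only to show the ordering; the real change is in
the hunk below):

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_NODES	4
	#define NUMA_NO_NODE	(-1)

	/* stand-in for hugetlb_cma[]: nodes that have a CMA area */
	static bool has_cma[MAX_NODES] = { true, false, true, true };

	/* stand-in for cma_alloc(): pretend only node 2 has free pages */
	static bool try_alloc(int node)
	{
		printf("  trying node %d\n", node);
		return node == 2;
	}

	/* mirrors the allocation order of the patched function */
	static int pick_node(int nid, bool thisnode)
	{
		int node;

		/* preferred node first */
		if (nid != NUMA_NO_NODE && has_cma[nid] && try_alloc(nid))
			return nid;

		/* __GFP_THISNODE forbids any fallback */
		if (thisnode)
			return -1;

		/* then the other allowed nodes */
		for (node = 0; node < MAX_NODES; node++) {
			if (node == nid || !has_cma[node])
				continue;
			if (try_alloc(node))
				return node;
		}
		return -1;
	}

	int main(void)
	{
		/* prefers node 0, falls back and succeeds on node 2 */
		printf("fallback allowed -> node %d\n", pick_node(0, false));
		/* __GFP_THISNODE: fails without trying other nodes */
		printf("__GFP_THISNODE   -> node %d\n", pick_node(0, true));
		return 0;
	}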

Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Cc: Roman Gushchin <guro@fb.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
---
v1->v2:
Per review from Mike and Michal, also check __GFP_THISNODE to avoid
allocating from other nodes.

 mm/hugetlb.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a301c2d672bf..d24986145087 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					 huge_page_order(h), true);
+		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
 		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
+		}
 	}
 #endif
 
-- 
2.18.4




* Re: [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
  2020-09-01 14:49 [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma Li Xinhai
@ 2020-09-01 15:04 ` Michal Hocko
  2020-09-01 18:43   ` Mike Kravetz
  2020-09-01 22:04 ` Roman Gushchin
  1 sibling, 1 reply; 5+ messages in thread
From: Michal Hocko @ 2020-09-01 15:04 UTC (permalink / raw)
  To: Li Xinhai; +Cc: linux-mm, akpm, Roman Gushchin, Mike Kravetz

On Tue 01-09-20 22:49:24, Li Xinhai wrote:
> Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
> hugepages using cma"), a gigantic page may be allocated from a node other
> than the preferred node, even though pages are available on the preferred
> node. The reason is that the nid parameter is ignored in
> alloc_gigantic_page().
>
> Besides, __GFP_THISNODE must also be checked, so that the allocation
> comes only from the preferred node when the caller requires it.
>
> After this patch, the preferred node is tried first, before the other
> allowed nodes, and no other node is tried at all when __GFP_THISNODE is
> specified.
> 
> Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>

LGTM
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
> v1->v2:
> Per review from Mike and Michal, also check __GFP_THISNODE to avoid
> allocating from other nodes.
> 
>  mm/hugetlb.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a301c2d672bf..d24986145087 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  		struct page *page;
>  		int node;
>  
> -		for_each_node_mask(node, *nodemask) {
> -			if (!hugetlb_cma[node])
> -				continue;
> -
> -			page = cma_alloc(hugetlb_cma[node], nr_pages,
> -					 huge_page_order(h), true);
> +		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
> +					huge_page_order(h), true);
>  			if (page)
>  				return page;
>  		}
> +
> +		if (!(gfp_mask & __GFP_THISNODE)) {
> +			for_each_node_mask(node, *nodemask) {
> +				if (node == nid || !hugetlb_cma[node])
> +					continue;
> +
> +				page = cma_alloc(hugetlb_cma[node], nr_pages,
> +						huge_page_order(h), true);
> +				if (page)
> +					return page;
> +			}
> +		}
>  	}
>  #endif
>  
> -- 
> 2.18.4

-- 
Michal Hocko
SUSE Labs



* Re: [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
  2020-09-01 15:04 ` Michal Hocko
@ 2020-09-01 18:43   ` Mike Kravetz
  2020-09-02  2:12     ` Li Xinhai
  0 siblings, 1 reply; 5+ messages in thread
From: Mike Kravetz @ 2020-09-01 18:43 UTC (permalink / raw)
  To: Michal Hocko, Li Xinhai; +Cc: linux-mm, akpm, Roman Gushchin

On 9/1/20 8:04 AM, Michal Hocko wrote:
> On Tue 01-09-20 22:49:24, Li Xinhai wrote:
>> Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
>> hugepages using cma"), a gigantic page may be allocated from a node other
>> than the preferred node, even though pages are available on the preferred
>> node. The reason is that the nid parameter is ignored in
>> alloc_gigantic_page().
>>
>> Besides, __GFP_THISNODE must also be checked, so that the allocation
>> comes only from the preferred node when the caller requires it.
>>
>> After this patch, the preferred node is tried first, before the other
>> allowed nodes, and no other node is tried at all when __GFP_THISNODE is
>> specified.
>>
>> Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
>> Cc: Roman Gushchin <guro@fb.com>
>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
> 
> LGTM
> Acked-by: Michal Hocko <mhocko@suse.com>

Thank you both for the updates!

>> ---
>> v1->v2:
>> Per review from Mike and Michal, also check __GFP_THISNODE to avoid
>> allocating from other nodes.
>>
>>  mm/hugetlb.c | 21 +++++++++++++++------
>>  1 file changed, 15 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index a301c2d672bf..d24986145087 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>>  		struct page *page;
>>  		int node;
>>  
>> -		for_each_node_mask(node, *nodemask) {
>> -			if (!hugetlb_cma[node])
>> -				continue;
>> -
>> -			page = cma_alloc(hugetlb_cma[node], nr_pages,
>> -					 huge_page_order(h), true);
>> +		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
>> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
>> +					huge_page_order(h), true);

I missed the NUMA_NO_NODE issue yesterday, but thought about it a bit today.
As Michal says, we do not call into alloc_gigantic_page with
'nid == NUMA_NO_NODE' today, but we should handle it correctly.

Other places in the hugetlb code such as alloc_buddy_huge_page and even the
low level interface alloc_pages_node have code as follows:

	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

this attempts the allocation on the current node first if NUMA_NO_NODE is
specified.  Of course, it falls back to other nodes allowed in the mask.
If we are adding the code to interpret NUMA_NO_NODE, I suggest we make this
type of change as well.  This would simply be added at the beginning of
alloc_gigantic_page to handle the non-CMA case as well.  Suggestion for an
updated patch below.

-- 
Mike Kravetz


diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a301c2d672bf..98dc44a602b4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1251,20 +1251,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
 
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
 #ifdef CONFIG_CMA
 	{
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					 huge_page_order(h), true);
+		if (hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
 		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
+		}
 	}
 #endif
 



* Re: [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
  2020-09-01 14:49 [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma Li Xinhai
  2020-09-01 15:04 ` Michal Hocko
@ 2020-09-01 22:04 ` Roman Gushchin
  1 sibling, 0 replies; 5+ messages in thread
From: Roman Gushchin @ 2020-09-01 22:04 UTC (permalink / raw)
  To: Li Xinhai; +Cc: linux-mm, akpm, Mike Kravetz, Michal Hocko

On Tue, Sep 01, 2020 at 10:49:24PM +0800, Li Xinhai wrote:
> Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
> hugepages using cma"), a gigantic page may be allocated from a node other
> than the preferred node, even though pages are available on the preferred
> node. The reason is that the nid parameter is ignored in
> alloc_gigantic_page().
>
> Besides, __GFP_THISNODE must also be checked, so that the allocation
> comes only from the preferred node when the caller requires it.
>
> After this patch, the preferred node is tried first, before the other
> allowed nodes, and no other node is tried at all when __GFP_THISNODE is
> specified.
> 
> Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
> ---
> v1->v2:
> Per review from Mike and Michal, also check __GFP_THISNODE to avoid
> allocating from other nodes.

Acked-by: Roman Gushchin <guro@fb.com>

Thank you!

> 
>  mm/hugetlb.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a301c2d672bf..d24986145087 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  		struct page *page;
>  		int node;
>  
> -		for_each_node_mask(node, *nodemask) {
> -			if (!hugetlb_cma[node])
> -				continue;
> -
> -			page = cma_alloc(hugetlb_cma[node], nr_pages,
> -					 huge_page_order(h), true);
> +		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
> +					huge_page_order(h), true);
>  			if (page)
>  				return page;
>  		}
> +
> +		if (!(gfp_mask & __GFP_THISNODE)) {
> +			for_each_node_mask(node, *nodemask) {
> +				if (node == nid || !hugetlb_cma[node])
> +					continue;
> +
> +				page = cma_alloc(hugetlb_cma[node], nr_pages,
> +						huge_page_order(h), true);
> +				if (page)
> +					return page;
> +			}
> +		}
>  	}
>  #endif
>  
> -- 
> 2.18.4
> 



* Re: [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
  2020-09-01 18:43   ` Mike Kravetz
@ 2020-09-02  2:12     ` Li Xinhai
  0 siblings, 0 replies; 5+ messages in thread
From: Li Xinhai @ 2020-09-02  2:12 UTC (permalink / raw)
  To: Mike Kravetz, mhocko; +Cc: linux-mm, akpm, guro

On 2020-09-02 at 02:43 Mike Kravetz wrote:
>On 9/1/20 8:04 AM, Michal Hocko wrote:
>> On Tue 01-09-20 22:49:24, Li Xinhai wrote:
>>> Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
>>> hugepages using cma"), a gigantic page may be allocated from a node other
>>> than the preferred node, even though pages are available on the preferred
>>> node. The reason is that the nid parameter is ignored in
>>> alloc_gigantic_page().
>>>
>>> Besides, __GFP_THISNODE must also be checked, so that the allocation
>>> comes only from the preferred node when the caller requires it.
>>>
>>> After this patch, the preferred node is tried first, before the other
>>> allowed nodes, and no other node is tried at all when __GFP_THISNODE is
>>> specified.
>>>
>>> Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
>>> Cc: Roman Gushchin <guro@fb.com>
>>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>>> Cc: Michal Hocko <mhocko@kernel.org>
>>> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
>>
>> LGTM
>> Acked-by: Michal Hocko <mhocko@suse.com>
>
>Thank you both for the updates!
>
>>> ---
>>> v1->v2:
>>> Per review from Mike and Michal, also check __GFP_THISNODE to avoid
>>> allocating from other nodes.
>>>
>>>  mm/hugetlb.c | 21 +++++++++++++++------
>>>  1 file changed, 15 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index a301c2d672bf..d24986145087 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>>>  		struct page *page;
>>>  		int node;
>>> 
>>> -		for_each_node_mask(node, *nodemask) {
>>> -			if (!hugetlb_cma[node])
>>> -				continue;
>>> -
>>> -			page = cma_alloc(hugetlb_cma[node], nr_pages,
>>> -					 huge_page_order(h), true);
>>> +		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
>>> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
>>> +					huge_page_order(h), true);
>
>I missed the NUMA_NO_NODE issue yesterday, but thought about it a bit today.
>As Michal says, we do not call into alloc_gigantic_page with
>'nid == NUMA_NO_NODE' today, but we should handle it correctly.
>
>Other places in the hugetlb code such as alloc_buddy_huge_page and even the
>low level interface alloc_pages_node have code as follows:
>
>	if (nid == NUMA_NO_NODE)
>		nid = numa_mem_id();
>
>this attempts the allocation on the current node first if NUMA_NO_NODE is
>specified.  Of course, it falls back to other nodes allowed in the mask.
>If we are adding the code to interpret NUMA_NO_NODE, I suggest we make this
>type of change as well.  This would simply be added at the beginning of
>alloc_gigantic_page to handle the non-CMA case as well.  Suggestion for an
>updated patch below.
> 
It looks good to me: it ensures the same NUMA_NO_NODE behavior whether a
gigantic page is allocated from CMA or non-CMA, and matches what the
buddy allocator already does for non-gigantic pages.

I will send a formal V3 patch with updated commit message.
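
For reference, a rough way to exercise both paths from userspace once
this lands (assuming a kernel booted with a hugetlb_cma= reservation on
the relevant nodes and a 1GB hstate; the paths are the standard hugetlb
sysfs interface):

	# per-node request: pages should come from node 1 only
	echo 2 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

	# global request: pages are spread across the allowed nodes
	echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages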

>--
>Mike Kravetz
>
>
>diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>index a301c2d672bf..98dc44a602b4 100644
>--- a/mm/hugetlb.c
>+++ b/mm/hugetlb.c
>@@ -1251,20 +1251,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
> {
> 	unsigned long nr_pages = 1UL << huge_page_order(h);
>
>+	if (nid == NUMA_NO_NODE)
>+		nid = numa_mem_id();
>+
> #ifdef CONFIG_CMA
> 	{
> 		struct page *page;
> 		int node;
>
>-		for_each_node_mask(node, *nodemask) {
>-			if (!hugetlb_cma[node])
>-				continue;
>-
>-			page = cma_alloc(hugetlb_cma[node], nr_pages,
>-					 huge_page_order(h), true);
>+		if (hugetlb_cma[nid]) {
>+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
>+					huge_page_order(h), true);
> 			if (page)
> 				return page;
> 		}
>+
>+		if (!(gfp_mask & __GFP_THISNODE)) {
>+			for_each_node_mask(node, *nodemask) {
>+				if (node == nid || !hugetlb_cma[node])
>+					continue;
>+
>+				page = cma_alloc(hugetlb_cma[node], nr_pages,
>+						huge_page_order(h), true);
>+				if (page)
>+					return page;
>+			}
>+		}
> 	}
> #endif
>


Thread overview: 5+ messages
2020-09-01 14:49 [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma Li Xinhai
2020-09-01 15:04 ` Michal Hocko
2020-09-01 18:43   ` Mike Kravetz
2020-09-02  2:12     ` Li Xinhai
2020-09-01 22:04 ` Roman Gushchin
